1929Hotelling - Stability in Competition
nominal rigidities and the dynamic effects of a shock to monetary policy
Nominal Rigidities and the Dynamic Effects of a Shockto Monetary Policy∗Lawrence J.Christiano†Martin Eichenbaum‡Charles L.Evans§August27,2003AbstractWe present a model embodying moderate amounts of nominal rigidities that ac-counts for the observed inertia in inflation and persistence in output.The key featuresof our model are those that prevent a sharp rise in marginal costs after an expansion-ary shock to monetary policy.Of these features,the most important are staggeredwage contracts which have an average duration of three quarters,and variable capitalutilization.JEL:E3,E4,E5∗Thefirst two authors are grateful for thefinancial support of a National Science Foundation grant to the National Bureau of Economic Research.We would like to acknowledge helpful comments from Lars Hansen and Mark Watson.We particularly want to thank Levon Barseghyan for his superb research assistance,as well as his insightful comments on various drafts of the paper.This paper does not necessarily reflect the views of the Federal Reserve Bank of Chicago or the Federal Reserve System.†Northwestern University,National Bureau of Economic Research,and Federal Reserve Banks of Chicago and Cleveland.‡Northwestern University,National Bureau of Economic Research,and Federal Reserve Bank of Chicago.§Federal Reserve Bank of Chicago.1.IntroductionThis paper seeks to understand the observed inertial behavior of inflation and persistence in aggregate quantities.To this end,we formulate and estimate a dynamic,general equilibrium model that incorporates staggered wage and price contracts.We use our model to investigate what mix of frictions can account for the evidence of inertia and persistence.For this exercise to be well defined,we must characterize inertia and persistence precisely.We do so using estimates of the dynamic response of inflation and aggregate variables to a monetary policy shock.With this characterization,the question that we ask reduces to:‘Can models with moderate degrees of nominal rigidities generate inertial inflation and persistent output movements in response to a monetary policy shock?’1Our answer to this question is,‘yes’.The model that we construct has two key features.First,it embeds Calvo style nominal price and wage contracts.Second,the real side of the model incorporates four departures from the standard textbook one sector dynamic stochastic growth model.These depar-tures are motivated by recent research on the determinants of consumption,asset prices, investment and productivity.The specific departures that we include are habit formation in preferences for consumption,adjustment costs in investment and variable capital utilization. In addition,we assume thatfirms must borrow working capital tofinance their wage bill.Our keyfindings are as follows.First,the average duration of price and wage contracts in the estimated model is roughly2and3quarters,respectively.Despite the modest nature of these nominal rigidities,the model does a very good job of accounting quantitatively for the estimated response of the US economy to a policy shock.In addition to reproducing the dynamic response of inflation and output,the model also accounts for the delayed,hump-shaped response in consumption,investment,profits,productivity and the weak response of the real wage.2Second,the critical nominal friction in our model is wage contracts,not price contracts.A version of the model with only nominal wage rigidities does almost as well as the estimated model.In contrast,with only nominal price rigidities,the model performs very poorly.Consistent with existing results in the literature,this version of the model cannot generate persistent movements in output unless we assume price contracts of extremely long duration.The model with only nominal wage rigidities does not have this problem.Third,we document how inference about nominal rigidities varies across different spec-ifications of the real side of our model.3Estimated versions of the model that do not in-1This question that is the focus of a large and growing literature.See,for example,Chari,Kehoe and McGrattan(2000),Mankiw(2001),Rotemberg and Woodford(1999)and the references therein.2In related work,Sbordone(2000)argues that,taking as given aggregate real variables,a model with staggered wages and prices does well at accounting for the time series properties of wages and prices.See also Ambler,Guay and Phaneuf(1999)and Huang and Liu(2002)for interesting work on the role of wage contracts.3For early discussions about the impact of real frictions on the effects of nominal rigidities,see Blanchardcorporate our departures from the standard growth model imply implausibly long price and wage contracts.Fourth,wefind that if one only wants to generate inertia in inflation and persistence in output with moderate wage and price stickiness,then it is crucial to allow for variable capital utilization.To understand why this feature is so important,note that in our modelfirms set prices as a markup over marginal costs.The major components of marginal costs are wages and the rental rate of capital.By allowing the services of capital to increase after a positive monetary policy shock,variable capital utilization helps dampen the large rise in the rental rate of capital that would otherwise occur.This in turn dampens the rise in marginal costs and,hence,prices.The resulting inertia in inflation implies that the rise in nominal spending that occurs after a positive monetary policy shock produces a persistent rise in real output.Similar intuition explains why sticky wages play a critical role in allowing our model to explain inflation inertia and output persistence.It also explains why our assumption about working capital plays a useful role:other things equal,a decline in the interest rate lowers marginal cost.Fifth,although investment adjustment costs and habit formation do not play a central role with respect to inflation inertia and output persistence,they do play a critical role in accounting for the dynamics of other variables.Sixth,the major role played by the working capital channel is to reduce the model’s reliance on sticky prices.Specifically,if we estimate a version of the model that does not allow for this channel,the average duration of price contracts increases dramatically.Finally,wefind that our model embodies strong internal propagation mechanisms.The impact of a monetary policy shock on aggregate activity continues to grow and persist even beyond the time when the typical contract in place at the time of the shock is reoptimized.In addition,the effects persist well beyond the effects of the shock on the interest rate and the growth rate of money.We pursue a particular limited information econometric strategy to estimate and evaluate our model.To implement this strategy wefirst estimate the impulse response of eight key macroeconomic variables to a monetary policy shock using an identified vector autoregres-sion(V AR).We then choose six model parameters to minimize the difference between the estimated impulse response functions and the analogous objects in our model.4 The remainder of this paper is organized as follows.In section2we briefly describe our estimates of how the U.S.economy responds to a monetary policy shock.Section3 displays our economic model.In Section4we discuss our econometric methodology.Our empirical results are reported in Section5and analyzed in Section6.Concluding commentsand Fisher(1989),Ball and Romer(1990)and Romer(1996).For more recent quantitative discussions, see Chari,Kehoe and McGrattan(2000),Edge(2000),Fuhrer(2000),Kiley(1997),McCallum and Nelson (1998)and Sims(1998).4Christiano,Eichenbaum and Evans(1998),Edge(2000)and Rotemberg and Woodford(1997)have also applied this strategy in the context of monetary policy shocks.are contained in Section7.2.The Consequences of a Monetary Policy ShockThis section begins by describing how we estimate a monetary policy shock.We then re-port estimates of how major macroeconomic variables respond to a monetary policy shock. Finally,we report the fraction of the variance in these variables that is accounted for by monetary policy shocks.The starting point of our analysis is the following characterization of monetary policy:R t=f(Ωt)+εt.(2.1)Here,R t is the Federal Funds rate,f is a linear function,Ωt is an information set,andεt is the monetary policy shock.We assume that the Fed allows money growth to be whatever is necessary to guarantee that(2.1)holds.Our basic identifying assumption is thatεt is orthogonal to the elements inΩt.Below,we describe the variables inΩt and elaborate on the interpretation of this orthogonality assumption.We now discuss how we estimate the dynamic response of key macroecomomic variables to a monetary policy shock.Let Y t denote the vector of variables included in the analysis. We partition Y t as follows:Y t=[Y1t,R t,Y2t]0.The vector Y1t is composed of the variables whose time t elements are contained inΩt,and are assumed not to respond contemporaneously to a monetary policy shock.The vector Y2t consists of the time t values of all the other variables inΩt.The variables in Y1t are real GDP, real consumption,the GDP deflator,real investment,the real wage,and labor productivity. The variables in Y2t are real profits and the growth rate of M2.All these variables,except money growth,have been logged.We measure the interest rate,R t,using the Federal Funds rate.The data sources are in an appendix,available from the authors.With one exception(the growth rate of money)all the variables in Y t are include in levels.Altig,Christiano,Eichenbaum and Linde(2003)adopt an alternative specification of Y t,in which cointegrating relationships among the variables are imposed.For example,the growth rate of GDP and the log difference between labor productivity and the real wage are included.The key properties of the impulse responses to a monetary policy shock are insensitive to this alternative specification.The ordering of the variables in Y t embodies two key identifying assumptions.First,the variables in Y1t do not respond contemporaneously to a monetary policy shock.Second,the time t information set of the monetary authority consists of current and lagged values of the variables in Y1t and only past values of the variables in Y2t.Our decision to include all variables,except for the growth rate of M2and real profits in Y1t,reflects a long-standing view that macroeconomic variables do not respond instanta-neously to policy shocks(see Friedman(1968)).We refer the reader to Christiano,Eichen-baum and Evans(1999)for a discussion of sensitivity of inference to alternative assumptions about the variables included in Y1t.While our assumptions are certainly debatable,the anal-ysis is internally consistent in the sense that we make the same assumptions in our economic model.To maintain consistency with the model,we place profits and the growth rate of money in Y2t.The V AR contains4lags of each variable and the sample period is1965Q3-1995Q3.5 Ignoring the constant term,the V AR can be written as follows:Y t=A1Y t−1+...+A4Y t−4+Cηt,(2.2)where C is a9×9lower triangular matrix with diagonal terms equal to unity,andηt is a9−dimensional vector of zero-mean,serially uncorrelated shocks with diagonal variance-covariance matrix.Since there are six variables in Y1t,the monetary policy shock,εt,is the7th element ofηt.A positive shock toεt corresponds to a contractionary monetary policy shock. We estimate the parameters-A i,i=1,...,4,C,and the variances of the elements ofηt-using standard least squares ing these estimates,we compute the dynamic path of Y t following a one-standard-deviation shock inεt,setting initial conditions to zero.This path, which corresponds to the coefficients in the impulse response functions of interest,is invariant to the ordering of the variables within Y1t and within Y2t(see Christiano,Eichenbaum and Evans(1999).)The impulse response functions of all variables in Y t are displayed in Figure1.Lines marked‘+’correspond to the point estimates.The shaded areas indicate95%confidence intervals about the point estimates.6The solid lines pertain to the properties of our structural model,which will be discussed in section3.The results suggest that after an expansionary monetary policy shock there is a:•hump-shaped response of output,consumption and investment,with the peak effect occurring after about1.5years and returning to their pre-shock levels after about three years,•hump-shaped response in inflation,with a peak response after about2years,•fall in the interest rate for roughly one year,•rise in profits,real wages and labor productivity,and5This sample period is the same as in Christiano,Eichenbaum and Evans(1999).6We use the method described in Sims and Zha(1999).•an immediate rise in the growth rate of money.Interestingly,these results are consistent with the claims in Friedman(1968).For example, Friedman argued that an exogenous increase in the money supply leads to a drop in the interest rate that lasts one to two years,and a rise in output and employment that lasts from two tofive years.Finally,the robustness of the qualitative features of ourfindings to alternative identifying assumptions and sample sub-periods,as well as the use of monthly data,is discussed in Christiano,Eichenbaum and Evans(1999).Our strategy for estimating the parameters of our model focuses on only a component of thefluctuations in the data,namely the portion that is due to a monetary policy shock. It is natural to ask how large that component is,since ultimately we are interested in a model that can account for the variation in the data.With this question in mind,the following table reports variance decompositions.In particular,it displays the percent of the variance of the k−step forecast error in the elements of Y t due to monetary policy shocks, for k=4,8and20.Numbers in parentheses are the boundaries of the associated95% confidence interval.7Notice that policy shocks account for only a small fraction of inflation. At the same time,with the exception of real wages,monetary policy shocks account for a non-trivial fraction of the variation in the real variables.This last inference should be treated with caution.The confidence intervals about the point estimates are rather large. Also,while the impulse response functions are robust to the various perturbations discussed in Christiano,Eichenbaum and Evans(1999)and Altig,Christiano,Eichenbaum and Linde (2003),the variance decompositions can be sensitive.For example,the analogous point estimates reported in Altig,Christiano,Eichenbaum and Linde(2003)are substantially smaller than those reported in Table1.3.The Model EconomyIn this section we describe our model economy and display the problems solved byfirms and households.In addition,we describe the behavior offinancial intermediaries and the monetary andfiscal authorities.The only source of uncertainty in the model is a shock to monetary policy.7These confidence intervals are computed based on bootstrap simulations of the estimated VAR.In each artificial data set we computed the variance decompositions corresponding to the ones in Table1.The lower and upper bounds of the confidence intervals correspond to the2.5and97.5percentiles of simulated variance decompositions.3.1.Final Good FirmsAt time t,afinal consumption good,Y t,is produced by a perfectly competitive,representative firm.Thefirm produces thefinal good by combining a continuum of intermediate goods, indexed by j∈[0,1],using the technologyY t=·Z10Y jt1f dj¸λf(3.1) where1≤λf<∞and Y jt denotes the time t input of intermediate good j.Thefirm takes its output price,P t,and its input prices,P jt,as given and beyond its control.Profit maximization implies the Euler equationµP t jt¶λfλf−1=Y jt t.(3.2)Integrating(3.2)and imposing(3.1),we obtain the following relationship between the price of thefinal good and the price of the intermediate good:P t=·Z10P11−λf jt dj¸(1−λf).(3.3) 3.2.Intermediate Good FirmsIntermediate good j∈(0,1)is produced by a monopolist who uses the following technology:Y jt=½kαjt L1−αjt−φif kαjt L1−αjt≥φ0otherwise(3.4) where0<α<1.Here,L jt and k jt denote time t labor and capital services used to produce the j th intermediate good.Also,φ>0denotes thefixed cost of production.We rule out entry and exit into the production of intermediate good j.Intermediatefirms rent capital and labor in perfectly competitive factor markets.Profits are distributed to households at the end of each time period.Let R k t and W t denote the nominal rental rate on capital services and the wage rate,respectively.Workers must be paid in advance of production.As a result,the j thfirm must borrow its wage bill,W t L jt, from thefinancial intermediary at the beginning of the period.Repayment occurs at the end of time period t at the gross interest rate,R t.Thefirm’s real marginal cost iss t=∂S t(Y)∂Y,where S t(Y)=mink,l©r k t k+w t R t l,Y given by(3.4)ª,where r k t=R k t/P t and w t=W t/P t.Given our functional forms,we haves t=µ1¶1−αµ1¶α¡r k t¢α(w t R t)1−α.(3.5)Apart fromfixed costs,thefirm’s time t profits are:·P jt P t−s t¸P t Y jt,where P jt isfirm j’s price.We assume thatfirms set prices according to a variant of the mechanism spelled out in Calvo(1983).This model has been widely used to characterize price-setting frictions.A useful feature of the model is that it can be solved without explicitly tracking the distribution of prices acrossfirms.In each period,afirm faces a constant probability,1−ξp,of being able to reoptimize its nominal price.The ability to reoptimize its price is independent across firms and time.If afirm can reoptimize its price,it does so before the realization of the time t growth rate of money.Firms that cannot reoptimize their price simply index to lagged inflation:P jt=πt−1P j,t−1.(3.6) Here,πt=P t/P t−1.We refer to this price-setting rule as lagged inflation indexation.Let˜P t denote the value of P jt set by afirm that can reoptimize at time t.Our notation does not allow˜P t to depend on j.We do this in anticipation of the well known result that, in models like ours,allfirms who can reoptimize their price at time t choose the same price (see Woodford,1996and Yun,1996).Thefirm chooses˜P t to maximize:∞X l=0¡βξp¢lυt+l h˜P t X tl−s t+l P t+l i Y j,t+l,(3.7)E t−1subject to(3.2),(3.5)and.(3.8)X tl=½πt×πt+1×···×πt+l−1for l≥11l=0In(3.7),υt is the marginal value of a dollar to the household,which is treated as exogenous by thefiter,we show that the value of a dollar,in utility terms,is constant across households.Also,E t−1denotes the expectations operator conditioned on lagged growth rates of money,µt−l,l≥1.This specification of the information set captures our assumption that thefirm chooses˜P t before the realization of the time t growth rate of money.To understand (3.7),note that˜P t influencesfirm j’s profits only as long as it cannot reoptimize its price.The probability that this happens for l periods is¡ξp¢l,in which case P j,t+l=˜P t X tl.The presence of¡ξp¢l in(3.7)has the effect of isolating future realizations of idiosyncratic uncertainty in which˜P t continues to affect thefirm’s profits.3.3.HouseholdsThere is a continuum of households,indexed by j∈(0,1).The j th household makes a sequence of decisions during each period.First,it makes its consumption decision,its capital accumulation decision,and it decides how many units of capital services to supply.Second, it purchases securities whose payoffs are contingent upon whether it can reoptimize its wage decision.Third,it sets its wage rate afterfinding out whether it can reoptimize or not. Fourth,it receives a lump-sum transfer from the monetary authority.Finally,it decides how much of itsfinancial assets to hold in the form of deposits with afinancial intermediary and how much to hold in the form of cash.Since the uncertainty faced by the household over whether it can reoptimize its wage is idiosyncratic in nature,households work different amounts and earn different wage rates.So, in principle,they are also heterogeneous with respect to consumption and asset holdings.A straightforward extension of arguments in Erceg,Henderson and Levin(2000)and Woodford (1996)establish that the existence of state contingent securities ensures that,in equilibrium, households are homogeneous with respect to consumption and asset holdings.Reflecting this result,our notation assumes that households are homogeneous with respect to consumption and asset holdings but heterogeneous with respect to the wage rate that they earn and hours worked.The preferences of the j th household are given by:∞X l=0βl−t[u(c t+l−bc t+l−1)−z(h j,t+l)+v(q t+l)].(3.9)E j t−1Here,E j t−1is the expectation operator,conditional on aggregate and household j idiosyn-cratic information up to,and including,time t−1;c t denotes time t consumption;h jt denotes time t hours worked;q t≡Q t/P t denotes real cash balances;Q t denotes nominal cash balances.When b>0,(3.9)allows for habit formation in consumption preferences.The household’s asset evolution equation is given by:M t+1=R t[M t−Q t+(µt−1)M a t]+A j,t+Q t+W j,t h j,t(3.10)+R k t u t¯k t+D t−P t¡i t+c t+a(u t)¯k t¢.Here,M t is the household’s beginning of period t stock of money and W j,t h j,t is time t labor income.In addition,¯k t,D t and A j,t denote,respectively,the physical stock of capital,firm profits and the net cash inflow from participating in state-contingent securities at time t. The variableµt represents the gross growth rate of the economy-wide per capita stock of money,M a t.The quantity(µt−1)M a t is a lump-sum payment made to households by the monetary authority.The quantity M t−P t q t+(µt−1)M a t,is deposited by the household with afinancial intermediary where it earns the gross nominal rate of interest,R t.The remaining terms in(3.10),aside from P t c t,pertain to the stock of installed capital, which we assume is owned by the household.The household’s stock of physical capital,¯k t, evolves according to:¯k=(1−δ)¯k t+F(i t,i t−1).(3.11)t+1Here,δdenotes the physical rate of depreciation and i t denotes time t purchases of invest-ment goods.The function,F,summarizes the technology that transforms current and past investment into installed capital for use in the following period.We discuss the properties of F below.Capital services,k t,are related to the physical stock of capital byk t=u t¯k t.Here,u t denotes the utilization rate of capital,which we assume is set by the household.8 In(3.10),R k t u t¯k t represents the household’s earnings from supplying capital services.The increasing,convex function a(u t)¯k t denotes the cost,in units of consumption goods,of setting the utilization rate to u t.3.4.The W age DecisionAs in Erceg,Henderson and Levin(2000),we assume that the household is a monopoly sup-plier of a differentiated labor service,h jt.It sells this service to a representative,competitive firm that transforms it into an aggregate labor input,L t,using the following technology:L t=·Z10h1λjt dj¸λw.The demand curve for h jt is given by:h jt=µW t jt¶λwλw−1L t,1≤λw<∞.(3.12) Here,W t is the aggregate wage rate,i.e.,the price of L t.It is straightforward to show that W t is related to W jt via the relationship:W t=·Z10(W jt)11−λw dj¸1−λw.(3.13) The household takes L t and W t as given.8Our assumption that households make the capital accumulation and utilization decisions is a matter of convenience.At the cost of a more complicated notation,we could work with an alternative decentralization scheme in whichfirms make these decisions.Households set their wage rate according to a variant of the mechanism used to model price setting byfirms.In each period,a household faces a constant probability,1−ξw, of being able to reoptimize its nominal wage.The ability to reoptimize is independent across households and time.If a household cannot reoptimize its wage at time t,it sets W jt according to:W j,t=πt−1W j,t−1.(3.14) 3.5.Monetary and Fiscal PolicyWe assume that monetary policy is given by:µt=µ+θ0εt+θ1εt−1+θ2εt−2+...(3.15)Here,µdenotes the mean growth rate of money andθj is the response of E tµt+j to a time t monetary policy shock.We assume that the government has access to lump sum taxes and pursues a Ricardianfiscal policy.Under this type of policy,the details of tax policy have no impact on inflation and other aggregate economic variables.As a result,we need not specify the details offiscal policy.93.6.Loan Market Clearing,Final Goods Clearing and EquilibriumFinancial intermediaries receive M t−Q t from households and a transfer,(µt−1)M t from the monetary authority.Our notation here reflects the equilibrium condition,M a t=M t. Financial intermediaries lend all of their money to intermediate goodfirms,which use the funds to pay for L t.Loan market clearing requiresW t L t=µt M t−Q t.(3.16) The aggregate resource constraint isc t+i t+a(u t)≤Y t.We adopt a standard sequence-of-markets equilibrium concept.In the appendix we discuss our computational strategy for approximating that equilibrium.This strategy involves taking a linear approximation about the non-stochastic steady state of the economy and using the solution method discussed in Christiano(2003).For details,see the previous version of this paper,Christiano,Eichenbaum and Evans(2001).In principle,the non-negativity constraint on intermediate good output in(3.4)is a problem for this approximation.It turns out that the constraint is not binding for the experiments that we consider and so we ignore it.Finally, it is worth noting that since profits are stochastic,the fact that they are zero,on average, 9See Sims(1994)or Woodford(1994)for a further discussion.implies that they are often negative.As a consequence,our assumption thatfirms cannot exit is binding.Allowing forfirm entry and exit dynamics would considerably complicate our analysis.3.7.Functional Form AssumptionsWe assume that the functions characterizing utility are given by:u(·)=log(·)z(·)=ψ0(·)2.(3.17)v(·)=ψq(·)1−σqqIn addition,investment adjustment costs are given by:F(i t,i t−1)=(1−Sµi t i t−1¶)i t.(3.18) We restrict the function S to satisfy the following properties:S(1)=S0(1)=0,andκ≡S00(1)>0.It is easy to verify that the steady state of the model does not depend on the adjustment cost parameter,κ.Of course,the dynamics of the model are influenced byκ. Given our solution procedure,no other features of the S function need to be specified for our analysis.We impose two restrictions on the capital utilization function,a(u t).First,we require that u t=1in steady state.Second,we assume a(1)=0.Under our assumptions,the steady state of the model is independent ofσa=a00(1)/a0(1).The dynamics do depend onσa.Given our solution procedure,we do not need to specify any other features of the function a. 4.Econometric MethodologyIn this section we discuss our methodology for estimating and evaluating our model.We partition the model parameters into three groups.Thefirst group is composed ofβ,φ,α,δ,ψ0,ψq,λw andµ.We setβ=1.03−0.25,which implies a steady state annualized real interest rate of3percent.We setα=0.36,which corresponds to a steady state share of capital income equal to roughly36percent.We setδ=0.025,which implies an annual rate of depreciation on capital equal to10percent.This value ofδis roughly equal to the estimate reported in Christiano and Eichenbaum(1992).The parameter,φ,is set to guarantee that profits are zero in steady state.This value is consistent with Basu and Fernald(1994),Hall (1988),and Rotemberg and Woodford(1995),who argue that economic profits are close to zero on average.Although there are well known problems with the measurement of profits, we think that zero profits is a reasonable benchmark.。
Hotelling模型再探讨
Hotelling模型再探讨本文修改Hotelling(1929)模型的基本假定,假定厂商边际生产成本为正,交通成本由消费者负担,厂商区位可以为内生变量,也可以为外生变量,在此假定前提下,分析厂商的最优的区位——价格策略,以探讨最大差异原则或者最小差异化原则何时成立,或者不成立。
标签:最大差异化最小差异化区位内生区位外生一、绪论Hotelling(1929年)最早使用线性区位模型研究了厂商间空间竞争的问题。
假定厂商的边际生产成本为零,在利润最大化假设下,其结论为两厂商会同时在城市中点的位置进行选址,即最小差异化原则。
D.Aspremont、Gabszewicz 与Thisse(1979年)延续其构想,但对其假设加以修正,却得到截然不同的结果。
在Bertrand竞争(价格竞争)下,两厂商聚集在中心点会使均衡价格为零,故两厂商必会在线形城市的两个不同端点选址,即最大差异化原则。
此后,关于最大差异化原则和最小差异化原则何者成立或者不成立的问题引起了广泛的讨论。
在Hotelling(1929年)和D’Aspremont,Gabszewicz and Thisse(1979年)的模型内,有两个重要的假设值得讨论。
第一,消费者所负担的单位交通成本相同。
这一假设与实际的情形可能不符。
比如排队购买,消费者宁愿多花时间购买某一产品而不购买对手的产品,这相当于提高了消费者的单位旅行成本。
又比如,厂商的服务效率不同,也会影响消费者的单位交通成本。
第二,厂商边际生产成本为零。
这在现实生活中是比较少见的。
因此,如果上述假设不成立,厂商的区位——价格策略发生何种变化,这是本文要探讨的第一个问题。
此外,研究厂商区位选择的文献,多假定厂商的区位内生变量,即通常假定厂商进行两个阶段的博弈,这又可以分为两种情况:两厂商同时选址,此后同时制定价格,即完全信息静态博弈;两厂商先后进入市场,后进入市场的厂商在观察到先进入市场的厂商选择后再进入市场,即完全信息动态博弈。
最近鲁棒优化进展Recent Advances in Robust Optimization and Robustness An Overview
Recent Advances in Robust Optimization and Robustness:An OverviewVirginie Gabrel∗and C´e cile Murat†and Aur´e lie Thiele‡July2012AbstractThis paper provides an overview of developments in robust optimization and robustness published in the aca-demic literature over the pastfive years.1IntroductionThis review focuses on papers identified by Web of Science as having been published since2007(included),be-longing to the area of Operations Research and Management Science,and having‘robust’and‘optimization’in their title.There were exactly100such papers as of June20,2012.We have completed this list by considering 726works indexed by Web of Science that had either robustness(for80of them)or robust(for646)in their title and belonged to the Operations Research and Management Science topic area.We also identified34PhD disserta-tions dated from the lastfive years with‘robust’in their title and belonging to the areas of operations research or management.Among those we have chosen to focus on the works with a primary focus on management science rather than system design or optimal control,which are broadfields that would deserve a review paper of their own, and papers that could be of interest to a large segment of the robust optimization research community.We feel it is important to include PhD dissertations to identify these recent graduates as the new generation trained in robust optimization and robustness analysis,whether they have remained in academia or joined industry.We have also added a few not-yet-published preprints to capture ongoing research efforts.While many additional works would have deserved inclusion,we feel that the works selected give an informative and comprehensive view of the state of robustness and robust optimization to date in the context of operations research and management science.∗Universit´e Paris-Dauphine,LAMSADE,Place du Mar´e chal de Lattre de Tassigny,F-75775Paris Cedex16,France gabrel@lamsade.dauphine.fr Corresponding author†Universit´e Paris-Dauphine,LAMSADE,Place du Mar´e chal de Lattre de Tassigny,F-75775Paris Cedex16,France mu-rat@lamsade.dauphine.fr‡Lehigh University,Industrial and Systems Engineering Department,200W Packer Ave Bethlehem PA18015,USA aure-lie.thiele@2Theory of Robust Optimization and Robustness2.1Definitions and BasicsThe term“robust optimization”has come to encompass several approaches to protecting the decision-maker against parameter ambiguity and stochastic uncertainty.At a high level,the manager must determine what it means for him to have a robust solution:is it a solution whose feasibility must be guaranteed for any realization of the uncertain parameters?or whose objective value must be guaranteed?or whose distance to optimality must be guaranteed? The main paradigm relies on worst-case analysis:a solution is evaluated using the realization of the uncertainty that is most unfavorable.The way to compute the worst case is also open to debate:should it use afinite number of scenarios,such as historical data,or continuous,convex uncertainty sets,such as polyhedra or ellipsoids?The answers to these questions will determine the formulation and the type of the robust counterpart.Issues of over-conservatism are paramount in robust optimization,where the uncertain parameter set over which the worst case is computed should be chosen to achieve a trade-off between system performance and protection against uncertainty,i.e.,neither too small nor too large.2.2Static Robust OptimizationIn this framework,the manager must take a decision in the presence of uncertainty and no recourse action will be possible once uncertainty has been realized.It is then necessary to distinguish between two types of uncertainty: uncertainty on the feasibility of the solution and uncertainty on its objective value.Indeed,the decision maker generally has different attitudes with respect to infeasibility and sub-optimality,which justifies analyzing these two settings separately.2.2.1Uncertainty on feasibilityWhen uncertainty affects the feasibility of a solution,robust optimization seeks to obtain a solution that will be feasible for any realization taken by the unknown coefficients;however,complete protection from adverse realiza-tions often comes at the expense of a severe deterioration in the objective.This extreme approach can be justified in some engineering applications of robustness,such as robust control theory,but is less advisable in operations research,where adverse events such as low customer demand do not produce the high-profile repercussions that engineering failures–such as a doomed satellite launch or a destroyed unmanned robot–can have.To make the robust methodology appealing to business practitioners,robust optimization thus focuses on obtaining a solution that will be feasible for any realization taken by the unknown coefficients within a smaller,“realistic”set,called the uncertainty set,which is centered around the nominal values of the uncertain parameters.The goal becomes to optimize the objective,over the set of solutions that are feasible for all coefficient values in the uncertainty set.The specific choice of the set plays an important role in ensuring computational tractability of the robust problem and limiting deterioration of the objective at optimality,and must be thought through carefully by the decision maker.A large branch of robust optimization focuses on worst-case optimization over a convex uncertainty set.The reader is referred to Bertsimas et al.(2011a)and Ben-Tal and Nemirovski(2008)for comprehensive surveys of robust optimization and to Ben-Tal et al.(2009)for a book treatment of the topic.2.2.2Uncertainty on objective valueWhen uncertainty affects the optimality of a solution,robust optimization seeks to obtain a solution that performs well for any realization taken by the unknown coefficients.While a common criterion is to optimize the worst-case objective,some studies have investigated other robustness measures.Roy(2010)proposes a new robustness criterion that holds great appeal for the manager due to its simplicity of use and practical relevance.This framework,called bw-robustness,allows the decision-maker to identify a solution which guarantees an objective value,in a maximization problem,of at least w in all scenarios,and maximizes the probability of reaching a target value of b(b>w).Gabrel et al.(2011)extend this criterion from afinite set of scenarios to the case of an uncertainty set modeled using intervals.Kalai et al.(2012)suggest another criterion called lexicographicα-robustness,also defined over afinite set of scenarios for the uncertain parameters,which mitigates the primary role of the worst-case scenario in defining the solution.Thiele(2010)discusses over-conservatism in robust linear optimization with cost uncertainty.Gancarova and Todd(2012)studies the loss in objective value when an inaccurate objective is optimized instead of the true one, and shows that on average this loss is very small,for an arbitrary compact feasible region.In combinatorial optimization,Morrison(2010)develops a framework of robustness based on persistence(of decisions)using the Dempster-Shafer theory as an evidence of robustness and applies it to portfolio tracking and sensor placement.2.2.3DualitySince duality has been shown to play a key role in the tractability of robust optimization(see for instance Bertsimas et al.(2011a)),it is natural to ask how duality and robust optimization are connected.Beck and Ben-Tal(2009) shows that primal worst is equal to dual best.The relationship between robustness and duality is also explored in Gabrel and Murat(2010)when the right-hand sides of the constraints are uncertain and the uncertainty sets are represented using intervals,with a focus on establishing the relationships between linear programs with uncertain right hand sides and linear programs with uncertain objective coefficients using duality theory.This avenue of research is further explored in Gabrel et al.(2010)and Remli(2011).2.3Multi-Stage Decision-MakingMost early work on robust optimization focused on static decision-making:the manager decided at once of the values taken by all decision variables and,if the problem allowed for multiple decision stages as uncertainty was realized,the stages were incorporated by re-solving the multi-stage problem as time went by and implementing only the decisions related to the current stage.As thefield of static robust optimization matured,incorporating–ina tractable manner–the information revealed over time directly into the modeling framework became a major area of research.2.3.1Optimal and Approximate PoliciesA work going in that direction is Bertsimas et al.(2010a),which establishes the optimality of policies affine in the uncertainty for one-dimensional robust optimization problems with convex state costs and linear control costs.Chen et al.(2007)also suggests a tractable approximation for a class of multistage chance-constrained linear program-ming problems,which converts the original formulation into a second-order cone programming problem.Chen and Zhang(2009)propose an extension of the Affinely Adjustable Robust Counterpart framework described in Ben-Tal et al.(2009)and argue that its potential is well beyond what has been in the literature so far.2.3.2Two stagesBecause of the difficulty in incorporating multiple stages in robust optimization,many theoretical works have focused on two stages.Regarding two-stage problems,Thiele et al.(2009)presents a cutting-plane method based on Kelley’s algorithm for solving convex adjustable robust optimization problems,while Terry(2009)provides in addition preliminary results on the conditioning of a robust linear program and of an equivalent second-order cone program.Assavapokee et al.(2008a)and Assavapokee et al.(2008b)develop tractable algorithms in the case of robust two-stage problems where the worst-case regret is minimized,in the case of interval-based uncertainty and scenario-based uncertainty,respectively,while Minoux(2011)provides complexity results for the two-stage robust linear problem with right-hand-side uncertainty.2.4Connection with Stochastic OptimizationAn early stream in robust optimization modeled stochastic variables as uncertain parameters belonging to a known uncertainty set,to which robust optimization techniques were then applied.An advantage of this method was to yield approaches to decision-making under uncertainty that were of a level of complexity similar to that of their deterministic counterparts,and did not suffer from the curse of dimensionality that afflicts stochastic and dynamic programming.Researchers are now making renewed efforts to connect the robust optimization and stochastic opti-mization paradigms,for instance quantifying the performance of the robust optimization solution in the stochastic world.The topic of robust optimization in the context of uncertain probability distributions,i.e.,in the stochastic framework itself,is also being revisited.2.4.1Bridging the Robust and Stochastic WorldsBertsimas and Goyal(2010)investigates the performance of static robust solutions in two-stage stochastic and adaptive optimization problems.The authors show that static robust solutions are good-quality solutions to the adaptive problem under a broad set of assumptions.They provide bounds on the ratio of the cost of the optimal static robust solution to the optimal expected cost in the stochastic problem,called the stochasticity gap,and onthe ratio of the cost of the optimal static robust solution to the optimal cost in the two-stage adaptable problem, called the adaptability gap.Chen et al.(2007),mentioned earlier,also provides a robust optimization perspective to stochastic programming.Bertsimas et al.(2011a)investigates the role of geometric properties of uncertainty sets, such as symmetry,in the power offinite adaptability in multistage stochastic and adaptive optimization.Duzgun(2012)bridges descriptions of uncertainty based on stochastic and robust optimization by considering multiple ranges for each uncertain parameter and setting the maximum number of parameters that can fall within each range.The corresponding optimization problem can be reformulated in a tractable manner using the total unimodularity of the feasible set and allows for afiner description of uncertainty while preserving tractability.It also studies the formulations that arise in robust binary optimization with uncertain objective coefficients using the Bernstein approximation to chance constraints described in Ben-Tal et al.(2009),and shows that the robust optimization problems are deterministic problems for modified values of the coefficients.While many results bridging the robust and stochastic worlds focus on giving probabilistic guarantees for the solutions generated by the robust optimization models,Manuja(2008)proposes a formulation for robust linear programming problems that allows the decision-maker to control both the probability and the expected value of constraint violation.Bandi and Bertsimas(2012)propose a new approach to analyze stochastic systems based on robust optimiza-tion.The key idea is to replace the Kolmogorov axioms and the concept of random variables as primitives of probability theory,with uncertainty sets that are derived from some of the asymptotic implications of probability theory like the central limit theorem.The authors show that the performance analysis questions become highly structured optimization problems for which there exist efficient algorithms that are capable of solving problems in high dimensions.They also demonstrate that the proposed approach achieves computationally tractable methods for(a)analyzing queueing networks,(b)designing multi-item,multi-bidder auctions with budget constraints,and (c)pricing multi-dimensional options.2.4.2Distributionally Robust OptimizationBen-Tal et al.(2010)considers the optimization of a worst-case expected-value criterion,where the worst case is computed over all probability distributions within a set.The contribution of the work is to define a notion of robustness that allows for different guarantees for different subsets of probability measures.The concept of distributional robustness is also explored in Goh and Sim(2010),with an emphasis on linear and piecewise-linear decision rules to reformulate the original problem in aflexible manner using expected-value terms.Xu et al.(2012) also investigates probabilistic interpretations of robust optimization.A related area of study is worst-case optimization with partial information on the moments of distributions.In particular,Popescu(2007)analyzes robust solutions to a certain class of stochastic optimization problems,using mean-covariance information about the distributions underlying the uncertain parameters.The author connects the problem for a broad class of objective functions to a univariate mean-variance robust objective and,subsequently, to a(deterministic)parametric quadratic programming problem.The reader is referred to Doan(2010)for a moment-based uncertainty model for stochastic optimization prob-lems,which addresses the ambiguity of probability distributions of random parameters with a minimax decision rule,and a comparison with data-driven approaches.Distributionally robust optimization in the context of data-driven problems is the focus of Delage(2009),which uses observed data to define a”well structured”set of dis-tributions that is guaranteed with high probability to contain the distribution from which the samples were drawn. Zymler et al.(2012a)develop tractable semidefinite programming(SDP)based approximations for distributionally robust individual and joint chance constraints,assuming that only thefirst-and second-order moments as well as the support of the uncertain parameters are given.Becker(2011)studies the distributionally robust optimization problem with known mean,covariance and support and develops a decomposition method for this family of prob-lems which recursively derives sub-policies along projected dimensions of uncertainty while providing a sequence of bounds on the value of the derived policy.Robust linear optimization using distributional information is further studied in Kang(2008).Further,Delage and Ye(2010)investigates distributional robustness with moment uncertainty.Specifically,uncertainty affects the problem both in terms of the distribution and of its moments.The authors show that the resulting problems can be solved efficiently and prove that the solutions exhibit,with high probability,best worst-case performance over a set of distributions.Bertsimas et al.(2010)proposes a semidefinite optimization model to address minimax two-stage stochastic linear problems with risk aversion,when the distribution of the second-stage random variables belongs to a set of multivariate distributions with knownfirst and second moments.The minimax solutions provide a natural distribu-tion to stress-test stochastic optimization problems under distributional ambiguity.Cromvik and Patriksson(2010a) show that,under certain assumptions,global optima and stationary solutions of stochastic mathematical programs with equilibrium constraints are robust with respect to changes in the underlying probability distribution.Works such as Zhu and Fukushima(2009)and Zymler(2010)also study distributional robustness in the context of specific applications,such as portfolio management.2.5Connection with Risk TheoryBertsimas and Brown(2009)describe how to connect uncertainty sets in robust linear optimization to coherent risk measures,an example of which is Conditional Value-at-Risk.In particular,the authors show the link between polyhedral uncertainty sets of a special structure and a subclass of coherent risk measures called distortion risk measures.Independently,Chen et al.(2007)present an approach for constructing uncertainty sets for robust opti-mization using new deviation measures that capture the asymmetry of the distributions.These deviation measures lead to improved approximations of chance constraints.Dentcheva and Ruszczynski(2010)proposes the concept of robust stochastic dominance and shows its applica-tion to risk-averse optimization.They consider stochastic optimization problems where risk-aversion is expressed by a robust stochastic dominance constraint and develop necessary and sufficient conditions of optimality for such optimization problems in the convex case.In the nonconvex case,they derive necessary conditions of optimality under additional smoothness assumptions of some mappings involved in the problem.2.6Nonlinear OptimizationRobust nonlinear optimization remains much less widely studied to date than its linear counterpart.Bertsimas et al.(2010c)presents a robust optimization approach for unconstrained non-convex problems and problems based on simulations.Such problems arise for instance in the partial differential equations literature and in engineering applications such as nanophotonic design.An appealing feature of the approach is that it does not assume any specific structure for the problem.The case of robust nonlinear optimization with constraints is investigated in Bertsimas et al.(2010b)with an application to radiation therapy for cancer treatment.Bertsimas and Nohadani (2010)further explore robust nonconvex optimization in contexts where solutions are not known explicitly,e.g., have to be found using simulation.They present a robust simulated annealing algorithm that improves performance and robustness of the solution.Further,Boni et al.(2008)analyzes problems with uncertain conic quadratic constraints,formulating an approx-imate robust counterpart,and Zhang(2007)provide formulations to nonlinear programming problems that are valid in the neighborhood of the nominal parameters and robust to thefirst order.Hsiung et al.(2008)present tractable approximations to robust geometric programming,by using piecewise-linear convex approximations of each non-linear constraint.Geometric programming is also investigated in Shen et al.(2008),where the robustness is injected at the level of the algorithm and seeks to avoid obtaining infeasible solutions because of the approximations used in the traditional approach.Interval uncertainty-based robust optimization for convex and non-convex quadratic programs are considered in Li et al.(2011).Takeda et al.(2010)studies robustness for uncertain convex quadratic programming problems with ellipsoidal uncertainties and proposes a relaxation technique based on random sampling for robust deviation optimization sserre(2011)considers minimax and robust models of polynomial optimization.A special case of nonlinear problems that are linear in the decision variables but convex in the uncertainty when the worst-case objective is to be maximized is investigated in Kawas and Thiele(2011a).In that setting,exact and tractable robust counterparts can be derived.A special class of nonconvex robust optimization is examined in Kawas and Thiele(2011b).Robust nonconvex optimization is examined in detail in Teo(2007),which presents a method that is applicable to arbitrary objective functions by iteratively moving along descent directions and terminates at a robust local minimum.3Applications of Robust OptimizationWe describe below examples to which robust optimization has been applied.While an appealing feature of robust optimization is that it leads to models that can be solved using off-the-shelf software,it is worth pointing the existence of algebraic modeling tools that facilitate the formulation and subsequent analysis of robust optimization problems on the computer(Goh and Sim,2011).3.1Production,Inventory and Logistics3.1.1Classical logistics problemsThe capacitated vehicle routing problem with demand uncertainty is studied in Sungur et al.(2008),with a more extensive treatment in Sungur(2007),and the robust traveling salesman problem with interval data in Montemanni et al.(2007).Remli and Rekik(2012)considers the problem of combinatorial auctions in transportation services when shipment volumes are uncertain and proposes a two-stage robust formulation solved using a constraint gener-ation algorithm.Zhang(2011)investigates two-stage minimax regret robust uncapacitated lot-sizing problems with demand uncertainty,in particular showing that it is polynomially solvable under the interval uncertain demand set.3.1.2SchedulingGoren and Sabuncuoglu(2008)analyzes robustness and stability measures for scheduling in a single-machine environment subject to machine breakdowns and embeds them in a tabu-search-based scheduling algorithm.Mittal (2011)investigates efficient algorithms that give optimal or near-optimal solutions for problems with non-linear objective functions,with a focus on robust scheduling and service operations.Examples considered include parallel machine scheduling problems with the makespan objective,appointment scheduling and assortment optimization problems with logit choice models.Hazir et al.(2010)considers robust scheduling and robustness measures for the discrete time/cost trade-off problem.3.1.3Facility locationAn important question in logistics is not only how to operate a system most efficiently but also how to design it. Baron et al.(2011)applies robust optimization to the problem of locating facilities in a network facing uncertain demand over multiple periods.They consider a multi-periodfixed-charge network location problem for which they find the number of facilities,their location and capacities,the production in each period,and allocation of demand to facilities.The authors show that different models of uncertainty lead to very different solution network topologies, with the model with box uncertainty set opening fewer,larger facilities.?investigate a robust version of the location transportation problem with an uncertain demand using a2-stage formulation.The resulting robust formulation is a convex(nonlinear)program,and the authors apply a cutting plane algorithm to solve the problem exactly.Atamt¨u rk and Zhang(2007)study the networkflow and design problem under uncertainty from a complexity standpoint,with applications to lot-sizing and location-transportation problems,while Bardossy(2011)presents a dual-based local search approach for deterministic,stochastic,and robust variants of the connected facility location problem.The robust capacity expansion problem of networkflows is investigated in Ordonez and Zhao(2007),which provides tractable reformulations under a broad set of assumptions.Mudchanatongsuk et al.(2008)analyze the network design problem under transportation cost and demand uncertainty.They present a tractable approximation when each commodity only has a single origin and destination,and an efficient column generation for networks with path constraints.Atamt¨u rk and Zhang(2007)provides complexity results for the two-stage networkflow anddesign plexity results for the robust networkflow and network design problem are also provided in Minoux(2009)and Minoux(2010).The problem of designing an uncapacitated network in the presence of link failures and a competing mode is investigated in Laporte et al.(2010)in a railway application using a game theoretic perspective.Torres Soto(2009)also takes a comprehensive view of the facility location problem by determining not only the optimal location but also the optimal time for establishing capacitated facilities when demand and cost parameters are time varying.The models are solved using Benders’decomposition or heuristics such as local search and simulated annealing.In addition,the robust networkflow problem is also analyzed in Boyko(2010),which proposes a stochastic formulation of minimum costflow problem aimed atfinding network design andflow assignments subject to uncertain factors,such as network component disruptions/failures when the risk measure is Conditional Value at Risk.Nagurney and Qiang(2009)suggests a relative total cost index for the evaluation of transportation network robustness in the presence of degradable links and alternative travel behavior.Further,the problem of locating a competitive facility in the plane is studied in Blanquero et al.(2011)with a robustness criterion.Supply chain design problems are also studied in Pan and Nagi(2010)and Poojari et al.(2008).3.1.4Inventory managementThe topic of robust multi-stage inventory management has been investigated in detail in Bienstock and Ozbay (2008)through the computation of robust basestock levels and Ben-Tal et al.(2009)through an extension of the Affinely Adjustable Robust Counterpart framework to control inventories under demand uncertainty.See and Sim (2010)studies a multi-period inventory control problem under ambiguous demand for which only mean,support and some measures of deviations are known,using a factor-based model.The parameters of the replenishment policies are obtained using a second-order conic programming problem.Song(2010)considers stochastic inventory control in robust supply chain systems.The work proposes an inte-grated approach that combines in a single step datafitting and inventory optimization–using histograms directly as the inputs for the optimization model–for the single-item multi-period periodic-review stochastic lot-sizing problem.Operation and planning issues for dynamic supply chain and transportation networks in uncertain envi-ronments are considered in Chung(2010),with examples drawn from emergency logistics planning,network design and congestion pricing problems.3.1.5Industry-specific applicationsAng et al.(2012)proposes a robust storage assignment approach in unit-load warehouses facing variable supply and uncertain demand in a multi-period setting.The authors assume a factor-based demand model and minimize the worst-case expected total travel in the warehouse with distributional ambiguity of demand.A related problem is considered in Werners and Wuelfing(2010),which optimizes internal transports at a parcel sorting center.Galli(2011)describes the models and algorithms that arise from implementing recoverable robust optimization to train platforming and rolling stock planning,where the concept of recoverable robustness has been defined in。
酒店管理理论 经济型酒店起源
酒店管理理论经济型酒店起源经济型酒店的发展和投资风险分析国外经济型酒店的发展分析国外经济型酒店的发展历程经济型酒店起源于美国,1951 年,凯蒙·威尔逊看准了汽车旅馆蕴藏的巨大商机,在通向田纳西州首府的孟菲斯城的主要通道上兴建了一座名为“HolidayInn”的拥有120 间客房的汽车旅馆,由于生意红火,他很快就开了连锁店。
与此同时,美国和欧洲的酒店集团也纷纷成立,其中多数的酒店集团是从经济型酒店起家的,如Best Western、Ramada 等等。
至二十世纪六、七十年代,随着美国在全国范围内大规模修建高速公路经济型酒店投资达到高潮。
美国经济型酒店在高速发展的同时,十分注重市场细分,大中城市多修建商务型的经济型酒店、旅游胜地多修建度假型的经济型酒店。
目前,美国经济型酒店有近140 个品牌,并有明确的市场定位,如,商务、观光、家庭等,且在硬件设施和服务配套等方面体现出不同的特色。
欧洲的经济型酒店情况相似,法国雅高集团旗下就有多个经济型酒店品牌,其中,Ibis 定位于商务型,Formula l 定位于个人旅游。
仅Ibis 品牌旗下的经济型酒店数量就达到700 多家。
由于银行金融业的渗入,许多区域性的经济型酒店集团已逐渐发展成世界性的酒店集团,引领着世界饭店业的发展。
经济型酒店在全球的发展经历了四个历史阶段:萌芽与发展初期;蓬勃发展时期;品牌调整时期;全球布局期。
(一)20 世纪30 年代末~50 年代末——萌芽与发展初期20 世纪30 年代后,随着美国大众消费的兴起,以及公路网络的发展,汽车旅馆开始出现,为平民的出游提供廉价的住宿服务。
二战后,美国经济的繁荣促生了大众旅游的兴起,引发了人们对中低档住宿设施的大量需求;城际高速公路网络的发展则促进了汽车旅馆的风行。
如1952 年诞生的假日汽车旅馆,它吸取了过去汽车旅馆发展的经验,重视改善服务质量,第一次尝试用标准化的方式复制其产品和服务,在短短的10 年时间里,沿着美国的公路网络迅速发展起来。
SOCIAL CAPITAL Its Origins and Applications in Modern Sociology
Annu.Rev.Sociol.1998.24:1–24Copyright ©1998by Annual Reviews.All rights reservedSOCIAL CAPITAL:Its Origins andApplications in Modern Sociology Alejandro Portes Department of Sociology,Princeton University,Princeton,New Jersey 08540KEY WORDS:social control,family support,networks,sociabilityA BSTRACTThispaper reviews the origins and definitionsof social capital in the writings of Bourdieu,Loury,and Coleman,among other authors.It distinguishes foursources of social capital and examines their dynamics.Applications of the concept in the sociological literature emphasize its role in social control,infamilysupport,and in benefits mediated by extrafamilial networks.I provideexamples of each of these positive functions.Negative consequences of thesame processes also deserve attention for a balanced picture of the forces at play.I review four such consequences and illustrate them with relevant ex-amples.Recent writings on social capital have extended the concept from an individual asset to a feature of communities and even nations.The final sec-tionsdescribe this conceptual stretch and examine its limitations.I argue that,as shorthand for the positive consequences of sociability,social capitalhas a definite place in sociological theory.However,excessive extensions of the concept may jeopardize its heuristic value.Alejandro Portes:Biographical SketchAlejandroPortes is professor of sociology at Princeton University andfaculty associate of the Woodrow Wilson School of Public Affairs.He for-merly taught at Johns Hopkins where he held the John Dewey Chair in Artsand Sciences,Duke University,and the University of Texas-Austin.In 1997he heldthe Emilio Bacardi distinguished professorship at the University ofMiami.In the same year he was elected president of the American Sociologi-cal Association.Born in Havana,Cuba,he came to the United States in 1960.He was educated at the University of Havana,Catholic University of Argen-tina,and Creighton University.He received his MA and PhD from the Uni-versity of Wisconsin-Madison.0360-0572/98/0815-0001$08.001A n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b yS w i ssAca dem icLi bra ryC onsor t iaon3/24/9.F orpe rsonaluseo nl y.Portes is the author of some 200articles and chapters on national devel-opment,international migration,Latin American and Caribbean urbaniza-tion,and economic sociology.His most recent books include City on the Edge,the Transformation of Miami (winner of the Robert Park award for best book in urban sociology and of the Anthony Leeds award for best book in urban anthropology in 1995);The New Second Generation (Russell Sage Foundation 1996);Caribbean Cities (Johns Hopkins University Press);and Immigrant America,a Portrait.The latter book was designated as a centen-nial publication by the University of California Press.It was originally pub-lished in 1990;the second edition,updated and containing new chapters on American immigration policy and the new second generation,was published in 1996.Introduction During recent years,the concept of social capital has become one of the most popular exports from sociological theory into everyday language.Dissemi-nated by a number of policy-oriented journals and general circulation maga-zines,social capital has evolved into something of a cure-all for the maladies affecting society at home and abroad.Like other sociological concepts that have traveled a similar path,the original meaning of the term and its heuristic value are being put to severe tests by these increasingly diverse applications.As in the case of those earlier concepts,the point is approaching at which so-cial capital comes to be applied to so many events and in so many different contexts as to lose any distinct meaning.Despite its current popularity,the term does not embody any idea really new to sociologists.That involvement and participation in groups can have positive consequences for the individual and the community is a staple notion,dating back to Durkheim’s emphasis on group life as an antidote to anomie and self-destruction and to Marx’s distinction between an atomized class-in-itself and a mobilized and effective class-for-itself.In this sense,the term social capital simply recaptures an insight present since the very beginnings of the disci-pline.Tracing the intellectual background of the concept into classical times would be tantamount to revisiting sociology’s major nineteenth century sources.That exercise would not reveal,however,why this idea has caught on in recent years or why an unusual baggage of policy implications has been heaped on it.The novelty and heuristic power of social capital come from two sources.First,the concept focuses attention on the positive consequences of sociability while putting aside its less attractive features.Second,it places those positive consequences in the framework of a broader discussion of capital and calls atten-tion to how such nonmonetary forms can be important sources of power and in-fluence,like the size of one’s stock holdings or bank account.The potential fungi-bility of diverse sources of capital reduces the distance between the sociologi-2PORTESA n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y S w i s s A c a d e m i c L i b r a r y C o n s o r t i a o n 03/24/09. F o r p e r s o n a l u s e o n l y .cal and economic perspectives and simultaneously engages the attention of policy-makers seeking less costly,non-economic solutions to social problems.In the course of this review,I limit discussion to the contemporary reemer-gence of the idea to avoid a lengthy excursus into its classical predecessors.To an audience of sociologists,these sources and the parallels between present so-cial capital discussions and passages in the classical literature will be obvious.I examine,first,the principal authors associated with the contemporary usage of the term and their different approaches to it.Then I review the various mechanisms leading to the emergence of social capital and its principal appli-cations in the research literature.Next,I examine those not-so-desirable con-sequences of sociability that are commonly obscured in the contemporary lit-erature on the topic.This discussion aims at providing some balance to the fre-quently celebratory tone with which the concept is surrounded.That tone is es-pecially noticeable in those studies that have stretched the concept from a property of individuals and families to a feature of communities,cities,and even nations.The attention garnered by applications of social capital at this broader level also requires some discussion,particularly in light of the poten-tial pitfalls of that conceptual stretch.Definitions The first systematic contemporary analysis of social capital was produced by Pierre Bourdieu,who defined the concept as “the aggregate of the actual or po-tential resources which are linked to possession of a durable network of more or less institutionalized relationships of mutual acquaintance or recognition”(Bourdieu 1985,p.248;1980).This initial treatment of the concept appeared in some brief “Provisional Notes”published in the Actes de la Recherche en Sciences Sociales in 1980.Because they were in French,the article did not gar-ner widespread attention in the English-speaking world;nor,for that matter,did the first English translation,concealed in the pages of a text on the sociol-ogy of education (Bourdieu 1985).This lack of visibility is lamentable because Bourdieu’s analysis is arguably the most theoretically refined among those that introduced the term in contem-porary sociological discourse.His treatment of the concept is instrumental,fo-cusing on the benefits accruing to individuals by virtue of participation in groups and on the deliberate construction of sociability for the purpose of cre-ating this resource.In the original version,he went as far as asserting that “the profits which accrue from membership in a group are the basis of the solidarity which makes them possible”(Bourdieu 1985,p.249).Social networks are not a natural given and must be constructed through investment strategies oriented to the institutionalization of group relations,usable as a reliable source of other benefits.Bourdieu’s definition makes clear that social capital is decomposable into two elements:first,the social relationship itself that allows individuals to SOCIAL CAPITAL:ORIGINS AND APPLICATIONS 3A n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y S w i s s A c a d e m i c L i b r a r y C o n s o r t i a o n 03/24/09. F o r p e r s o n a l u s e o n l y .claim access to resources possessed by their associates,and second,the amount and quality of those resources.Throughout,Bourdieu’s emphasis is on the fungibility of different forms of capital and on the ultimate reduction of all forms to economic capital,defined as accumulated human labor.Hence,through social capital,actors can gain di-rect access to economic resources (subsidized loans,investment tips,protected markets);they can increase their cultural capital through contacts with experts or individuals of refinement (i.e.embodied cultural capital);or,alternatively,they can affiliate with institutions that confer valued credentials (i.e.institu-tionalized cultural capital).On the other hand,the acquisition of social capital requires deliberate invest-ment of both economic and cultural resources.Though Bourdieu insists that the outcomes of possession of social or cultural capital are reducible to economic capital,the processes that bring about these alternative forms are not.They each possess their own dynamics,and,relative to economic exchange,they are characterized by less transparency and more uncertainty.For example,trans-actions involving social capital tend to be characterized by unspecified obliga-tions,uncertain time horizons,and the possible violation of reciprocity expec-tations.But,by their very lack of clarity,these transactions can help disguise what otherwise would be plain market exchanges (Bourdieu 1979,1980).A second contemporary source is the work of economist Glen Loury (1977,1981).He came upon the term in the context of his critique of neoclassical theories of racial income inequality and their policy implications.Loury ar-gued that orthodox economic theories were too individualistic,focusing exclu-sively on individual human capital and on the creation of a level field for com-petition based on such skills.By themselves,legal prohibitions against em-ployers’racial tastes and implementation of equal opportunity programs would not reduce racial inequalities.The latter could go on forever,according to Loury,for two reasons—first,the inherited poverty of black parents,which would be transmitted to their children in the form of lower material resources and educational opportunities;second,the poorer connections of young black workers to the labor market and their lack of information about opportunities:The merit notion that,in a free society,each individual will rise to the level justified by his or her competence conflicts with the observation that no one travels that road entirely alone.The social context within which individual maturation occurs strongly conditions what otherwise equally competent in-dividuals can achieve.This implies that absolute equality of opportunity,…is an ideal that cannot be achieved.(Loury 1977,p.176)Loury cited with approval the sociological literature on intergenerational mobility and inheritance of race as illustrating his anti-individualist argument.However,he did not go on to develop the concept of social capital in any detail.4PORTESA n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y S w i s s A c a d e m i c L i b r a r y C o n s o r t i a o n 03/24/09. F o r p e r s o n a l u s e o n l y .He seems to have run across the idea in the context of his polemic against or-thodox labor economics,but he mentions it only once in his original article and then in rather tentative terms (Loury 1977).The concept captured the differen-tial access to opportunities through social connections for minority and nonmi-nority youth,but we do not find here any systematic treatment of its relations to other forms of capital.Loury’s work paved the way,however,for Coleman’s more refined analy-sis of the same process,namely the role of social capital in the creation of hu-man capital.In his initial analysis of the concept,Coleman acknowledges Loury’s contribution as well as those of economist Ben-Porath and sociolo-gists Nan Lin and Mark Granovetter.Curiously,Coleman does not mention Bourdieu,although his analysis of the possible uses of social capital for the ac-quisition of educational credentials closely parallels that pioneered by the French sociologist.1Coleman defined social capital by its function as “a vari-ety of entities with two elements in common:They all consist of some aspect of social structures,and they facilitate certain action of actors—whether per-sons or corporate actors—within the structure”(Coleman 1988a:p.S98,1990,p.302).This rather vague definition opened the way for relabeling a number of dif-ferent and even contradictory processes as social capital.Coleman himself started that proliferation by including under the term some of the mechanisms that generated social capital (such as reciprocity expectations and group en-forcement of norms);the consequences of its possession (such as privileged access to information);and the “appropriable”social organization that pro-vided the context for both sources and effects to materialize.Resources ob-tained through social capital have,from the point of view of the recipient,the character of a gift.Thus,it is important to distinguish the resources themselves from the ability to obtain them by virtue of membership in different social structures,a distinction explicit in Bourdieu but obscured in Coleman.Equat-ing social capital with the resources acquired through it can easily lead to tau-tological statements.2Equally important is the distinction between the motivations of recipients and of donors in exchanges mediated by social capital.Recipients’desire toSOCIAL CAPITAL:ORIGINS AND APPLICATIONS 51The closest equivalent to human capital in Bourdieu’s analysis is embodied cultural capital,which is defined as the habitus of cultural practices,knowledge,and demeanors learned through exposure to role models in the family and other environments (Bourdieu 1979).2Saying,for example,that student A has social capital because he obtained access to a large tuition loan from his kin and that student B does not because she failed to do so neglects the possibility that B’s kin network is equally or more motivated to come to her aid but simply lacks the means to do.Defining social capital as equivalent with the resources thus obtained is tantamount to saying that the successful succeed.This circularity is more evident in applications of social capital that define it as a property of collectivities.These are reviewed below.A n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y S w i s s A c a d e m i c L i b r a r y C o n s o r t i a o n 03/24/09. F o r p e r s o n a l u s e o n l y .gain access to valuable assets is readily understandable.More complex are the motivations of the donors,who are requested to make these assets available without any immediate return.Such motivations are plural and deserve analy-sis because they are the core processes that the concept of social capital seeks to capture.Thus,a systematic treatment of the concept must distinguish among:(a )the possessors of social capital (those making claims);(b )the sources of social capital (those agreeing to these demands);(c )the resources themselves.These three elements are often mixed in discussions of the concept following Coleman,thus setting the stage for confusion in the uses and scope of the term.Despite these limitations,Coleman’s essays have the undeniable merit of introducing and giving visibility to the concept in American sociology,high-lighting its importance for the acquisition of human capital,and identifying some of the mechanisms through which it is generated.In this last respect,his discussion of closure is particularly enlightening.Closure means the existence of sufficient ties between a certain number of people to guarantee the obser-vance of norms.For example,the possibility of malfeasance within the tightly knit community of Jewish diamond traders in New York City is minimized by the dense ties among its members and the ready threat of ostracism against vio-lators.The existence of such a strong norm is then appropriable by all members of the community,facilitating transactions without recourse to cumbersome legal contracts (Coleman 1988a:S99).After Bourdieu,Loury,and Coleman,a number of theoretical analyses of social capital have been published.In 1990,WE Baker defined the concept as “a resource that actors derive from specific social structures and then use to pursue their interests;it is created by changes in the relationship among actors”(Baker 1990,p.619).More broadly,M Schiff defines the term as “the set of elements of the social structure that affects relations among people and are in-puts or arguments of the production and/or utility function”(Schiff 1992,p.161).Burt sees it as “friends,colleagues,and more general contacts through whom you receive opportunities to use your financial and human capital”(Burt 1992,p.9).Whereas Coleman and Loury had emphasized dense net-works as a necessary condition for the emergence of social capital,Burt high-lights the opposite situation.In his view,it is the relative absence of ties,la-beled “structural holes,”that facilitates individual mobility.This is so because dense networks tend to convey redundant information,while weaker ties can be sources of new knowledge and resources.Despite these differences,the consensus is growing in the literature that so-cial capital stands for the ability of actors to secure benefits by virtue of mem-bership in social networks or other social structures.This is the sense in which it has been more commonly applied in the empirical literature although,as we will see,the potential uses to which it is put vary greatly.6PORTESA n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y S w i s s A c a d e m i c L i b r a r y C o n s o r t i a o n 03/24/09. F o r p e r s o n a l u s e o n l y .Sources of Social CapitalBoth Bourdieu and Coleman emphasize the intangible character of social capital relative to other forms.Whereas economic capital is in people’s bank accounts and human capital is inside their heads,social capital inheres in the structure of their relationships.To possess social capital,a person must be related to others,and it is those others,not himself,who are the actual source of his or her advan-tage.As mentioned before,the motivation of others to make resources avail-able on concessionary terms is not uniform.At the broadest level,one may dis-tinguish between consummatory versus instrumental motivations to do so.As examples of the first,people may pay their debts in time,give alms to charity,and obey traffic rules because they feel an obligation to behave in this manner.The internalized norms that make such behaviors possible are then ap-propriable by others as a resource.In this instance,the holders of social capital are other members of the community who can extend loans without fear of nonpayment,benefit from private charity,or send their kids to play in the street without concern.Coleman (1988a:S104)refers to this source in his analysis of norms and sanctions:“Effective norms that inhibit crime make it possible to walk freely outside at night in a city and enable old persons to leave their houses without fear for their safety.”As is well known,an excessive emphasis on this process of norm internalization led to the oversocialized conception of human action in sociology so trenchantly criticized by Wrong (1961).An approach closer to the undersocialized view of human nature in modern economics sees social capital as primarily the accumulation of obligations from others according to the norm of reciprocity.In this version,donors pro-vide privileged access to resources in the expectation that they will be fully re-paid in the future.This accumulation of social chits differs from purely eco-nomic exchange in two aspects.First,the currency with which obligations are repaid may be different from that with which they were incurred in the first place and may be as intangible as the granting of approval or allegiance.Sec-ond,the timing of the repayment is unspecified.Indeed,if a schedule of repay-ments exists,the transaction is more appropriately defined as market exchange than as one mediated by social capital.This instrumental treatment of the term is quite familiar in sociology,dating back to the classical analysis of social ex-change by Simmel ([1902a]1964),the more recent ones by Homans (1961)and Blau (1964),and extensive work on the sources and dynamics of reciproc-ity by authors of the rational action school (Schiff 1992,Coleman 1994).Two other sources of social capital exist that fit the consummatory versus instrumental dichotomy,but in a different way.The first finds its theoretical underpinnings in Marx’s analysis of emergent class consciousness in the in-dustrial proletariat.By being thrown together in a common situation,workers learn to identify with each other and support each other’s initiatives.This soli-SOCIAL CAPITAL:ORIGINS AND APPLICATIONS 7A n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y S w i s s A c a d e m i c L i b r a r y C o n s o r t i a o n 03/24/09. F o r p e r s o n a l u s e o n l y .darity is not the result of norm introjection during childhood,but is an emer-gent product of a common fate (Marx [1894]1967,Marx &Engels [1848]1947).For this reason,the altruistic dispositions of actors in these situations are not universal but are bounded by the limits of their community.Other members of the same community can then appropriate such dispositions and the actions that follow as their source of social capital.Bounded solidarity is the term used in the recent literature to refer to this mechanism.It is the source of social capital that leads wealthy members of a church to anonymously endow church schools and hospitals;members of a suppressed nationality to voluntarily join life-threatening military activities in its defense;and industrial proletarians to take part in protest marches or sym-pathy strikes in support of their fellows.Identification with one’s own group,sect,or community can be a powerful motivational force.Coleman refers to extreme forms of this mechanism as “zeal”and defines them as an effective an-tidote to free-riding by others in collective movements (Coleman 1990,pp.273–82;Portes &Sensenbrenner 1993).The final source of social capital finds its classical roots in Durkheim’s ([1893]1984)theory of social integration and the sanctioning capacity of group rituals.As in the case of reciprocity exchanges,the motivation of donors of socially mediated gifts is instrumental,but in this case,the expectation of re-payment is not based on knowledge of the recipient,but on the insertion of both actors in a common social structure.The embedding of a transaction into suchstructure has two consequences.First,the donor’s returns may come not8PORTESFigure 1Actual and potential gains and losses in transactions mediated by social capitalA n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y S w i s s A c a d e m i c L i b r a r y C o n s o r t i a o n 03/24/09. F o r p e r s o n a l u s e o n l y .directly from the recipient but from the collectivity as a whole in the form of status,honor,or approval.Second,the collectivity itself acts as guarantor that whatever debts are incurred will be repaid.As an example of the first consequence,a member of an ethnic group may endow a scholarship for young co-ethnic students,thereby expecting not re-payment from recipients but rather approval and status in the collectivity.The students’social capital is not contingent on direct knowledge of their benefac-tor,but on membership in the same group.As an example of the second effect,a banker may extend a loan without collateral to a member of the same relig-ious community in full expectation of repayment because of the threat of com-munity sanctions and ostracism.In other words,trust exists in this situation precisely because obligations are enforceable,not through recourse to law or violence but through the power of the community.In practice,these two effects of enforceable trust are commonly mixed,as when someone extends a favor to a fellow member in expectation of both guaranteed repayment and group approval.As a source of social capital,en-forceable trust is hence appropriable by both donors and recipients:For recipi-ents,it obviously facilitates access to resources;for donors,it yields approval and expedites transactions because it ensures against malfeasance.No lawyer need apply for business transactions underwritten by this source of social capi-tal.The left side of Figure 1summarizes the discussion in this section.Keeping these distinctions in mind is important to avoid confusing consummatory and instrumental motivations or mixing simple dyadic exchanges with those em-bedded in larger social structures that guarantee their predictability and course.Effects of Social Capital:Recent Research Just as the sources of social capital are plural so are its consequences.The em-pirical literature includes applications of the concept as a predictor of,among others,school attrition and academic performance,children’s intellectual de-velopment,sources of employment and occupational attainment,juvenile de-linquency and its prevention,and immigrant and ethnic enterprise.3Diversity of effects goes beyond the broad set of specific dependent variables to which social capital has been applied to encompass,in addition,the character and meaning of the expected consequences.A review of the literature makes it pos-sible to distinguish three basic functions of social capital,applicable in a vari-ety of contexts:(a )as a source of social control;(b )as a source of family sup-port;(c )as a source of benefits through extrafamilial networks.SOCIAL CAPITAL:ORIGINS AND APPLICATIONS 93The following review does not aim at an exhaustive coverage of the empirical literature.That task has been rendered obsolete by the advent of computerized topical searches.My purpose instead is to document the principal types of application of the concept in the literature and to highlight their interrelationships.A n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y S w i s s A c a d e m i c L i b r a r y C o n s o r t i a o n 03/24/09. F o r p e r s o n a l u s e o n l y .As examples of the first function,we find a series of studies that focus on rule enforcement.The social capital created by tight community networks is useful to parents,teachers,and police authorities as they seek to maintain dis-cipline and promote compliance among those under their charge.Sources of this type of social capital are commonly found in bounded solidarity and en-forceable trust,and its main result is to render formal or overt controls unnec-essary.The process is exemplified by Zhou &Bankston’s study of the tightly knit Vietnamese community of New Orleans:Both parents and children are constantly observed as under a “Vietnamese microscope.”If a child flunks out or drops out of a school,or if a boy falls into a gang or a girl becomes pregnant without getting married,he or she brings shame not only to himself or herself but also to the family.(Zhou &Bankston 1996,p.207)The same function is apparent in Hagan et al’s (1995)analysis of right-wing extremism among East German beling right-wing extremism a sub-terranean tradition in German society,these authors seek to explain the rise of that ideology,commonly accompanied by anomic wealth aspirations among German adolescents.These tendencies are particularly strong among those from the formerly communist eastern states.That trend is explained as the joint outcome of the removal of social controls (low social capital),coupled with the long deprivations endured by East Germans.Incorporation into the West has brought about new uncertainties and the loosening of social integration,thus allowing German subterranean cultural traditions to re-emerge.Social control is also the focus of several earlier essays by Coleman,who laments the disappearance of those informal family and community structures that produced this type of social capital;Coleman calls for the creation of for-mal institutions to take their place.This was the thrust of Coleman’s 1992presidential address to the American Sociological Association,in which he traced the decline of “primordial”institutions based on the family and their re-placement by purposively constructed organizations.In his view,modern soci-ology’s task is to guide this process of social engineering that will substitute obsolete forms of control based on primordial ties with rationally devised ma-terial and status incentives (Coleman 1988b,1993).The function of social capital for social control is also evident whenever the concept is discussed in conjunction with the law (Smart 1993,Weede 1992).It is as well the central focus when it is defined as a property of collectivities such as cities or nations.This latter approach,associated mainly with the writings of political scientists,is discussed in a following section.The influence of Coleman’s writings is also clear in the second function of social capital,namely as a source of parental and kin support.Intact families and those where one parent has the primary task of rearing children possess 10PORTESA n n u . R e v . S o c i o l . 1998.24:1-24. D o w n l o a d e d f r o m a r j o u r n a l s .a n n u a l r e v i e w s .o r g b y S w i s s A c a d e m i c L i b r a r y C o n s o r t i a o n 03/24/09. F o r p e r s o n a l u s e o n l y .。
Evidence on the Trade-Off between Real Activities Manipulation and Accrual-Based Earnings Management
Evidence on the trade-off between real activities manipulation and accrual-based earningsmanagementAmy Y. ZangThe Hong Kong University of Science and TechnologyAbstract: I study whether managers use real activities manipulation and accrual-based earnings management as substitutes in managing earnings. I find that managers trade off the two earnings management methods based on their relative costs and that managers adjust the level of accrual-based earnings management according to the level of real activities manipulation realized. Using an empirical model that incorporates the costs associated with the two earnings management methods and captures managers’ sequential decisions, I document large sample evidence consistent with managers using real activities manipulation and accrual-based earnings management as substitutes.Keywords:real activities manipulation, accrual-based earnings management, trade-off Data Availability:Data are available from public sources indicated in the text.I am grateful for the guidance from my dissertation committee members, Jennifer Francis (chair), Qi Chen, Dhananjay Nanda, Per Olsson and Han Hong. I am also grateful for the suggestions and guidance received from Steven Kachelmeier (senior editor), Dan Dhaliwal and two anonymous reviewers. I thank Allen Huang, Moshe Bareket, Yvonne Lu, Shiva Rajgopal, Mohan Venkatachalam and Jerry Zimmerman for helpful comments. I appreciate the comments from the workshop participants at Duke University, University of Notre Dame, University of Utah, University of Arizona, University of Texas at Dallas, Dartmouth College, University of Oregon, Georgetown University, University of Rochester, Washington University in St. Louis and the HKUST. I gratefully acknowledge the financial support from the Fuqua School of Business at Duke University, the Deloitte Foundation, University of Rochester and the HKUST. Errors and omissions are my responsibility.I.INTRODUCTIONI study how firms trade off two earnings management strategies, real activities manipulation and accrual-based earnings management, using a large sample of firms over 1987–2008. Prior studies have shown evidence of firms altering real activities to manage earnings (e.g., Roychowdhury 2006; Graham et al. 2005) and evidence that firms make choices between the two earnings management strategies (Cohen et al. 2008; Cohen and Zarowin 2010; Badertscher 2011). My study extends research on the trade-off between real activities manipulation and accrual-based earnings management by documenting a set of variables that explain the costs of both real and accrual earnings management. I provide evidence for the trade-off decision as a function of the relative costs of the two activities and show that there is direct substitution between them after the fiscal year end due to their sequential nature.Real activities manipulation is a purposeful action to alter reported earnings in a particular direction, which is achieved by changing the timing or structuring of an operation, investment or financing transaction, and which has suboptimal business consequences. The idea that firms engage in real activities manipulation is supported by the survey evidence in Graham et al. (2005).1 They report that 80 percent of surveyed CFOs stated that, in order to deliver earnings, they would decrease research and development (R&D), advertising and maintenance expenditures, while 55 percent said they would postpone a new project, both of which are real activities manipulation.1 In particular, Graham et al. (2005) note that: “The opinion of many of the CFOs is that every companywould/should take actions such of these [real activities manipulation] to deliver earnings, as long as the real sacrifices are not too large and as long as the actions are within GAAP.” Graham et al. further conjecture that CFOs’ greater emphasis on real activities manipulation rather than accrual-based earnings management may be due to their reluctance to admit to accounting-based earnings management in the aftermath of the Enron and Worldcom accounting scandals.Unlike real activities manipulation, which alters the execution of a real transaction taking place during the fiscal year, accrual-based earnings management is achieved by changing the accounting methods or estimates used when presenting a given transaction in the financial statements. For example, changing the depreciation method for fixed assets and the estimate for provision for doubtful accounts can bias reported earnings in a particular direction without changing the underlying transactions.The focus of this study is on how managers trade off real activities manipulation and accrual-based earnings management. This question is important for two reasons. First, as mentioned by Fields et al. (2001), examining only one earnings management technique at a time cannot explain the overall effect of earnings management activities. In particular, if managers use real activities manipulation and accrual-based earnings management as substitutes for each other, examining either type of earnings management activities in isolation cannot lead to definitive conclusions. Second, by studying how managers trade off these two strategies, this study sheds light on the economic implications of accounting choices; that is, whether the costs that managers bear for manipulating accruals affect their decisions about real activities manipulation. As such, the question has implications about whether enhancing SEC scrutiny or reducing accounting flexibility in GAAP, for example, might increase the levels of real activities manipulation engaged in by firms.I start by analyzing the implications for managers’ trade-off decisions due to the different costs and timing of the two earnings management strategies. First, because both are costly activities, firms trade off real activities manipulation versus accrual-based earnings management based on their relative costliness. That is, when one activity is relatively more costly, firms engage in more of the other. Because firms face different costs and constraints for the twoearnings management approaches, they show differing abilities to use the two strategies. Second, real activities manipulation must occur during the fiscal year and is realized by the fiscal year end, after which managers still have the chance to adjust the level of accrual-based earnings management. This timing difference implies that managers would adjust the latter based on the outcome of real activities manipulation. Hence, there is also a direct, substitutive relation between the two: if real activities manipulation turns out to be unexpectedly high (low), managers will decrease (increase) the amount of accrual-based earnings management they carry out.Following prior studies, I examine real activities manipulation through overproduction and cutting discretionary expenditures (Roychowdhury 2006; Cohen et al. 2008; Cohen and Zarowin 2010). I test the hypotheses using a sample of firms that are likely to have managed earnings. As suggested by prior research, earnings management is likely to occur when firms just beat/meet an important earnings benchmark (Burgstahler and Dichev 1997; DeGeorge et al. 1999). Using a sample containing more than 6,500 earnings management suspect firm-years over the period 1987–2008, I show the empirical results that real activities manipulation is constrained by firms’ competitive status in the industry, financial health, scrutiny from institutional investors, and the immediate tax consequences of manipulation. The results also show that accrual-based earnings management is constrained by the presence of high-quality auditors; heightened scrutiny of accounting practice after the passage of the Sarbanes-Oxley Act (SOX); and firms’ accounting flexibility, as determined by their accounting choices in prior periods and the length of their operating cycles. I find significant positive relations between the level of real activities manipulation and the costs associated with accrual-based earnings management, and also between the level of accrual-based earnings management and the costs associated with realactivities manipulation, supporting the hypothesis that managers trade off the two approaches according to their relative costliness. There is a significant and negative relation between the level of accrual-based earnings management and the amount of unexpected real activities manipulation, consistent with the hypothesis that managers “fine-tune” accruals after the fiscal year end based on the realized real activities manipulation. Additional Hausman tests show results consistent with the decision of real activities manipulation preceding the decision of accrual-based earnings management.Two recent studies have examined the trade-off between real activities manipulation and accrual-based earnings management. Cohen et al. (2008) document that, after the passage of SOX, the level of accrual-based earnings management declines, while the level of real activities manipulation increases, consistent with firms switching from the former to the latter as a result of the post-SOX heightened scrutiny of accounting practice. Cohen and Zarowin (2010) show that firms engage in both forms of earnings management in the years of a seasoned equity offering (SEO). They show further that the tendency for SEO firms to use real activities manipulation is positively correlated with the costs of accrual-based earnings management in these firms.2 Compared to prior studies, this study contributes to the earnings management literature by providing a more complete picture of how managers trade off real activities manipulation and accrual-based earnings management. First, it documents the trade-off in a more general setting by using a sample of firms that are likely to have managed earnings to beat/meet various earnings targets. The evidence for the trade-off decisions discussed in this study does not depend on a specific period (such as around the passage of SOX, as in Cohen et al. 2008) or a significant corporate event (such as a SEO, as in Cohen and Zarowin 2010).2 Cohen and Zarowin (2010) do not examine how accrual-based earnings management for SEO firms varies based on the costs of real and accrual earnings management.Second, to my knowledge, mine is the first study to identify a set of costs for real activities manipulation and to examine their impact on both real and accrual earnings management activities. Prior studies (Cohen et al. 2008; Cohen and Zarowin 2010) only examine the costs of accrual-based earnings management. By including the costs of real activities manipulation, this study provides evidence for the trade-off as a function of the relative costs of the two approaches. That is, the level of each earnings management activity decreases with its own costs and increases with the costs of the other. In this way, I show that firms prefer different earnings management strategies in a predictive manner, depending on their operational and accounting environment.Third, I consider the sequential nature of the two earnings management strategies. Most prior studies on multiple accounting and/or economic choices implicitly assume that managers decide on multiple choices simultaneously without considering the sequential decision process as an alternative process (Beatty et al. 1995; Hunt et al. 1996; Gaver and Paterson 1999; Barton 2001; Pincus and Rajgopal 2002; Cohen et al. 2008; Cohen and Zarowin 2010). In contrast, my empirical model explicitly considers the implication of the difference in timing between the two earnings management approaches. Because real activities manipulation has to occur during the fiscal year, but accrual manipulation can occur after the fiscal year end, managers can adjust the extent of the latter based on the realized outcomes of the former. I show that, unlike the trade-off during the fiscal year, which is based on the relative costliness of the two strategies, there is a direct substitution between the two approaches at year end when real activities manipulation is realized. Unexpectedly high (low) real activities manipulation realized is directly offset by a lower (higher) amount of accrual earnings management.Section II reviews relevant prior studies. Section III develops the hypotheses. Section IV describes the research design, measurement of real activities manipulation, accrual-based earnings management and independent variables. Section V reports sample selection and empirical results. Section VI concludes and discusses the implications of my results.II.RELATED LITERATUREThe extensive literature on earnings management largely focuses on accrual-based earnings management (reviewed by Schipper 1989; Healy and Wahlen 1999; Fields et al. 2001). A smaller stream of literature investigates the possibility that managers manipulate real transactions to distort earnings. Many such studies examine managerial discretion over R&D expenditures (Baber et al. 1991; Dechow and Sloan 1991; Bushee 1998; Cheng 2004). Other types of real activities manipulation that have been explored include cutting advertising expenditures (Cohen et al. 2010), stock repurchases (Hribar et al. 2006), sales of profitable assets (Herrmann et al. 2003; Bartov 1993), sales price reductions (Jackson and Wilcox 2000), derivative hedging (Barton 2001; Pincus and Rajgopal 2002), debt-equity swaps (Hand 1989), and securitization (Dechow and Shakespeare 2009).The prevalence of real activities manipulation as an earnings management tool was not well understood until recent years. Graham et al. (2005) survey more than 400 executives and document the widespread use of real activities manipulation. Eighty percent of the CFOs in their survey stated that, in order to meet an earnings target, they would decrease expenditure on R&D, advertising and maintenance, while 55 percent said they would postpone a new project, even if such delay caused a small loss in firm value. Consistent with this survey, Roychowdhury (2006) documents large-sample evidence suggesting that managers avoid reporting annual losses ormissing analyst forecasts by manipulating sales, reducing discretionary expenditures, and overproducing inventory to decrease the cost of goods sold, all of which are deviations from otherwise optimal operational decisions, with the intention of biasing earnings upward.Recent research has started to examine the consequence of real activities manipulation. Gunny (2010) finds that firms that just meet earnings benchmarks by engaging in real activities manipulation have better operating performance in the subsequent three years than do firms that do not engage in real activities manipulation and miss or just meet earnings benchmarks. Bhojraj et al. (2009), on the other hand, show that firms that beat analyst forecasts by using real and accrual earnings management have worse operating performance and stock market performancein the subsequent three years than firms that miss analyst forecasts without earnings management.Most previous research on earnings management examines only one earnings management tool in settings where earnings management is likely to occur (e.g., Healy 1985; Dechow and Sloan 1991; Roychowdhury 2006). However, given the portfolio of earnings management strategies, managers probably use multiple techniques at the same time. A few prior studies (Beatty et al. 1995; Hunt et al. 1996; Gaver and Paterson 1999; Barton 2001; Pincus and Rajgopal 2002; Cohen et al. 2008; Cohen and Zarowin 2010; Badertscher 2011) examine how managers use multiple accounting and operating measures to achieve one or more goals.Beatty et al. (1995) study a sample of 148 commercial banks. They identify two accrual accounts (loan loss provisions and loan charge-offs) and three operating transactions (pension settlement transactions, miscellaneous gains and losses due to asset sales, and issuance of new securities) that these banks can adjust to achieve three goals (optimal primary capital, reported earnings and taxable income levels). The authors construct a simultaneous equation system, in which the banks minimize the sum of the deviations from the three goals and from the optimallevels of the five discretionary accounts.3 They find evidence that some, but not all, of the discretionary accounts (including both accounting choices and operating transactions) are adjusted jointly for some of the objectives identified.Barton (2001) and Pincus and Rajgopal (2002) study how firms manage earning volatility using a sample of Fortune 500, and oil and gas, firms respectively. Both studies use simultaneous equation systems, in which derivative hedging and accrual management are simultaneously determined to manage earnings volatility. Barton (2001) suggests that the two activities are used as substitutes, as evidenced by the negative relation between the two after controlling for the desired level of earnings volatility. Pincus and Rajgopal (2002) find similar negative relation, but only in the fourth quarter.There are two limitations in the approach taken by the above studies. First, in the empirical tests, they assume that the costs of adjusting discretionary accounts are constant across all firms and hence do not generate predictions or incorporate empirical proxies for the costs. In other words, they do not consider that discretion in some accounts is more costly to adjust for some firms. Hence, these studies fail to consider the trade-off among different tools due to their relative costs. Second, they assume all decisions are made simultaneously. If some decisions are made before others, this assumption can lead to misspecification in their equation system.Badertscher (2011) examines overvaluation as an incentive for earnings management. He finds that during the sustained period of overvaluation, managers use accrual earnings management in early years, real activities manipulation in later years, and non-GAAP earnings management as a last resort. He claims that the duration of overvaluation is an important determinant in managers’ choice of earnings management approaches, but he does not model the3Hunt et al. (1996) and Gaver and Paterson (1999) follow Beatty et al. (1995) and construct similar simultaneous equation systems.trade-off between real activities manipulation and accrual-based earnings management based on their relative costliness, nor does his study examine the implication of the sequential nature of the two activities during the year.Two recent studies examine the impact of the costs of accrual-based earnings management on the choice of earnings management strategies. Cohen et al. (2008) show that, on average, accrual-based earnings management declines, but real activities manipulation increases, after the passage of SOX. They focus on one cost of accrual-based earnings management, namely the heightened post-SOX scrutiny of accounting practice, and its impact on the levels of real and accrual earnings management. Using a sample of SEO firms, Cohen and Zarowin (2010) examine several costs of accrual-based earnings management and show that they are positively related to the tendency to use real activities manipulation in the year of a SEO. Neither study examines the costs of real activities manipulation or considers the sequential nature of the two strategies. Hence, they do not show the trade-off decision as a function of the relative costs of the two strategies or the direct substitution between the two after the fiscal year end.III.HYPOTHESES DEVELOPMENTConsistent with prior research on multiple earnings management strategies, I predict that managers use real activities manipulation and accrual-based earnings management as substitutes to achieve the desired earnings targets. Unlike prior research, however, I investigate the differences in the costs and timing of real activities manipulation and accrual-based earnings management, and their implications for managers’ trade-off decisions.Both real activities manipulation and accrual-based earnings management are costly activities. Firms are likely to face different levels of constraints for each strategy, which will leadto varying abilities to use them. A manager’s trade-off decision, therefore, depends on the relative costliness of the two earnings management methods, which is in turn determined by the firm’s operational and accounting environment. That is, given the desired level of earnings, when discretion is more constrained for one earnings management tool, the manager will make more use of the other. This expectation can be expressed as the following hypothesis: H1: Other things being equal, the relative degree of accrual-based earnings management vis-à-vis real activities manipulation depends on the relative costs of each action.Accrual-based earnings management is constrained by scrutiny from outsiders and the available accounting flexibility. For example, a manager might find it harder to convince a high-quality auditor of his/her aggressive accounting estimates than a low-quality auditor. A manager might also feel that accrual-based earnings management is more likely to be detected when regulators heighten scrutiny of firms’ accounting practice. Other than scrutiny from outsiders, accrual-based earnings management is constrained by the flexibility within firms’ accounting systems. Firms that are running out of such flexibility due to, for example, their having made aggressive accounting assumptions in the previous periods, face an increasingly high risk of being detected by auditors and violating GAAP with more accrual-based earnings management. Hence, I formulate the following two subsidiary hypotheses to H1:H1a: Other things being equal, firms facing greater scrutiny from auditors and regulators have a higher level of real activities manipulation.H1b: Other things being equal, firms with lower accounting flexibility have a higher level of real activities manipulation.Real activities manipulation, as a departure from optimal operational decisions, is unlikely to increase firms’ long-term value. Some managers might find it particularly costly because theirfirms face intense competition in the industry. Within an industry, firms are likely to face various levels of competition and, therefore, are under different amounts of pressure when deviating from optimal business strategies. Management research (as reviewed by Woo 1983) shows that market leaders enjoy more competitive advantages than do followers, due to their greater cumulative experience, ability to benefit from economies of scale, bargaining power with suppliers and customers, attention from investors, and influence on their competitors. Therefore, managers in market-leader firms may perceive real activities manipulation as less costly because the erosion to their competitive advantage is relatively small. Hence, I predict the following: H1c: Other things being equal, firms without market-leader status have a higher level of accrual-based earnings management.For a firm in poor financial health, the marginal cost of deviating from optimal business strategies is likely to be high. In this case, managers might perceive real activities manipulation as relatively costly because their primary goal is to improve operations. This view is supported by the survey evidence documented by Graham et al. (2005), who find that CFOs admit that if the company is in a “negative tailspin,” managers’ efforts to survive will dominate their reporting concerns. This reasoning leads to the following subsidiary hypothesis to H1: H1d: Other things being equal, firms with poor financial health have a higher level of accrual-based earnings management.Managers might find it difficult to manipulate real activities when their operation is being monitored closely by institutional investors. Prior studies suggest that institutional investors play a monitoring role in reducing real activities manipulation.4 Bushee (1998) finds that, when4 However, there is also evidence that “transient” institutions, or those with high portfolio turnover and highly diversified portfolio holdings, increase managerial myopic behavior (e.g., Porter 1992; Bushee 1998; Bushee 2001). In this study, I focus on the average effect of institutional ownership on firms’ earnings management activities without looking into the investment horizon of different institutions.institutional ownership is high, firms are less likely to cut R&D expenditure to avoid a decline in earnings. Roychowdhury (2006) also finds a negative relation between institutional ownership and real activities manipulation to avoid losses. Unlike accrual-based earnings management, real activities manipulation has real economic consequences for firms’ long-term value. Institutional investors, being more sophisticated and informed than other investors, are likely to have a better understanding of the long-term implication of firms’ operating decisions, leading to more effort to monitor and curtail real activities manipulation than accrual-based earnings management, as predicted in the following subsidiary hypothesis:H1e: Other things being equal, firms with higher institutional ownership have a higher level of accrual-based earnings management.Real activities manipulation is also costly due to tax incentives. It might be subject to a higher level of book-tax conformity than accrual-based earnings management, because the former has a direct cash flow effect in the current period, while the latter does not. Specifically, when firms increase book income by cutting discretionary expenditures or by overproducing inventory, they also increase taxable income and incur higher tax costs in the current period.5 In contrast, management of many accrual accounts increases book income without current-period tax consequences. For example, increasing the estimated useful lives of long-term assets, decreasing write-downs for impaired assets, recognizing unearned revenue aggressively, and decreasing bad debt expense can all increase book income without necessarily increasing current-year taxable income. Therefore, for firms with higher marginal tax rates, the net present value of the tax costs associated with real activities manipulation is likely to be higher than that of accrual-based earnings management, leading to the following prediction:5Other types of real activities manipulation, such as increasing sales by discounts and price cuts, and sale of long-term assets, are also book-tax conforming earnings management.H1f: Other things being equal, firms with higher marginal tax rates have a higher level of accrual-based earnings management.Another difference between the two earnings management strategies that will influence managers’ trade-off decisions is their different timing. H1 predicts that the two earnings management strategies are jointly determined and the trade-off depends on their relative costliness. However, a joint decision does not imply a simultaneous decision. Because real activities manipulation changes the timing and/or structuring of business transactions, such decisions and activities have to take place during the fiscal year. Shortly after the year end, the outcome of the real activities manipulation is revealed, and managers can no longer engage in it. Note that, when a manager alters real business decisions to manage earnings, s/he does not have perfect control over the exact amount of the real activities manipulation attained. For example, a pharmaceutical company cuts current-period R&D expenditure by postponing or cancelling development of a certain drug. This real decision can include a hiring freeze and shutting down the research site. The manager may be able to make a rough estimate of the dollar amount of the impact on R&D expenditure from these decisions, but s/he does not have perfect information about it.6 Therefore, managers face uncertainty when they execute real activities manipulation. After the fiscal year end, the realized amount of the real activities manipulation could be higher or lower than the amount originally anticipated.On the other hand, after the fiscal year end but before the earnings announcement date, managers can still adjust the accruals by changing the accounting estimates or methods. In addition, unlike real activities manipulation, which distorts earnings by executing transactions6 Another example is reducing travelling expenditures by requiring employees to fly economy class instead of allowing them to fly business class. This change could be suboptimal because employees might reduce the number of visit they make to important clients or because employees’ morale might be adversely impacted, leading to greater turnover. The manager cannot know for certain the exact amount of SG&A being cut, as s/he does not know the number of business trips taken by employees during the year.。
Hotelling模型
当 a=1-b 时,两个商店位于同一位置,我们走到另一个极端:
* * p1 (a,1 a) p2 (a,1 a) c
课后习题:P77 NO.7 (产品有差异时的价格竞争)现在假设两个企业的产品并不完全相同,企业 1 的需求函数为 q1 ( p1 , p2 ) a p1 p2 , 企业 2 的需求函数为 q2 ( p1 , p2 ) a p2 p1 。 求两个企业同时选择价格时的纳什均衡。
豪泰林(Hotelling)价格竞争模型 在古诺模型中 ,产品是同质的. 在这个假设下 ,如果企业的竞争战略是价格而 不是产量, 伯特兰德证明,即使只有两个企业 ,在均衡情况下 ,价格等于边际成本 , 企业的利润为零,与完全竞争市场均衡一样.这便是所谓的伯特兰德悖论. 解开这个悖论的办法之一是引入产品的差异性.如果不同企业生产的产品是 有差异的,替代弹性就不会是无限的,此时消费者对不同企业的产品有着不同的偏 好,价格不是他们感兴趣的唯一变量. 在存在产品差异的情况下,均衡价格不会等 于边际成本,垄断性提高. 产品差异有多种形式.我们现在考虑一种特殊的差异,即空间上的差异,这就是 经典的豪泰林模型 .在豪泰林模型中, 产品在物质性能上是相同的,但在空间位置 上有差异.因为不同位置上的消费者支付不同的运输成本,他们关心的是价格与运 输成本之和,而不单是价格.假定有一个长度为 1 的线性城市,消费者均匀地分布在 [0,1]区间里 ,分布密度为 1.假定有两个商店 , 分别位于城市的两端, 商店 1 在 x=0, 商店 2 在 x=1,出售物质性能相同的产品.每个商品提供单位产品的成本为 c,消费 者购买商品的旅行成本与离商店成比例,单位距离的成本为 t.这样,住在 x 的消费 者如果在商店 1 采购,要花费 tx 的旅行成本;如果在商店 2 采购,要花费 t(1-x).假定 消费者具有单位需求,即或者消费 1 个单位或者消费 0 个单位.消费者从消费中得 到的消费剩余为 s .
Hotelling-model以及两个扩展和多党竞选模型解析
基本思路: 假定两家厂商生产的同种产品具有质量差异, 及消费者对产品质量偏好服从 [d,e]
区间上的均匀分布,研究二次运输成本下两家厂商的选址和价格竞争的二阶段完全 信息动态博弈问题。 假设: (1)长度为1的“ 线性城市”; (2)交通费用为距离的二次函数,单位交通费用为t (3)消费者对产品质量的偏好 在 [d,e ]区间均匀分布,
w
x1 x2 m或x1 x2 m x1 = x2 = m
m x1 x2或m x1 x2
• 当x2 < m,党派1的最优策略x1为大于x2,小于2m-x2之间的任意数。 • 当x2 = m时,那么党派1的最优策略x1=x2。 • 当x2 > m,党派1的最优策略x1为大于2m-x2 ,小于x2之间的任意数。
1、来源:1838“伯特兰德悖论”———解决:引入产品差异性;特殊空间上的差异。 1929哈罗德·霍特林——经典Hotelling模型。
2、基本假设: (1)产品同质。 (2)决策变量:价格。 (3)成本函数相同,且AC=MC=C0; (4)长度为1的线性市场,两个厂商,消费者均匀分布,每个消费者购买一件商品。 (5)消费者购买商品的交通成本与离商店的距离成比例,单位距离的交通成本为t。
Hotelling模型中企业选址问题——转化成跨国企业选择经营何种品牌 产品的问题。
一、单一 A品牌经营下跨国公司的均衡利润 和基本模型 中 A、 B两企业 进行竞争的结果相同:
二、单一 D品牌经营下跨国公司的均衡利润 1、 先考虑给定 的情况下 ,B企业的两种选择 : 第一种, 在 [ x ,1 ] 区间 内达到利润最大化:
启示 1 跨国公司并购我国企业后,在大多数情况下并不会选择弃
用我国民族品牌,现有民族品牌被弃用的例子,可能是因为
基于Hotelling模型的双头竞争策略分析
华中科技大学硕士学位论文基于Hotelling模型的双头竞争策略分析姓名:魏贞申请学位级别:硕士专业:概率论与数理统计指导教师:杨明20061001华中科技大学硕士学位论文摘要双寡头竞争是市场竞争的主要形式,而且双寡头竞争已经包含了博弈论的主要内容,是博弈论早期研究的起点,也是博弈论在经济生活中的主要应用。
在现代经济竞争飞速的发展趋势下,企业的竞争策略研究面临改进和研究工具的更新,建立模型讨论新的策略效应和引进更好的评价工具不仅可以更好指导企业决策,也能更广泛地应用于实际。
本文在经典的竞争策略理论研究基础上,以博弈论、信息经济学、产业组织理论和实物期权方法为工具,集中深入地研究了基于Hotelling模型下的,可以根据顾客消费行为来进行价格歧视的企业定价策略和投资决策,并就不同的市场进行了分别的探讨。
本文的主要研究内容为:一、运用博弈论和产业组织理论知识分析了在能识别顾客并进行价格歧视的线性两期寡头市场中切换成本对公司均衡价格和对公司提供的折扣大小的影响,并简要讨论了公司理性与均衡价格的关系。
二、将两期模型扩展为公司生命无限期,顾客的生命期为两期的动态模型。
用博弈方法得出了两公司的均衡价格,并考察了作为状态变量的市场份额的动态演变过程和顾客的理性对公司的均衡价格的影响。
还分析了公司理性对于市场份额的收敛的作用。
三、将离散模型扩展成为连续时间模型,运用实物期权方法站在后进入公司的立场分别在市场是线性的和非线性的情况下评价了公司的投资决策。
并且通过图形描述了公司提供给新顾客的折扣对投资阈值的影响。
关键词:双寡头竞争;Hotelling模型;切换成本;均衡价格;折现因子华中科技大学硕士学位论文AbstractDuopoly competition, as the important form of market competition, contains the main contents of game theory. It is not only the jumping-off point of game theory but also the primary application. With the modern economic competition developing, the competition strategy needs amelioration and update as well as the research tools. The introduction of better approaches can help the corporation obtain preferable strategy. It can also be applied wider in practice.The paper, on the basis of the classical theory about competition strategy, concentrates on the pricing and investment strategy in different markets, in virtue of the game theory, information economics, industrial organization theory and real option approach.The main contents and conclusions are summarized as follows: First, company’s pricing strategy is studied in a two-period duopoly model with business practice of offering discounts to new customers in markets with switching costs, and equilibrium result shows how the existence of switching costs affects the price in both period. Then, such a situation is considered in which there is a duopoly with infinitely lived firms and overlapping generations of consumers where both firms can set different price for their previous customers and their new customers, and equilibrium results in two cases and the effect of the consumer patience on the price are analyzed. At the same time, the evolution trend of market shares affected by firm patience is obtained. Finally, Taking the place of discrete model into a continuous time one, the investment method in real option theory is applied to evaluate the new entrant’s strategy to invest respectively in linear market and non-linear one. Furthermore, the effect of the discount offered to new consumer on investment threshold has been displayed through figures.Key Words: duopoly competition; Hotelling model; switching costs; equilibrium price;discount factor独创性声明本人声明所呈交的学位论文是我个人在导师指导下进行的研究工作及取得的研究成果。
Dixit-Stiglitz1977垄断竞争模型工作论文
1
89
0521819911C04
June 20, 2003
18:10ຫໍສະໝຸດ Char Count= 090
Avinash K. Dixit and Joseph E. Stiglitz
and how much of each to produce, may differ. There are a number of effects at work. Whether a commodity is produced depends on revenues relative to total costs. Social profitability depends, on the other hand, on a number of factors. In deciding whether to produce a commodity, the government would look not only at the profitability of the project, but also at the consumer surplus (the profitability it could attain if it were acting as a completely discriminating monopolist), and the effect on other industries and sectors (on the consumer surplus, profitability and viability). The effects on other sectors result both from substitution and income effects. The whole problem hinges crucially on the existence of economies of scale. In their absence, it would be possible to produce infinitesimal amounts of every conceivable product that might be desired, without any additional resource cost. Private and social profitability would coincide given the other conventional assumptions, and the repercussions on other sectors would become purely pecuniary externalities. With nonconvexities, however, we shall see that all these considerations are altered. Moreover, given economies of scale in the relevant range of output, market realisation of the ‘unconstrained’ or first-best optimum, i.e. one subject to constraints of resource availability and technology alone, requires pricing below average cost, with lump-sum transfers to firms to cover losses. The conceptual and practical difficulties of doing so are clearly formidable. It would therefore appear that perhaps a more appropriate notion of optimality is a constrained one, where each firm must operate without making a loss. The government may pursue conventional regulatory policies, or combinations of excise and franchise taxes and subsidies, but the important restriction is that lump-sum subsidies are not possible. The permissible output and price configurations in such an optimum reflect the same constraints as the ones in the Chamberlinian equilibrium. The two solutions can still differ because of differences implicit in the objective functions. Consider first the manner in which the desirability of variety can enter into the model. Some such notion is already implicit in the convexity of indifference surfaces of a conventional utility function defined over quantities of all the varieties that might exist. Thus, a person who might be indifferent between the combinations of quantities (1, 0) and (0, 1) of two product types would prefer the combination ( 1 , 1 ) to either extreme. 2 2 If this is the only relevant consideration, we shall show that in one central case the Chamberlinian equilibrium and the constrained optimum coincide. In the same case, we shall also show that the first-best optimum has firms of the same size as in the other two solutions, and a greater number
Hotelling模型的拓展研究及博弈分析
① Torole,J.The Theory of Industrial Organization[M].中国人民大学出版社,1998. 1
Hotelling 模型的拓展研究及博弈分析
等。豪泰林(Hotelling)(1929)①是最早研究产品差异化情况下的价格竞争问题的,1929
年,Hotelling在《Economic Journal》发表“竞争的稳定性”一文首次提出基于线性空间
关键词:Hotelling 模型,博弈分析,成本差异,歧视定价,网络外部性
I
ABSTRACT
In real life, price competition and production competition are the main forms of firms competition in market. In short run, price changing can often be easily observed by people. Bertrand studied two oligopoly firms competing on price in the market under the assumption that the cost structure and product characteristics are rigid. We found that oligopoly firms price according to marginal cost as the same with fully competitive firms. That is, firms’ number does not matter for their pricing. This is the famous Bertrand paradox. But a crucial assumption of Bertrand paradox is all the firms producing the same type of products that we call it product with indifference and with complete-subsititution. Therefore, price is the only variable which consumers are interested in, and no one firm can raise the price above the marginal cost without losing its market share. But in reality, these assumptions are often unsatisfiable.
希尔顿收益管理英文版
Hilton WorldwideRevenue Management Standards – The Americas Updated: 2009Table of ContentsFinancial Review (7)Actual Monthly Performance (7)Systems Balancing (7)Weekly Forecast (8)Group Forecast Development Standards (8)Transient Forecast Development Standards (8)Exporting the Forecast (9)Forecast Accuracy (9)Market Conditions / Competition Analysis (10)Determining Competitive Set for STAR Reports (10)Value Assessments (10)Strengths, Weaknesses, Opportunities, Threats Analysis (SWOT) (10)Competitive Factors/ Assessment (11)Competitive Shopping Tools (11)Competitive Pressure Calendar (11)Pricing Strategy (12)Best Available Rates (BAR) (12)Seasonality (12)Length of Stay Controls (12)Price Resistance - Denials (13): Best Rate Strategy (13)SRP Placement (14)Best Rate Guarantee (14)Packages and Promotions – Administration/ Analysis (15)Premium Room Type Strategy (16)Distressed and Holiday Strategy (16)Inventory Management - (18)SRP Build and Auditing procedures (18)SRP Build- Cut-off Date Procedures (18)SRP Build- Room Types (19)SRP Build- SRP Types (19)Room Type Consistent Availability (19)53rd Week Controls (20)Overbooking (20)Hhonors- Inventory Management and reimbursement procedures (20)Rate Override (20)Validating Qualified Rates (20)Night Audit Activity -Market Category Audit (21)Kiosk Configuration (21)Revenue Management Business Practices (22)Revenue Management Meeting (22)Toolkit use (22)Toolkit Use- Minimum Tool Usage (22)Checklists (23)Document Retention (23)Report Usage- Systems (23)Booking Pace (23)OnQ RM Usage- (23)Revenue Management Support (24)Negotiated Account Review (24)Loaded and Bookable in the GDS and CRS (24)Account Evaluation Standards for renewal of existing accounts (24)Hotelligence- (24)Group Revenue Management (26)RM Training for Sales Managers (26)Communication with the Sales Department (26)Inventory Management- SRP Setup (28)Inventory Management Audit Systems Delphi vs. CRS (28)Group Complimentary and Staff Rooms (29)Inventory Management- Group Block (30)Cut-off dates procedures (30)Group Pick-up Meeting (31)eEvents (32)Group Rates/Selective Sell Guideline Rates (SSG)- (33)Transient Protected- (34)Displacement Analysis- Group MCATs (34)Displacement Analysis – Permanent, Contract or Extended Stay (35)Distribution-Channel Management (36)Voice Reservations - Selling Protocol- (36)Voice Reservations-Booking Messages (36)Voice Reservations - Shop Calls (37)Voice Reservations-Training and Communication with HRCC (37)Voice Reservations-Call Volume Statistics and Conversion Reports (38)City/Convention Visitors Bureau Web Sites (CVB) (38)PiM –Property Information Manager (39)Internet- Brand Web Sites (39)Internet – Approved Web Sites (39)Internet- 3rd Party Merchant Sales (40)Internet - Tracking Standards (40)Internet- Hotwire (41)Internet- Priceline (41)Internet - Online Channel Analysis & SRP Standards/ Settings Audits (41)TRAIL (42)Staffing Guidelines- (42)Staffing Guidelines Continued (43)Succession Planning (43)Microsoft Office Products (43)OnQ RM (Revenue Management System) (43)Delphi - DMPE (44)OnQ FM -DRM (44)OnQ FM- Other RM personnel (44)Key Hotel Marketing Reports (44)RMU (44)Using the Revenue Management Standards TemplateThis template is a document Revenue Management personnel and various members of the hotel team can use to understand what the required business practice is for all Revenue Management functions. Listed below are the subject headers and definitions contained within the standards template. Any items that may not be familiar to you can be found by selecting the Revenue Management glossary by.CategoryA ll standards are grouped together into 9 sections. The sections are as follows:1. Distribution Channel Management2. Financial Review3. Forecast Process4. Group Revenue Management5. Inventory Management6. Market Conditions / Competitive Analysis7. Negotiated Account Review8. Pricing Strategy9. Revenue Management Business Practices10. Training & DevelopmentTopicEach category will contain topics for review. In all there are over 100 topics for your review.Revenue Management StandardThe standard is the required business practice for each topic as well as the party responsible for its execution.FrequencyEach Revenue management standard will have an associated timeline for completion. Some are weekly others monthly and still others (such as training) as needed.Instructions, Reports and Tools to useThis last column provides links to tools and reports in efforts to assist RM personnel and other hotel team members in standard implementation efforts. Attention: for all links defined in this document to access the web pages you must have your Internet Explorer Browser open prior to clicking on the links.CategoryTopic Revenue Management Standard Frequency Instructions, Reports and Tools to useFinancial Review ActualMonthlyPerformanceDRM to analyze actual room night, rate, revenue and key indicatorperformance by market category compared to last year, the Budget,and the Monthly Forecast. Prepare Executive Summary critiquingforecast variances and forecast accuracy if accuracy is in the redzone. This analysis should also include STAR performance anddescribe reasons for past performance and address strategies forincreasing RevPAR Index.Monthly Reports to facilitate this analysis include: the OnQ FMActuals and OnQ FM Mix of Sales Reports as compared tothe Monthly, Budget and Last Year Performance orequivalent systems where ONQ FM is not offered. MonthlySTAR report, Hotelligence Report, market demandinformation; Key Hotel Marketing Reports, Sales ProgressSummary Report, Group Plug and group booking paceinformation from Delphi.Financial Review SystemsBalancingBy the 10th of each month the following should occur:1)Balancing Delphi actuals to OnQ FM actuals - monthlyevaluation to ensure these two systems match. If they do not,the hotel has not successfully linked all groups in OnQ FM.2)DRM to ensure the Front Office balances the month to theGeneral Ledger and then the DRM to close OnQ FM.As indicated inStandardOnQ FM balancing information can be found by selecting thefollowing link:OnQ Insider >Departments >RevenueManagement >Systems. Once on the Systems page, selectthe OnQ FM User Guide under OnQ FM- Rooms. Balancingprocedures located on page 25.Category Topic Revenue Management Standard Frequency Instructions, Reports and Tools to useForecast Process Weekly ForecastThe DRM should complete an operational forecastweekly and provide to operational departments.This forecast should include month-endprojections.Weekly Reports to use are the transient booking pace reportsprepared weekly by the DRM/Revenue Analyst, competitorshop reports, Group Rooms Control Log (GRC), SRP trendreports found in OnQ FM or equivalent system where OnQFM is not offered.The 14-Day Weekly Forecast and the Transient BookingPace tools are found in Revenue Management Toolkit byfollowing the path below.OnQ Insider >Departments >Revenue Management >ToolkitForecast Process Group ForecastDevelopmentStandardsGroup Forecast - DOSM is responsible for theproduction of the Group Forecast day by day inOnQ FM and be able to provide supportingdocumentation. This includes reviewing the GroupDemand Report inside OnQ FM as well as theSales Progress Summary Report and completingthe Group Forecast Analysis Tool.Ensure thatOnQ FM and Delphi group forecasts for definitegroup bookings match. DRM is responsible forvalidation of the group forecast.Monthly Reports to use are: Post Event Reports, Group Pick-upManager or pace equivalent by group, historical plug trendanalysis, Sales Progress Report and OnQ FM DemandReports. Group shop calls and Lost Business Reportslocated in Delphi or kept manually for non-Delphi hotels canalso be reviewed.The Group Pick Up Manager Tool can be found in theRevenue Management Toolkit by following the path below.OnQ Insider >Departments >Revenue Management >Groupand Catering.Forecast Process TransientForecastDevelopmentStandardsTransient Forecast - DRM is responsible for theproduction of the transient forecast at the SRPlevel ** in OnQ FM, day by day by market category,including other and non-revenue market categoriesand provide supporting documentation. DRM tocoordinate the total hotel rooms forecast once allchanges have been communicated.** When MCAT room nights represent 5% or moreof overall transient business, SRPs that represent20% or more within that MCAT must be forecasted.Monthly Reports to use to assist in forecasting are: OnQ RM DailyDetail Merge Tool, Transient Booking Pace Report, OnQ RM-SRP evaluations, SRP Category Reports and the MonthlyForecast Comparison Report, competitive shop information,historical actuals, compression calendar, economic trendpredictions, demand analysis reports found in OnQ RM orOnQ FM, future denials and OnQ FM reports.The Transient Booking Pace and Daily Detail Merge tools arelocated in the Revenue Management Toolkit by following thepath below.OnQ Insider >Departments >Revenue Management >ToolkitCategory Topic Revenue Management Standard Frequency Instructions, Reports and Tools to useForecast Process Exporting theForecastOnce a monthly and rolling forecast have beenpublished in OnQ FM, the forecast should beexported to OnQ RM and HLBFS. If for any reasonthe forecast is changed in HLBFS (which shouldnever happen), OnQ FM must be updated so bothsystems match.Weekly/Monthly Instructions on how to export forecasts to these systems canbe found in the OnQ FM Users Guide by following the pathbelow.OnQ Insider >Departments >RevenueManagement >Systems. Once on the Systems page, selectthe OnQ FM User Guide under OnQ FM- Rooms. Then,select On FM Administration and exporting procedureslocated on page 1.Forecast Process ForecastAccuracyThe total hotel room revenue forecast should bewithin 3% accurate. Green zone = within 3%;Yellow zone = within 5% and Red Zone= more than5%.Absolute variance of actual revenue forecast vs.rolling forecast revenue taken from HLBFS andPeoplesoft from the prior month.Monthly Reports to assist in accurate forecasting include:Transient Booking Pace Reports, OnQ RM- SRP evaluations,competitive shop information, historical actuals, group plugreports, compression calendar, economic trend predictions,demand analysis reports found in OnQ RM or OnQ FM,future denials and OnQ FM reports where available.Category Topic Revenue Management Standard Frequency Instructions, Reports and Tools to useMarket Conditions / Competition Analysis DeterminingCompetitive Setfor STAR ReportsCompetitive set should be the 5-8 hotels which thecustomer would choose if our hotel were notavailable. Hotels are determined and comp set’schanged based on Market segmentation, SWOTanalysis, Value Assessment, proximity to thecustomers desired location/area attractions/airport,customer surveys and Competitive Set ValidationTool. Customer Survey samples should total atleast 100 to be statistically significant. HiltonFamily of Brands cannot represent more than 40%of total rooms in the competitive set (no more than3 hotels in the family, excluding your hotel). DRM isresponsible for facilitating this process with a cross-functional team and report findings to RMcommittee, RDRM, RDSM and AVP, especiallywhen the findings show a sister hotel in the familyof brands should be included in the competitive set.Annual or inconjunction withSupply Change(Qualitative orQuantitative)Reports to use: Infrastructure Competitive Analysis-Supplyscreen from InFocus, Value Assessments, shop callreports, all monthly and weekly available STAR reports,competition collateral, trend publications indicating newconstruction or changes, and the Competitive SetValidation Tool and SWOT Analysis found in the RevenueManagement Toolkit by following the path below.OnQ Insider >Departments >RevenueManagement >ToolkitMarket Conditions / Competition Analysis ValueAssessmentsA detailed Value Assessment by market segmentvs. competitors in each segment, (regardless ifthey are on the hotels STAR Report), is critical forunderstanding and identifying what attributescontribute to hotel’s success and what areas needto be a focus to improve RevPAR performance. Anannual physical inspection of the competitorsshould take place and DRM is responsible forfacilitating this process with the DOSM and across- functional team and report findings to RMcommittee, RDRM and RDSM. Upon completion,have Sales Dept add to InFocus.Pricing should beevaluated once peryear or inconjunction withsupply change orsignificant pricingchanges in themarket.Reports to use: Shop call reports, competition collateral,Group Lost Business Reports, customer surveyinformation, and the SWOT analysis.The Value Assessment Tool is located in the RevenueManagement Toolkit by selecting the following path.OnQ Insider >Departments >RevenueManagement >ToolkitMarket Conditions / Competition Analysis Strengths,Weaknesses,Opportunities,Threats Analysis(SWOT)Upon completion of the Value Assessment, adetailed SWOT analysis should be completed withsame cross functional team that completes theValue Assessment. This analysis is critical tounderstanding, improving,and l everaging yourcompetitive advantages and minimizing yourdisadvantages within your market.Upon completionof the ValueAssessmentReports to use: Value Assessment, shop call reports, allmonthly and weekly available STAR reports, competitioncollateral, CVB information, Group Lost Business Reports,customer survey information, and trend publicationsindicating new construction or changes.The SWOT and Value Assessment Tool can be found inthe Revenue Management Toolkit by following the pathbelow.OnQ Insider >Departments >RevenueManagement >ToolkitCategory Topic Revenue Management Standard Frequency Instructions, Reports and Tools to useMarket Conditions/ Competition Analysis CompetitiveFactors/AssessmentUnderstand the competitive set in terms offacilities, pricing, appearance on hotels site andrecommend changes for your hotel based on thisinformation.Seasonally andannually at aminimum.The Competitive Assessment/Factors Form can be foundby following the path below:OnQ Insider >Departments >RevenueManagement >Library. Double click on Library >Doubleclick on RM Resource Center >Transient Pricing.Market Conditions / Competition Analysis CompetitiveShopping ToolsHotels will subscribe to Market Vision Shop CallReports as these reports are integrated with OnQRM. Hotel to also receive training in how to usethe on-line shopping tool to query reports outsidethe reports that are emailed weekly.Review shops dailywithin the bookingwindow.Weekly- for highdemand dates anddistressed datesfor 365 days.Information regarding how to subscribe to Market VisionReports, how to request/change reports and to receivetraining for the on-line tool, can be found by following thepath below:OnQ Insider >Departments >RevenueManagement >Library. Double click on Library >Doubleclick on RM Resource Center >Transient Pricing. Go toMarket Vision Announcement and Release Notes.For information on how to set up and view shop call andexception reports in OnQ RM, follow the path below.OnQ Insider >Departments >RevenueManagement >Systems. Once on the Systems page,select the OnQ RM User Guide under OnQ RevenueManagement. Then select On RM Maintenance and go topage 5.Market Conditions / Competition Analysis CompetitivePressureCalendarEvery DRM should have a document outliningdemand drivers and daily competitive pressure fora minimum of 1 year from arrival. This shouldinclude groups at the competitive set hotels thatcould increase demand for your hotel. Forconvention hotels, this should be created for aminimum of 3 years from arrival. Placing holidays,special events and noteworthy groups in OnQ FMis optional.Completedmonthly for aminimum of a yearor as long as thehotels longestgroup bookingwindow.Reports to understand demand are: OnQ RM Reports,CVB Information, Lost Business Reports, GRC (GroupRooms Control) Log from Delphi, historical actuals fromOnQ FM and the OnQ FM Daily Transient DemandAnalysis report.Instructions on how to place events in OnQ FM can befound in the OnQ FM Users Guide by following the pathbelow.OnQ Insider >Departments >RevenueManagement >Systems. Once on the Systems page go tothe OnQ FM User Guide inside OnQ FM- Rooms. Thenselect “How Does OnQ FM work” and go to page 18.Adding Special Events in OnQ RM can be foundOnQ Insider >Departments >RevenueManagement >Systems. Once on the Systems page,select the OnQ RM User Guide under OnQ RevenueManagement. Then select On RM Maintenance and go topage 14.Category Topic Revenue Management Standard Frequency Instructions, Reports and Tools to usePricing Strategy Best AvailableRates (BAR)Nine BAR Rate Levels will exist for each season.Rate Level 0 will be the highest possible price pointand RL8 the lowest. Price points will be indicativeof rates the customers are willing to pay based onthe competitive environment. Most of the bookingswill normally occur in Rate Levels 3-5.The Best Available Rate found on each hotelsbrand website is controlled by the SRPs (DJ/DJ1)and is equal to BAR. No lower unqualified ratesshould be offered to customers. All hotels MUSThave this SRP available (DJ/DJ1 SRPs) in order tohave BAR rates available for sale on .Annually and moreoften as dictatedby yourperformancemeasurementsReports to use to validate rate strategies and view dailyrates are: OnQ RM Daily Detail Merge Tool, Market Visionshop call reports, OnQ RM recommendation screen whereall rates from Market Vision appear, transient bookingpace reports, transient demand, GRC log, competitivepressure calendar, Value Assessment and Key HotelMarketing Reports.To understand more about Hilton’s Pricing Principles andPractices can be found by following the path below:OnQ Insider >Departments >RevenueManagement >Library. Double click on Library > RMResource Center > go to Hilton Full Service PricingStrategy.Pricing Strategy SeasonalityHotels will determine their need for multipleseasons by evaluating hotel monthly and marketRevPAR. If there is a sustained RevPARdifferential (driven by occupancy or ADR) or specialevent, then multiple seasons should beestablished.Annually andadjusted moreoften as neededReports used to validate seasonality are: STAR Reports(Trend), ResMAX information, historic demand informationfrom OnQ FM, OnQ RM reports, market history andhistorical shop call information.Pricing Strategy Length of StayControlsAll hotels should utilize full pattern length of staypricing (Min/Max controls for DT Hotels)consistently in conjunction with 9 BAR to targetdifferent market segments by selling multiple rateson a single arrival date based on a guest’s lengthof stay.Daily review withinthe transientbooking windowand for periods ofhigh demand.Weekly for entire365 day windowCompetitive set shop calls should be reviewed by arrivaldate and length of stay as well as understanding marketdemand and the hotels own internal compression prior tosetting LOS controls.The BAR Pricing Length of Stay Worksheet can assist RMpersonnel in learning to use LOS controls and can befound in the Revenue Management Toolkit by following thepath below.OnQ Insider >Departments >RevenueManagement >ToolkitInformation on how to set Length of Stay Controls in OnQRM can be found in the OnQ RM User Guide by followingthe path below.OnQ Insider >Departments >RevenueManagement >Systems. Once on the Systems page,select the OnQ RM User Guide under OnQ RevenueManagement. Then select On RM Recommendations andgo to page 25.Category Topic Revenue Management Standard Frequency Instructions, Reports and Tools to usePricing Strategy Price Resistance -DenialsDRM must review the 6-week rolling average asindicated in Booking Pace Tool or ensure theyhave this data if using another tool. DRM shouldanalyze current 7-Day variance of transient roomssold compared to rate denials for 120 future daysand compare to 6-week rolling average by DOW. Ifweek over week trends indicate higher / lower priceresistance for a day of week in question, the DRMmust investigate underlying reasons . marketdemand, multi-hotel groups, competitor rates) andmake any necessary changes to maximizerevenues, including pricing changes if appropriate.Daily for bookingwindow andweekly for 365 daywindowReports to use: OnQ RM Daily Detail Merge Tool, OnQRM Alternate Daily Detail Report, OnQ RM CompetitiveException Reports, transient booking pace analysis, OnQFM Daily Transient Demand Analysis, and OnQ FM End ofMonth-Demand analysis.For further detail on using competitive exception reports,follow the path below.OnQ Insider >Departments >RevenueManagement >Systems. Once on the Systems page,select the OnQ RM User Guide under OnQ RevenueManagement. Then select On RM Recommendations andgo to page 5.The booking pace analysis and Daily Detail Merge toolsare found in the Revenue Management Toolkit by followingthe path below.OnQ Insider >Departments >RevenueManagement >ToolkitPricing Strategy : Best RateStrategyThe Best Available Rate found on each hotelsbrand website is controlled by the SRPs (DJ/DJ1)and is equal to BAR. No lower unqualified ratesshould be offered to customers. All hotels MUSThave this SRP available (DJ/DJ1 SRPs) in order tohave BAR rates available for sale on .Ongoing Reports to understand bookings are: OnQ RM SRPevaluations/SRP Category Reports, OnQ FM actuals,Market Vision shop call reports, Competitor brand sites, 3rdParty web sites, Mix of Sales through eMix On-lineChannel Analysis Tool, Booking Pattern Reports found inKey Hotel Marketing Reports.The eMix On-line Channel Tool is found in the RevenueManagement Toolkit by following the path below.OnQ Insider >Departments >RevenueManagement >ToolkitCategory Topic Revenue Management Standard Frequency Instructions, Reports and Tools to usePricing Strategy SRP PlacementSRP’s can be placed in levels that have logicalprice points with respect to the best available ratein each rate level. For example: if the BAR in RateLevel 4 is $150 and the BAR for Rate Level 5 is$125, a non-LRA negotiated account priced at$140 would be placed in Rate Level 4. However,you must consider your selling strategy withrespect to the SRPs and rates in each rate levelalong with the hotel's projected occupancy levelwhen assigning quali fied SRP’s to rate levels. Oneoption for Hilton Hotels is to use the SRP exceptionfield, which allows the SRP to stay in the rate levelthat corresponds to the Best Available Rate butfollow RL0 restrictions.Quarterly, whenBAR price pointschange and whenany SRP ratechangesReports to use: SRP Mapping screen/report in OnQ RMfor select dates; SRP evaluation/SRP Category Reportsfor Rate Levels 0-8 and all SRPs, SRP Quick List listing allrate level assignments which can be requested byemailing. For further detail on this topic, follow the pathbelow.OnQ Insider >Departments >RevenueManagement >Systems. Once on the Systems page,select the OnQ RM User Guide under OnQ RevenueManagement. Then select On RM Menu/Screens and goto page 3.Explanation of OnQ RM rate functionality can be found byfollowing the path below.OnQ Insider >Departments >RevenueManagement >Systems. Once on the Systems page,select the OnQ RM User Guide under OnQ RevenueManagement. Then select On RM Menu/Screens and goto page 5.Pricing StrategyBest Rate At no time will hotels allow rates to be sold throughany non-Hilton website or any other channelOngoing Parity with can be validated by reviewing the OnQ RMMarket Vision Shops, OnQ RM Competitive ExceptionGuarantee (including 3rd Party resellers/wholesalers, MerchantModel Websites, GDS) that are lower than thosedispl ayed in the brand’s reservation system andhotel site. This is to ensure that the consumer canalways find the lowest online rate through thebrand web sites, and to drive that online customerto our web site rather than those 3rd partychannels to which we provide wholesale rates andavailability. Reports.The Internet Pricing Tool located on the Revenue Management Toolkit can also assist DRMs when managing channels on 3rd party sites when an extranet is involved.OnQ Insider >Departments >Revenue Management >ToolkitCategory Topic Revenue Management Standard Frequency Instructions, Reports and Tools to usePricing Strategy Packages andPromotions –Administration/AnalysisAll promotions and packages must adhere to HiltonHotels Corporation Core Pricing Strategy and usethe approved SRP codes. "Rate only non-fenced"packages/promotions are not acceptable. Allpackages/ promotions must have an appropriatefence or feature. The definition of a properlyfenced qualified package or promotion is a bookingvehicle designed to generate incremental revenuewithout creating net trade down or displacement ofthe existing customer base. Hotels must completethe web based Package/Promotion form. Once thepackage/promotion has been approved by theRDRM, they will send to GDM for loading in theapplicable booking channels. All new Transientpackage rates need to be built as limit room typeand total for the Hilton brand. Must conduct pre-promotion breakeven analysis and post profitanalysis to justify/analyze cost of any promotions.Ensure the package rates reflect a roomdistribution equal to BAR.As SRP's are builtin the CRSThe Package/Promotion Form is located on the GDMwebsite and found by following the links below.OnQ Insider >Departments >Global Distribution Services.Double click on Global Distribution Services >GlobalDistribution Management > Under the options list selectGDM Forms >select your hotel brand and go to thePackage/Promo form..。
基于Hotelling模型的国有银行和股份制商业银行竞争策略研究
基于Hotelling模型的国有银行和股份制商业银行竞争策略研究作者:蔡莼莼来源:《时代经贸》2014年第07期摘要:本文研究的主要目的在于用Hotelling模型探索国有银行和股份制商业银行的竞争策略,并由线性模型拓展到更为一般的豪泰林平面模型。
关键词:国有银行股份制商业银行 hotelling模型差异化竞争1.豪泰林模型1.1 假设条件(1)假设存在一个长度为1的线性城市,该城市中有两个厂商,厂商1位于点x1=a上,厂商2位于点x2=1-b上,其中a≥0,b≥0。
其中,1-b-a≥0,即a的左侧和(1-b)的右侧分别为厂商1和厂商2的占地,a与(1-b)的中间部分(1-b-a)为两厂商的竞争区域。
(2)消费者均匀分布在该线性城市,且消费者对产品的需求完全无弹性,无论产品价格多高,消费者都会购买一单位产品。
(3)消费者购买两厂商的产品,获得的总效用相同设为U。
(4)消费者购买商品的旅行成本与距离之间存在一次正比关系,单位距离的旅行成本是t,t>0,位于x(0≤x≤1)地的消费者购买厂商i(i=1,2)的产品获得的净效用为ui=U-pi-t∣x-xi∣,(i=1,2)。
(5)两产商之间的价格差小于等于两厂商的运输成本,即∣p1-p2∣≤t,且价格不会高得以至于所有的消费者从两个厂商消费获得的净效用小于等于零。
(6)两厂商生产相同的产品,该产品的价格为pi(i=1,2),单位生产成本为零。
(7)两厂商以利润最大化为目标,进行一个两阶段的静态博弈。
第一阶段,两厂商同时确定位置;第二阶段,两厂商同时确定价格。
1.2豪泰林模型的构建不难推测,一定有位于x*处的消费者,对于两种产品的选择是无差异的,即u1=u2,则有p1+t∣x*-a∣=p2+t∣1-b-x*∣解得x*=当xx*时,位于x的消费者是厂商2的顾客,由此可得需求函数Di=即Di=设两家厂商的利润函数分别为∏1,∏2,由于生产成本为零,故利润函数为∏1=p1·D1= (1-b+a)p1- +∏2=p2·D2=2存在于银行业务间的豪泰林模型2.1 市场特征假设长期以来,空间竞争和价格竞争都是传统银行业竞争的主要策略。
toward a theory of stakeholder identification and salience (definitive dependent stakeholders)
Since Freeman (1984) published his landmark book. Strategic Management: A Stakeholder Approach, the concept oí "stakeholders" has become embedded in management scholarship and in managers' thinking. Yet, as popular as the term has become and as richly descriptive as it is, there is no agreement on what Freeman (1994) calls "The Principle oí Who or What Really Counts." That is, who (or what) are the stakeholders of the firm? And to whom (or what) do managers pay attention? The first question calls for a normaiive iheory of stakeholder identiiication, to explain logically why managers should consider certain classes oí entities as stakeholders. The second question calls for a descriptive theory oí stakeholder salience, to explain the conditions under which managers do consider certain classes of entities as stakeholders. Stakeholder theory, reviewed in this article, oífers a maddening variety oí signals on how questions oí stakeholder identification might be answered. We will see stakeholders identified as primary or secondary
The Visible Hand
Chandler’s Living History: The Visible Hand of Vertical Integration in 19th Century America Viewed Under a 21st Century Transaction Costs Economics LensMarcelo BucheliUniversity of Illinois at Urbana−Champaign, College of Business Joseph T. Mahoney Paul M. VaalerUniversity of Illinois at Urbana−Champaign, Collegeof Business University of Minnesota, Carlson School ofManagementAbstractAlfred Chandler’s recent passing is cause to review and celebrate his many contributions to business history. It also presents an opportunity to highlight links between his rich historical analyses concerning organizational and industrial innovation and contemporary management studies of the firm and industrial organization. We illustrate this point by applyingtransaction costs theory to several case studies from his 1977 masterwork narrating theemergence of vertically−integrated firms in nineteenth−century America, The Visible Hand.Vertical integration, organizational control and innovation in manufacturing at McCormick Harvester and Singer Sewing Machines, in transportation and distribution at Swift and United Fruit reflect managerial responses to classic transaction costs considerations includingcommercial relationships requiring the creation of specialized equipment and knowledge.Transaction costs analysis provides complementary historical insight on organizationalinnovation at these and other firms in the nineteenth century, and suggests when and where we might expect vertical integration strategies in emerging industries of the twenty−firstcentury. Chandler’s Visible Hand transcends business history to provide timeless insights and fundamental lessons on how innovative firms re−draw organizational boundaries andstructures for efficient and effective innovation.Published: 2007URL: /Working_Papers/papers/07−0111.pdfChandler’s Living History: The Visible Hand of Vertical Integration in 19th Century America Viewed Under a 21st Century Transaction Costs Economics Lens*Marcelo BucheliAssistant Professor of Business HistoryDepartment of Business AdministrationCollege of BusinessUniversity of Illinois at Urbana-Champaign339 Wohlers Hall1206 South Sixth StreetChampaign, IL 61820(217) 244-0208E-mail: mbucheli@Joseph T. MahoneyProfessor of Strategic ManagementDepartment of Business AdministrationCollege of BusinessUniversity of Illinois at Urbana-Champaign140C Wohlers Hall1206 South Sixth StreetChampaign, IL 61820(217) 244-8257 (telephone)E-mail: josephm@&Paul M. VaalerAssociate Professor of International BusinessStrategic Management & Organization DepartmentCarlson School of ManagementUniversity of Minnesota321 – 19th Avenue SouthMinneapolis, MN 55455(217) 333-4504E-mail: pvaaler@*Please contact Joe Mahoney with any questions regarding this paper.Chandler’s Living History: The Visible Hand of Vertical Integration in 19th Century America Viewed Under a 21st Century Transaction Costs Economics LensAbstractAlfred Chandler’s recent passing is cause to review and celebrate his many contributions to business history. It also presents an opportunity to highlight links between his rich historical analyses concerning organizational and industrial innovation and contemporary management studies of the firm and industrial organization. We illustrate this point by applying transaction costs theory to several case studies from his 1977 masterwork narrating the emergence of vertically-integrated firms in nineteenth-century America, The Visible Hand. Vertical integration, organizational control and innovation in manufacturing at McCormick Harvester and Singer Sewing Machines, in transportation and distribution at Swift and United Fruit reflect managerial responses to classic transaction costs considerations including commercial relationships requiring the creation of specialized equipment and knowledge. Transaction costs analysis provides complementary historical insight on organizational innovation at these and other firms in the nineteenth century, and suggests when and where we might expect vertical integration strategies in emerging industries of the twenty-first century. Chandler’s Visible Hand transcends business history to provide timeless insights and fundamental lessons on how innovative firms re-draw organizational boundaries and structures for efficient and effective innovation.Key words: Business history, transaction costs theory, vertical integration, contractingINTRODUCTIONThe enduring legacy of Alfred D. Chandler (1918-2007) is first-rate scholarship that integrates the “logic-in-use” of business decision makers with “reconstructed logic” (Kaplan, 1964) taken from theoretical frameworks in economics, management studies, organization theory, and sociology. Porter (1992) maintains that Chandler’s influence in business history has been enormous and that virtually every contemporary work on the history of the large-scale business enterprise must come to grips with Chandler’s analytical frameworks. Similarly, Galambos tells us that “[t]he dominant paradigm in business history has for many years been the synthesis developed by Alfred D. Chandler” (1997: 287). In our judgment, Chandler’s distinctive competence is best expressed in his own words. He was interested in understanding“how the historian can take what he needs from the concepts of the other disciplines without in any sense of being captured by them” (McCraw, 1988: 1). Of course, Chandler did much more than simply apply theory to the historical record. Williamson (1985: 11) made that clear in his assessment of Chandler’s first masterwork, Strategy and Structure:In many respects [Chandler’s] historical account of the origins, diffusion, nature, and importance of the multidivisional form of organization ran ahead of contemporary economic and organization theory. Chandler clearly established that organization form hadimportant business performance consequences, which neither economics nor organizationtheory had done (nor, for the most part, even attempted) before. The mistaken notion thateconomic efficiency was substantially independent of internal organization was no longertenable after the book appeared.Williamson (1985) understood Chandler’s (1977) deeper genius. Chandler (1977) recast business history in the crucible of social science theory. The resulting product was more than just narrative about the organizational evolution of American business. It was critical analysis ofsocial science theories and theorists, as this excerpt from Chandler’s second masterwork, The Visible Hand, conveys (1977: 490):Economists have often failed to relate administrative coordination to the theory of the firm.For example, far more economies result from the careful coordination of flow through the processes of production and distribution than from increasing the size of producing or distributing units in terms of capital facilities or number of workers. Any theory of the firm that defines the enterprise merely as a factory or even a number of factories, and therefore fails to take into account the role of administrative coordination, is far removed from reality.Chandler drew on a range of theoretical perspectives to understand the past in Strategy and Structure, in The Visible Hand, and in his third masterwork, Scale and Scope. The current paper complements Chandler’s theoretical pluralism with a review of organizational evolution in American business viewed under a single theoretical lens, namely transaction costs economics (Coase, 1937; Williamson, 1996). This paper utilizes transaction costs theory to review, explain, and predict a particular phenomenon --- vertical integration in the United States in the time period from 1840 -- that marks the beginning of a great wave of organizational innovation -- to 1920, which concluded the focal time period of Chandler’s (1977) The Visible Hand.We are not the first to use transaction costs analysis on Chandler’s work. Williamson (1975), Hill (1988), Mahoney (1992), Poppo (2003) and Mayer and Whittington (2004) have also drawn on transaction costs perspectives to re-consider Chandler’s work, but that work has been Strategy and Structure (1962), Chandler’s history of organizational evolution in the early to mid-twentieth century. There has been comparatively little transaction costs analysis of The Visible Hand, although the more general influence of the book has been substantial (John, 1997). Such an undertaking would likely receive at least partial support from Chandler, who also appreciated transaction costs theory and theorists: “Because of his concern with firm-specific assets and skills, I, as an economic historian, have learned much from Williamson” (Chandler,1992: 85). Our analysis of Visible Hand, therefore, extends recent research efforts to illuminate and inform business history with transaction costs perspectives that Chandler himself read broadly and incorporated.To be clear at the outset, focusing on a single theoretical lens may have the advantage of highlighting and revealing certain insights from business history that Chandler has provided in his masterworks and other writings. However, focusing on a single theoretical lens may also underplay aspects of the history of vertical integration, including strategic issues concerning path dependencies, goals of antitrust, entrepreneurial talents, domination, market power, (internal) capital market, and life-cycle explanations (Argyres and Liebeskind, 1999; Bittlingmayer, 1996; Livesay, 1989; Marglin, 1974; O’Sullivan, 2006; Stack, 2003; Stigler, 1951). That said, the current paper pushes hard on transaction costs theory for interpreting the historical record of Chandler’s (1977) Visible Hand.If we need theory to make sense of history, we also need history to make sense of theory (Gourvish, 1995; Lazonick, 1992). In important ways, transaction costs theory and Chandler’s (1977) account of business history are complementary. While transaction costs analysis provides a coherent conceptual model for interpreting the historical evidence, Chandler (1977) provides transaction cost economics with much relevant and needed institutional data (Robertson, 2003).Mirroring the structure of The Visible Hand (1977), the current paper is organized as follows. Section 1 briefly discusses the time period preceding the 1840s in the United States in which the putting-out and inside contracting systems were in place. Advantages and (transaction costs) disadvantages are emphasized.Section 2 considers the transformation of many businesses toward the modern vertically integrated firm. Particular attention is given both to technologically complex goods and toperishable products. This section first considers machinery firms (e.g., Singer Sewing Machine and McCormick Harvester), which produced machines through the fabrication and assembly of interchangeable parts. Such high volume products required careful scheduling that was enabled by managerial hierarchy. These firms marketed their products via vertically-integrated value chains in which the machines were demonstrated and installed. These machines also required continuing after-sales service and repairs. Since wholesalers rarely had the capabilities to provide these services effectively, the vertically integrated enterprise enabled both the generation and the sustainability of competitive advantage.This section then considers the production and distribution of perishable foods, such as meats and bananas. Here we learn other historical lessons concerning vertical integration. The effective scheduling of flows by managerial hierarchy was critical to prevent spoilage. Further, in order to prevent spoilage, effective distribution required massive (highly-specific asset) invest- ments in refrigerated railroad cars, ships, and warehouses. Such data, as we shall see, provide grist for the transaction costs economics mill.Finally, section 3 provides our analysis of the selective nature of vertical integration. Specifically, a transaction costs lens is brought to bear on the historical record for why vertical integration was observed in some industries but not in others in the United States, and provides discussion and conclusions concerning the historical and economic significance of vertical integration in the United States from 1840 to 1920, in which rapid change was experienced in the processes of production and distribution. Suggestions for future research and concluding remarks are provided.Section 1: The Putting-Out and Inside Contracting SystemsOrganizational innovation is an important factor in economic development. It may be argued --- and it was certainly the case before the scholarly contributions of Alfred D. Chandler --- that the social sciences under-appreciated organizational innovation. As the business historian Arthur Cole observed: “If changes in business procedure and practice were patentable, the contribution of business change to the economic growth of the nation would be (more) widely recognized … (1968: 61-62). Refinements in cost accounting, collective bargaining procedures, and organizational form changes qualify as examples. This section focuses on a particular change: the substitution of more market-like mechanisms such as the putting-out and inside contracting systems by the Visible Hand of managerial hierarchy via vertical integration of manufacturers and distributors (Yao, 1988). We begin with an analysis of the putting-out system.The Putting-Out System: The putting-out system existed in the United States from 1790 to 1840, and was a response to expanding market demand. In this business system, merchants purchased materials, delivered these materials to the workers in their homes and arranged for the sale of the completed articles. In contrast to handicraft manufacturing, the putting-out system was characterized by a separation of tasks --- a classic example of Adam Smith’s (1776) theory that “the division of labor was limited by the “extent of the market” (i.e., by demand). In the 1790s, metal goods, furniture, clothing, hats, and shoes were produced through the putting-out system (Gras and Larson, 1939; Hudson, 1981; Ware, 1931; Zakim, 1999).The history of the shoe industry illustrates how the putting-out system developed in the United States from the 1790s to the 1840s. Blanche E. Hazard describes the role of the shoe- maker, “who was simply to manufacture the boots and shoes, which a capitalist-entrepreneur marketed at his own risk and profit, supplying in whole or in part the tools and materials” (1913:244). The central shop system rapidly developed after 1820 (Thomson, 1989). In this particular arrangement, workers did the fitting and when this process was completed, they were returned to the central shop and given to the “makers” who would sew the boots and shoes. The makers had to wait for their work to be inspected at the central shop.The putting-out system had an attractive characteristic of preserving substantial worker autonomy. Moreover, the worker used the equipment properly since owners and operators were one and the same. However, because of the separate location of workers, inventory accumulation and transportation expenses were high. Further, this system encountered several transactional difficulties including: irregular production, loss of materials in transit and through embezzlement, slowness of manufacture, lack of uniformity, and uncertainty about the quality of the product (Babbage, 1835; Braverman, 1974; Kirkland, 1961).In terms of quality-shading problems, Sidney Pollard (1965) maintained that the putting- out system inhibited the development of high quality, and consistent with this assessment, Faler (1981) notes that the system, which was centered in Lyon, Massachusetts, was mostly used to produce cheaper shoes for growing southern and western markets rather than those specializing in high quality shoes for the more traditional eastern markets. Similarly, Goldin (1986) maintains that, in general, the risk of quality shading for low quality goods is less than for high quality goods and finds that high quality coats were made by wages on time rates, while lower quality coats were made by piece rates. One can presumably monitor output quality less expensively for lower quality goods (Cheung, 1983). In sum, due to transactional problems, by 1860 the putting- out system was almost completely eliminated in the United States, with clothing in the larger cities providing the one remaining vestige of the older putting-out system (Chandler, 1977: 246).The Inside Contracting System: Another method of coordinating activities utilized in the nineteenth century was the system of inside contracting, which was widely used by New England and Middle Atlantic manufacturers, especially among the metal fabricators and machine-tool builders. Inside contracting was common in such trades as typography, watch-making, mule- spinning, paper- making, glass-blowing, boiler-making, coal-mining, iron-molding, pipe-fitting, stoves, ship-building, locomotives, machine tools, firearms, military equipment, iron-rolling and steel-making (Clark, 1984; Clawson, 1980; Englander, 1987; Gillette, 1988; Nelson, 1975; Stone, 1981).Harold Williamson (1952: 87) describes the development of the inside contracting system at the Winchester Repeating Arms Company:The operation involved in manufacturing gun components and ammunition were delegated to super-foremen who hired and fired their own workers, set their wages, managed the job, and turned over the finished parts to the company for assembly. The company supplied raw materials, the use of floor space and machinery, light, heat, and power, special tools, and patterns for the job. The management credited the account of the contractor so much for every hundred pieces of finished work that passed inspection, and debited his amount for the wages paid to his men and the cost of oil, files, waste, and so on, used in production.Anything left over was paid to the contractor as a profit. In addition, the company paid him day wages as a foreman’s rate as a guarantee of minimum income.This arrangement in the Winchester Repeating Arms Company was the general business model utilized by those in the inside contract system. The one exception being, that while sub- contractors sometimes earned wages, they more often survived solely on the profits of the sub- contract (Edwards, 1979: 32).The inside contracting systems illustrates that managerial hierarchy cannot be explained simply by a central power source. In the case of inside contracting, space within factories was rented to individual entrepreneurs before (as well as after) the development of a central power source. The impetus of the factory system has transaction costs origins (Williamson, 1980). Anadvantage of the inside contracting system was that the firm was less burdened with increasingly difficult problems associated with production, process improvement, and labor supervision. The worker supervisor or master mechanic could obtain substantial independence and could avoid problems of marketing and finance. Further, according to Felicia Johnson Deyrup, the inside contracting system also played a fundamental role in the development of master tool building: “inside contractors were paid by the piece and hired their own labor, they benefited directly from increases in production or reductions in labor cost brought about by mechanization” (1948: 149).Although the insider contracting system had its economic advantages, there were several economic disadvantages including: the high economic costs of inspecting work to safeguard against quality shading, contracting workers using machinery carelessly, and the problems of discipline and high absenteeism (Lane, 1973; Navin, 1950). Contractual problems incurred in the inside contracting system provide historical evidence to support transaction costs theory that to merely transfer a transaction from the market and to organize it internally, without more, does not fully harmonize the exchange (Eccles, 1981; Mahoney, 2005; Williamson, 1980).That the inside contractor did not use the machinery properly is not surprising since the contractor is concerned with economic benefits generated until the contract termination date. Equipment repairs will therefore be deferred as contract termination dates approach. Pollard notes that: “In mines or quarries [using the inside contracting system], permanent damage was done to property by men interested in short-term returns only” (1965: 38).In addition, the inside contracting system gave contractors economic incentives and opportunities to strategically withhold or distort information. Harold Williamson notes that at the Winchester Repeating Arms Company: “any discovery of how to speed up operations or to substitute unskilled labor for skilled labor by the use of some new jig or fixture could becarefully guarded from management” (1952: 89). The concern of changing pay standards generates incentives to withhold details of the production process from the performance evaluator. Labor-saving innovation was delayed until after contract renewal. Capital-saving innovation is also likely to be low since the firm can appropriate a large share of the economic gain. Moreover, it was difficult to regulate the flow of components from each contractor and inventory control procedures were inadequate (Buttrick, 1952). There were no specific economic incentives for inside contractors to economize on inventory accumulation. Coordination was left to informal cooperation by the foremen of the departments. Also, the quality shading problems that accompanied the putting-out system continued to plague internal organization under the inside contracting system. North (1981: 168) notes that:[W]here quality was costly to measure, hierarchical organization would replace market transactions, the putting-out system was in effect a “primitive firm’ in which the merchant- manufacturer attempted to enforce constant quality standards at each step in the manufacturing process. By retaining ownership of the materials throughout the manufacturing process, the merchant-manufacturer was able to exercise this quality control at a cost lower than the cost of simply selling and buying at successive stages of the production process.The gradual move toward central workshops (inside contracting) was a further step in efforts at greater quality control and presaged the development of the factory system (hierarchy) that was in effect the direct supervision of quality throughout the production processes.In summary and utilizing transaction costs terms, managerial hierarchy mitigates the problems of excessive buffer inventories, improper equipment utilization, inadequate innovation disclosure incentives, and systemic quality shading that occurred in inside contracting systems. Thus, transaction costs analysis suggests displacement of the inside contracting system by a managerial hierarchy. The employment relationship mitigates bilateral monopoly contracting and permits fiat to be used in settling disputes. Adaptation may be achieved within a “zone of acceptance” (Simon, 1947). Furthermore, when the inside contractors become managers, they nolonger necessarily have claims to the semi-independent profit streams and consequently greater cooperation is realistically anticipated.The authority relationship of managerial hierarchy provides good equipment maintenance incentives compared to inside contracting since asymmetric information problems are lessened since the firm can continuously monitor production, and operations can be subject to internal audits. Removal of semi-independent profit streams and improved auditing serve to attenuate inside contracting hazards, which are derived from small-numbers bargaining, asymmetric information, and free-rider behavior (Williamson, 1975).High uncertainty and the subsequent need to coordinate successive value chain stages in the production process, and contractual problems inherent under conditions of asymmetric information and small-numbers bargaining between the firm and the contractor contributed to the demise of the inside contracting system, beginning in the late 1870s. For example, Singer Sewing Machine ended the inside contracting system in 1883 (Hounshell, 1984). Also, the Winchester Repeating Arms Company moved to reduce the number of contractors, and by 1914 the inside contracting system had been all but eliminated in gun production (Williamson, 1952: 136). Transaction costs pressures were leading to a competition among organizational forms in which the business landscape of the United States would fundamentally change in the 1840-1920 period (Chandler, 1977). The next section considers the transformation of these traditional enterprises into the modern vertically integrated firm.Section 2: The Transformation of Traditional Enterprise to the Modern Vertically Integrated FirmThe 1840s mark the beginning of a dynamic institutional competition that saw rapid changes in the processes of production and distribution in the United States, which evolved into the modern corporation. In 1840, markets were small, there was a lack of large scale power sources, and transportation was expensive (Taylor, 1968), which kept the volume of transactions low. Therefore, as late as 1840, there were no middle-level managers in the United States, and the most advanced accounting methods were still those of Italian double-entry bookkeeping --- techniques that were similar to those used five hundred years earlier in 1340 (Chandler, 1977).However, the rapid expansion of railroad networks in the 1840 to 1850 period induced a dramatic decrease in unit transportation costs. Standard microeconomic theory indicates that such a reduction in transportation costs will result in a rise in the least-cost scale of production, and, indeed, industrial enterprises after 1850 began to build new plants of unprecedented size. The dramatic change in production economies of scale (Atack, 1986; O’Brien, 1988), economies of scope (Chandler 1990, Teece, 1980) and changes in demand required an adaptive response in distribution (Barger, 1955; Higgs, 1971). Furthermore, the telegraph systems, which achieved commercial practicability by 1845 had blanketed the east and west and had reached Chicago, St. Louis, New Orleans, as well as other principal northern cities by 1850 (Du Boff, 1980). Thus, the essential infrastructure necessary for geographic expansion was in place by 1850, and the availability of coal, domestic iron and steam power enabled rapid growth of the manufacturing sector in the 1840-1870 period (Rosenberg, 1972).Vertical Integration by Firms Producing Technologically Complex Goods: In the 1870 to 1895 period, as the basic transportation and communication infrastructure was nearly completion,enterprises began to integrate mass production with mass distribution.1 Here we focus on technologically complex goods, and begin with the Singer Sewing Machine, which was in many respects a pioneer in developing the channels of distribution. They were a pioneer consumer appliance, the first product to be sold under a consumer installment plan, and the first product tobe sold through a fully developed franchised agency system (Jack, 1957), which enabled greater adaptation operating in markets with different social and economic characteristics (Carstensen, 1984). The reason for their organizational innovation is summarized by Mira Wilkins: “The independent agent did not pay sufficient attention to the product; he did not bother to instruct the buyer on how to use the machine; he did not know how to service it; he failed to demonstrate it effectively, and he did not seek new customers aggressively. Independent agents were not prepared to risk their capital to sell goods on installment, nor would they risk carrying large stocks” (1970: 43).The new product required distributional innovation in order to demonstrate, instruct and assist the sewing-machine user (Hennart, 1994). By the mid-1950s, the Singer Sewing Machine Company had its own salesrooms to market the product, deliver the machines, assist consumers with trained personnel, maintain attractive outlets, carry on adequate stock of machines, parts1Many large companies either merged or built their own systems to distribute their goods to the national market (Chandler, 1977; Schmitz, 1995). A partial list includes: (a) Makers of crude chemicals: Du Pont, General Chemical, International Agricultural Chemical and National Carbon; (b) Food processing enterprises: Del Monte, Nabisco, Quaker Oats and United Fruit; (c) Meat- packing firms: Armour, Cudahy, Morris, Swift, and Wilson (d) Metal firms: Alcoa; Bethlehem Steel, International Nickel and U.S. Steel; (e) Rubber companies: Firestone, Goodrich, Goodyear and United States Rubber; (f) Automobile firms: Ford, General Motors, Packard, and Studebaker;(g) Petroleum firms: Gulf, Shell, Standard Oil and Texaco; and (h) a variety of technologically complex products produced by such firms as: Allis-Chalmers, American Radiator, Babcock and Wilcox, Burroughs Adding Machine, Computer-Tabulating-Recording (the forerunner of IBM), Deere and Company, Eastman Kodak, Electric Storage Battery, General Electric, Ingersoll-Rand, International Harvester, J.I. Case, Johnson Company, Link-Belt Machinery, National Cash Register, Pittsburgh Plate Glass, Remington Typewriter, Singer Sewing Machine, Underwood Typewriter, Western Electric, Westinghouse Air Brake, Westinghouse Electric; Worthington Pump and Machinery, and the Otis Elevator Company, with their vertical integration of vertical transportation.。
酒店让客人睡得比家好
酒店让客人睡得比家好作者:暂无来源:《环境与生活》 2014年第1期酒店让客人睡得比家好在美国,精明的酒店管理者们为了吸引客人,打出了“舒睡”牌,宣称要让客人睡得比在家还好。
位于纽约曼哈顿岛的本杰明酒店在如何让客人睡得更好方面下了大功夫。
酒店雇用了一位睡眠专家培训员工,同时还为客人们提供失眠咨询服务。
为了不让失眠的客人感到紧张,酒店房间内不摆放任何钟表。
另外,他们还提供高低和软硬度不同的枕头,并给每个枕头都取了好听的名字,如“瑞典记忆”等。
亚利桑那州蒙特露西亚度假村在“睡眠战”中走得更远。
住客们可以到这里的水疗中心享受“复原睡眠仪式”,或是尝试“神圣睡眠冥想”。
水疗中心经理埃琳·斯图尔特说:“不少客人感到压力大、睡眠不足,我们可以帮助他们更深层地放松。
”那些睡眠困难的顾客还可以在度假村尝试静脉注射疗法——注射维生素B等有助睡眠的补剂和抗氧化物。
还有不少美国酒店十分注重细节。
他们曾进行过一次“睡床升级大比拼”,现在又把关注点集中在光照、空气质量等细节,希望营造出更适宜的睡眠氛围。
类似行动获得了医学界的支持。
在很多专业人士看来,一夜好眠对不少美国人来说已经成了奢侈的事情。
全美睡眠基金会的调查显示,三分之二的美国人表示在最近一周内睡眠不足,最影响睡眠的是各种电子产品发出的光。
巴黎地铁戒律:禁盯看美女巴黎交通管理局近日在网上公布了《现代旅客礼貌手册》,列举了12条“戒律”,每条还配有草图说明,以确保民众文明守礼。
这份礼貌手册共分4大类,包括乐于助人、礼貌用语、礼貌行为以及保持文雅。
礼貌手册认为,巴黎人必须向那些“身穿百慕大衬衫,一手拿地图一手挠头”的游客提供帮助。
手册中还幽默地指出,希望大家能花两分钟时间,帮助无法听明白站名的游客,这是相当值得做的好事。
同时,旅客应为后面的乘客留门、问候司机。
旅客也应该理解月台上的禁烟标志并不仅仅是艺术品。
此外,礼貌手册还建议游客不要通过耳机大声放音乐。
要抵制住“诱惑”,不要长时间盯着美女看。
Seven Types of Ambiguity
Seven Types of Ambiguity -by William EmpsonWilliam Empson has been Professor of English Literature at Sheffield University since 1953. Born in Yorkshire in 1906, he was educated at Winchester and Magdalene College, Cambridge. In 1931 he was appointed to the Chair of English Literature, Bunrika Dargaku, Tokyo, for three years, and in 1937 became professor of English Literature in the Peking National University, when it formed part of the South-Western Combined Universities, in Hunan and Yunnan. After a year spent as a B.B. C. monitor, he was appointed Chinese Editor to the B. B. C. in 1941 and remained in that post until 1946. He returned, as Professor in the western Languages Department, to Peking National University in 1947. His publications include Poems (1935), Some Versions of Pastoral (1935), The Gathering Storm (poems) (1940), the Structure of Complex Words (1951), and Milton’s God (1961).Chapter IThe sorts of meaning to be considered; the problems of Pure Sound and of Atmosphere. First-type ambiguities arise when a detail is effective in several ways at once, e.g., by comparisons with several points of likeness, antitheses with several points of differences, ‘comparative’ adjectives, subdued metaphors, and extra meanings suggested by rhythm. Annex on Dramatic Irony. Chapter IIIn second-type ambiguities two or more alternative meanings are fully resolved into one. Double grammar in Shakespeare Sonnets. Ambiguities in Chaucer, the eighteenth century, T. S. Eliot. Digressions on emendations of Shakespeare and on his form ‘The A and B of C.’Chapter IIIThe condition for third-type ambiguity is that two apparently unconnected meanings are given simultaneously. Puns from Milton, Marvell, Johnson, Pope, Hood. Generalised from when there is reference to more than one universe of discourse; allegory, mutual comparison, and pastoral. Examples from Shakespeare, Nash, Pope, Herbert, Gray. Discussion of the criterion for this type. Chapter IVIn the fourth type the alternative meanings combine to make clear a complicate state of mind in the author. Complete poems by Shakespeare and Donne considered. Examples of alternative possible emphases in Donne and Hopkins.Chapter VThe fifth type is a fortunate confusion, as when the author is discovering his idea in the act of writing (examples from Shelley) or not holding it all in mind at once (examples from Swinburne). Argument that later metaphysical poets were approaching nineteenth-century technique by this route;Chapter VIIn the six type what is said is contradictory or irrelevant and the reader is forced to invent interpretations. Examples from Shakespeare, Fitzgerald, Tennyson, Herbert, Pope, Yeats. Discussions of the criterion for this type and its bearing on nineteenth-century technique. Chapter VIIThe seventh type is that of full contradiction, marking a division in the author’s mind. Freud invoked. Examples of minor confusions in negation and opposition.Chapter VIIIGeneral discussion of conditions under which ambiguity is valuable and the means ofapprehending it. Argument that theoretical understanding of it is needed now more than previously. Not all ambiguities are relevant to criticism; Discussion of how verbal analysis should be carried out and what it can hope to achieve.。
酒店经营观念指导酒店经营活动的思想意识
(二)Motto
• At The Ritz-Carlton Hotel Company, L.L.C., “We are Ladies and Gentlemen serving Ladies and Gentlemen.” This motto exemplifies the anticipatory service provided by all staff members. (淑女与绅 士为淑女与绅士服务 )
company's confidential information and assets. • I am responsible for uncompromising levels of cleanliness and creating a safe
and accident-free environment.
• We pledge to provide the finest personal service and facilities for our guests who will always enjoy a warm, relaxed, yet refined ambience.
• The Ritz-Carlton experience enlivens the senses, instills well-being, and fulfills even the unexpressed wishes and needs of our guests.
Community footprints and creating The Ritz-Carlton Mystique. • I continuously seek opportunities to innovate and improve The Ritz-Carlton