
DEVELOPMENT AND PRELIMINARY VALIDATION OF A PARAMETRIC PEDIATRIC HEAD FINITE ELEMENT MODEL FOR POPULATION-BASED IMPACT SIMULATIONS

Proceedings of the ASME 2011 Summer Bioengineering Conference (SBC2011), June 22-25, 2011, Farmington, Pennsylvania, USA. Paper SBC2011-53166.

DEVELOPMENT AND PRELIMINARY VALIDATION OF A PARAMETRIC PEDIATRIC HEAD FINITE ELEMENT MODEL FOR POPULATION-BASED IMPACT SIMULATIONS

Jingwen Hu (1), Zhigang Li (1,2), and Jinhuan Zhang (2)
(1) University of Michigan Transportation Research Institute, Ann Arbor, MI, USA; (2) State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing, China

INTRODUCTION
Head injury is the leading cause of pediatric fatality and disability in the United States (1). Although the finite element (FE) method has been widely used for investigating head injury under impact, only a few 3D pediatric head FE models are available in the literature, including a 6-month-old child head model developed by Klinich et al. (2); newborn, 6-month-old, and 3-year-old child head models developed by Roth et al. (3, 4, 5); and a 1.5-month-old infant head model developed by Coats et al. (6). Each of these models represents a head at a single age with a single head geometry. Population-based simulations are now receiving increasing attention. In population-based injury simulations, impact responses can be predicted not only for an individual but for a group of people, which accounts for variation among people and thus provides more realistic predictions. However, a parametric pediatric head model capable of simulating head responses for different children at different ages is currently not available. Therefore, the objective of this study is to develop a fast and efficient method to build pediatric head FE models with different head geometries and skull thickness distributions. The method was demonstrated by morphing a 6-month-old infant head FE model into three newborn infant head FE models and by validating the three morphed head models against limited cadaveric test data.

MODEL CONSTRUCTION
Method Overview
As shown in Figure 1, the basic concept for developing a parametric pediatric head FE model is to morph a baseline model into different subject-specific geometries using mesh morphing techniques based on landmarks selected on both the baseline model and CT images. Different material properties also need to be assigned based on the age of the child. Because this method can be programmed and run automatically, it is an efficient tool for generating a group of FE head models for children with different ages and head geometries.

Figure 1. Procedure to develop a parametric pediatric head FE model

Mesh Morphing
In this study, a 6-month-old child head FE model originally developed by Klinich et al. (2) was remeshed and used as the baseline model. It included the skull, sutures, brain, dura mater, cerebrospinal fluid (CSF), and scalp. Only hexahedral solid elements and quadrilateral shell elements were used in the baseline model. A Radial Basis Function (RBF) was utilized to morph the baseline model into any subject-specific head geometry based on corresponding landmarks in both the baseline model and the CT images (7). Landmarks were evenly distributed throughout the skull surface, and additional landmarks were selected along the suture-skull connections. Skull/suture thickness at each landmark was measured from the CT segmentation data, so that the morphed model maintained a realistic skull/suture thickness. The morphing process included two steps: geometry morphing and thickness morphing. In geometry morphing, the baseline model was morphed to the target head geometry; in thickness morphing, the skull/suture thickness distribution was adjusted to match the target thickness measured from the CT images. The scalp thickness was assumed to be constant throughout the head and was calculated by averaging measured data from the CT images. The baseline model and the three subject-specific models (8-day-old, 10-day-old, and 27-day-old) morphed by the RBF are shown in Figure 2. The skull shape and suture size varied significantly, even though all three infants were less than 1 month old. Each model contains 38,912 solid elements and 7,680 shell elements.

Figure 2. Baseline and three subject-specific infant head FE models (baseline, 8-day-old, 10-day-old, and 27-day-old)

Material Properties
Because the infants selected in this study were all less than 1 month old, a single set of material properties for the pediatric head components (Table 1) was applied in all three models. The material parameters were based on those used in previous pediatric head FE models of similar age in the literature (5, 6).

Table 1. Material properties used in the infant head FE models
Component | Density (kg/m^3) | Young's modulus (MPa) | Poisson's ratio
Skull     | 2150 | 500   | 0.22
Suture    | 1130 | 8.1   | 0.49
Dura      | 1140 | 31.5  | 0.45
CSF       | 1040 | 0.012 | 0.499
Scalp     | 1200 | 16.7  | 0.42
Brain     | 1040 | viscoelastic (see below) | -
Brain (viscoelastic): K = 2.11 GPa, G(t) = G_inf + (G_0 - G_inf)e^(-beta*t), with G_0 = 5.99 kPa, G_inf = 2.32 kPa, beta = 0.09248/s

MODEL VALIDATION
The three infant head models developed in this study were validated against infant cadaver test data reported by Prange et al. (8). In those tests, three newborn infant heads (1-day-old, 3-day-old, and 11-day-old) were subjected to two loading conditions: compression and drop impact.

Compression Test Validation
In the compression tests, the whole infant head was placed between two parallel rigid plates. One plate was fixed and the other was moved along the anterior-posterior direction of the head at 50 mm/s. The force-deformation time history was recorded. The simulation setup is shown in Figure 3. The three infant head models were simulated under the same compression condition. Good correlations were achieved between the tests and simulations, as shown in Figure 4.

Figure 3. Compression test setup
Figure 4. Test-simulation comparison

Drop Test Validation
In the drop tests, infant heads were dropped onto an anvil plate in five different directions (vertex, occipital, forehead, left parietal, and right parietal) from 15 cm and 30 cm heights. The acceleration time history was reported. In this study, the 15 cm and 30 cm drop tests were simulated using all three FE models. Because the models were symmetric, test results in the left and right parietal directions were combined, and accelerations for four impact directions were compared between the tests and simulations. The means and standard deviations of the peak head accelerations at the different impact directions from both tests and simulations are shown in Figure 5. Good matches in both the 15 cm and 30 cm drop conditions were achieved for all four impact directions.

Figure 5. Model validations under different drop conditions: a) 15 cm drop height, b) 30 cm drop height

DISCUSSION
Mesh Morphing
Using mesh morphing to rapidly develop subject-specific FE models has been reported in the literature, but it has never been applied to develop pediatric head FE models, in which suture size and skull thickness are crucial. In this study, RBF, traditionally used in image processing, was utilized to morph a 6-month-old child head FE model into different head geometries with different skull/suture thickness distributions. Because the landmarks were selected not only throughout the skull but also along the skull-suture connections, the morphed models maintained accurate suture size along with accurate skull/suture thickness distributions. The mesh quality of all three infant head models was comparable to that of the baseline model. However, future investigations are needed to further validate the robustness of the RBF method for FE mesh morphing.

Parametric Pediatric Head FE Model
The purpose of a parametric pediatric head FE model is to account for the effects of variations in age, head size and shape, suture size, and skull/suture thickness on head impact responses among the population. The method developed in this study makes it possible to automatically generate a group of pediatric head models for performing population-based impact simulations, which is a significant improvement over previous individual-based pediatric head impact simulations. The proposed method can be widely used for future investigations of pediatric head impact response, injury mechanisms, and injury tolerance.

Limitations and Future Work
This study is limited in that only three head models of infants less than 1 month old were constructed, the same set of material properties was used for all three models, and model validation was performed against limited cadaveric test results. Nevertheless, this study demonstrated the feasibility of using a parametric pediatric head FE model to conduct population-based simulations. To improve the model, future work should include statistical head geometry data for children as a function of age, the relationship between age and material properties for different head components, and more pediatric head cadaver tests for model validation.

ACKNOWLEDGMENTS
The authors would like to thank Dr. Matthew Reed and Dr. Jonathan Rupp of the University of Michigan Transportation Research Institute for their help with method development and CT data acquisition.

REFERENCES
1. Kraus, J.F., Rock, A., and Hemyari, P., 1990, "Brain injuries among infants, children, adolescents, and young adults", American Journal of Diseases of Children, 144, pp. 684-691.
2. Klinich, K.D., Hulbert, G.M., Schneider, L.W., 2002, "Estimating infant head injury criteria and impact response using crash reconstruction and finite element modeling", Stapp Car Crash J, 46, pp. 165-194.
3. Roth, S., Raul, J.S., Ludes, B., Willinger, R., 2007, "Finite element analysis of impact and shaking inflicted to a child", Int. J. Leg. Med., 121, pp. 223-228.
4. Roth, S., Raul, J.S., Ludes, B., Willinger, R., 2009, "Child head injury criteria investigation through numerical simulation of real world trauma", Computer Methods and Programs in Biomedicine, 93, pp. 32-45.
5. Roth, S., Raul, J.S., Willinger, R., 2010, "Finite element modeling of pediatric head impact: global validation against experimental data", Computer Methods and Programs in Biomedicine, 99, pp. 25-33.
6. Coats, B., Margulies, S.S., 2007, "Parametric study of head impact in the infant", Stapp Car Crash Journal, 51, pp. 1-15.
7. Bennink, H.E., Korbeeck, J.M., Janssen, B.J., et al., 2007, "Warping a neuro-anatomy atlas on 3D MRI data with Radial Basis Function", IFMBE Proceedings, 15, pp. 28-32.
8. Prange, M.T., Luck, J.F., Dibb, A., et al., 2004, "Mechanical properties and anthropometry of the human infant head", Stapp Car Crash Journal, 48, pp. 279-299.
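The landmark-based RBF morphing described in the Mesh Morphing section can be sketched in a few lines of NumPy. The sketch below is illustrative only: it assumes a Gaussian kernel, a hand-picked kernel width, and synthetic landmark and node coordinates; the paper does not specify these implementation details, and thickness morphing would be a second, analogous interpolation pass over measured skull/suture thickness values rather than displacements.

```python
import numpy as np

def rbf_warp(source_landmarks, target_landmarks, nodes, eps=50.0):
    """Warp mesh node coordinates with a Gaussian RBF interpolant fitted to
    landmark displacements (geometry-morphing step, illustrative only).

    source_landmarks, target_landmarks : (m, 3) arrays of paired landmarks
    nodes : (n, 3) array of baseline-mesh node coordinates
    eps   : assumed kernel width, in the same units as the coordinates
    """
    def kernel(a, b):
        # Pairwise Gaussian radial basis values between two point sets.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * eps ** 2))

    # Solve for RBF weights that reproduce the landmark displacements.
    K = kernel(source_landmarks, source_landmarks)
    disp = target_landmarks - source_landmarks                 # (m, 3)
    weights = np.linalg.solve(K + 1e-9 * np.eye(len(K)), disp)

    # Apply the fitted displacement field to every mesh node.
    return nodes + kernel(nodes, source_landmarks) @ weights

# Hypothetical usage with synthetic data standing in for baseline-model
# landmarks, subject CT landmarks, and baseline mesh nodes.
rng = np.random.default_rng(0)
baseline_lm = rng.uniform(0, 100, size=(30, 3))
subject_lm = baseline_lm + rng.normal(0, 2, size=(30, 3))
mesh_nodes = rng.uniform(0, 100, size=(1000, 3))
morphed_nodes = rbf_warp(baseline_lm, subject_lm, mesh_nodes)
```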

Informal finance: A theory of moneylenders

Journal of Development Economics 107 (2014) 157-174. Contents lists available at ScienceDirect; journal homepage: www.elsevier.com/locate/devec.

Informal finance: A theory of moneylenders (*)
Andreas Madestam, Department of Economics, Stockholm University, Sweden. E-mail address: andreas.madestam@ne.su.se.

Article history: Received 3 May 2011; received in revised form 14 October 2013; accepted 7 November 2013.
JEL classifications: O12, O16, O17, D40.
Keywords: Credit markets, Financial development, Institutions, Market structure.

Abstract. I present a model that analyzes the coexistence of formal and informal finance in underdeveloped credit markets. Formal banks have access to unlimited funds but are unable to control the use of loans. Informal lenders can prevent non-diligent behavior but often lack the needed capital. The theory implies that formal and informal credit can be either complements or substitutes. The model also explains why weak legal institutions increase the prevalence of informal finance in some markets and reduce it in others, why financial market segmentation persists, and why informal interest rates can be highly variable within the same sub-economy.

(c) 2013 The Author. Published by Elsevier B.V. Open access under CC BY-NC-ND license. 0304-3878. doi: 10.1016/j.jdeveco.2013.11.001.

(*) I am grateful to Tore Ellingsen and Mike Burkart for their advice and encouragement. I also thank Abhijit Banerjee, Chloe Le Coq, Avinash Dixit, Giovanni Favara, Maitreesh Ghatak, Bard Harstad, Eliana La Ferrara, Patrick Legros, Rocco Macchiavello, Matthias Messner, Elena Paltseva, Fausto Panunzi, Tomas Sjostrom, David Stromberg, Jakob Svensson, Jean Tirole, Robert Townsend, Adel Varghese, Fabrizio Zilibotti, and two anonymous referees for valuable comments, as well as the seminar participants at Bocconi University (Milan), CEPR workshop on Globalization and Contracts: Trade, Finance and Development (Paris), EEA Congress 2004 (Madrid), ENTER Jamboree 2004 (Barcelona), EUDN conference 2007 (Paris), Financial Intermediation Research Society's Conference on Banking, Corporate Finance and Intermediation 2006 (Shanghai), IIES (Stockholm), IUI (Stockholm), Lawless Finance: Workshop in Economics and Law (Milan), LSE (London), NEUDC Conference 2004 (Montreal), Nordic Conference in Development Economics (Gothenburg), SITE (Stockholm), Stockholm School of Economics, Swedish Central Bank (Stockholm), and University of Amsterdam.

1. Introduction

Formal and informal finance coexist in markets with weak legal institutions and low levels of income (Germidis et al., 1991; Nissanke and Aryeetey, 1998). Poor people either obtain informal credit or borrow from both financial sectors at the same time. Banerjee and Duflo (2007) document that 95% of all borrowers living below $2 a day in Hyderabad, India access informal sources even when banks are present.1 Meanwhile, Das-Gupta et al. (1989) provide evidence from Delhi, India where 70% of all borrowers get credit from both sectors at the same time.2 Such financing arrangements raise a number of issues. Why do some borrowers take informal loans despite the existence of formal banks, while others obtain funds from both financial sectors simultaneously? Also, is there a causal link between institutional development, level of income, and informal lending? If so, precisely what is the connection?

Although empirically important, the coexistence of formal and informal finance has not received as much attention as recent theoretical work on microfinance (Banerjee et al., 1994; Ghatak and Guinnane, 1999; Rai and Sjostrom, 2004). In this paper, I provide a theory of informal finance, whose main assumptions can be summarized as follows. First, in line with the literature on the effect of institutions on economic performance (Djankov et al., 2007; La Porta et al., 1997, 1998; Visaria, 2009), I view legal protection of banks as essential to ensure availability of credit. To this end, I assume that borrowers may divert their bank loan (ex ante moral hazard) and that weaker contract enforcement increases the value of such diversion, which limits the supply of funds. By contrast, informal lenders are able to monitor borrowers by offering credit to a group of known clients where social ties and social sanctions induce investment (Aleem, 1990; Ghate et al., 1992; Udry, 1990).3

Second, while banks have access to unlimited funds, informal lenders can be resource constrained. In a survey of financial markets in developing countries, Conning and Udry (2007) write that "financial intermediation may be held up not for lack of locally informed agents…but for lack of local intermediary capital" (Conning and Udry, 2007, p. 2892). Consequently, landlords, professional moneylenders, shopkeepers, and traders who offer informal credit frequently acquire bank funds to service borrowers' financing needs. Ghate et al. (1992), Rahman (1992), and Irfan et al. (1999) remark that formal credit totals three quarters of the informal sector's liabilities in many Asian countries.4

Third, less developed economies are often characterized as uncompetitive. In particular, formal sector banks typically have some market power (see Barth et al., 2004; Beck et al., 2004 for contemporary support and Rajan and Ramcharan, 2011; Wang, 2008 for historical evidence).5

Within this framework, I show that informal finance affects poor people's access to credit in two main ways. In the model, formal banks are restrained by borrowers' inability to commit to using funds for productive purposes. The agency problem is more acute for the poor, as the benefit of diversion increases in the size of the loan. While informal lenders' monitoring advantage allows them to lend to bank-rationed borrowers, they may not have the necessary resources, in which case they also turn to the formal sector for additional funds.

A first set of findings considers how informal credit may improve borrowers' relationship with the bank. Informal loans increase the return to productive activities as they cannot be diverted. This lowers the relative gain of misusing formal funds, allowing banks to extend more credit. Informal finance thus complements the banks by permitting larger formal loans to poor borrowers.

Second, informal lenders' monitoring ability also helps banks to reduce agency costs by letting them channel formal credit through the informal sector. When lending directly to poor people, banks share part of the surplus with the borrowers to keep them from diverting. Extending credit through informal lenders that are rich enough to have a stake in the outcome minimizes the surplus that banks need to share. In contrast to the first result, the credit market becomes segmented as informal finance substitutes for banks and limits borrowers' direct bank access.

1. See Siamwalla et al. (1990) for similar findings from Thailand.
2. See Conning (2001) and Gine (2011) for related support from Chile and Thailand.
3. For further evidence of the personal character of informal lending see Udry (1994), Steel et al. (1997), and La Ferrara (2003) for the case of Africa and Bell (1990) for the case of Asia. As in Besley and Coate (1995), my aim is not to explain informal lenders' monitoring ability, but to understand its implications.
I find that the extent to which informal finance complements or substitutes for bank credit depends on banks' bargaining power. If formal banks are competitive, borrowers obtain capital from both financial sectors, with poor informal lenders accessing banks for extra funds. By contrast, if formal lenders have some market power, sufficiently rich (bank-financed) informal lenders are borrowers' only source of credit. This is because borrowers' and informal lenders' joint return is maximized if both take competitive bank loans, while bank market power and subsequent credit market segmentation allows the formal monopoly to reduce agency costs.

The predictions are broadly consistent with existing data on formal-informal sector interactions. (See Section 5 for an extensive discussion.) The characterization of the aggregate demand for and supply of formal and informal credit also allows me to address some additional issues. For example, weaker legal institutions increase the prevalence of informal credit if borrowers obtain money from both financial sectors, while the opposite is true if informal lenders supply all capital. Moreover, the interest rates of informal lenders rise as credit markets become segmented.

Persistence of financial underdevelopment, in the form of market segmentation, can also be understood within the model. Wealthier informal lenders (and banks) prefer the segmented outcome that arises with bank market power, as it softens competition between the financial sectors. Finally, my analysis sheds some light on credit market policy by distinguishing between the efficiency effects of wealth transfers, credit subsidies, and legal reform.

The paper relates to several strands of the literature. First, it adds to work that views informal lenders either as bank competitors (Bell et al., 1997; Jain, 1999; Jain and Mansuri, 2003) or as a channel of bank funds (Bose, 1998; Floro and Ray, 1997; Hoff and Stiglitz, 1998). While these papers share the notion that informal lenders hold a monitoring advantage over banks, there are a number of important differences. First, in earlier work it is not clear whether informal lenders compete with banks or primarily engage in channeling funds. Second, competition theories cannot account for bank lending to the informal sector. Third, channeling theories fail to address the agency problem between the formal and the informal lender. The present paper explains why informal lenders take bank credit in each of these instances, making competition and channeling a choice variable in a framework where monitoring problems exist between banks, informal lenders, and borrowers. Allowing for both competition and channeling thus extends and reconciles existing approaches. By deriving endogenous constraints on informal lending, I am able to account for the empirical regularity that informal credit complements as well as substitutes for formal finance. Finally, an advantage over earlier work is the tractability of the basic agency model, which delivers the simple insight that less leveraged borrowers are better credit risks (as in the costly effort setup).6 The framework presented is well suited to take on additional characteristics relevant to understanding formal and informal sector interactions, such as differences in enforcement capacity, the importance of legal institutions, and market power; features which are missing in earlier contributions.

The second line of related literature studies the interaction between modern and traditional sectors to rationalize persistence of personal exchange (Banerjee and Newman, 1998; Besley et al., 2012; Kranton, 1996; Rajan, 2009).7 My results also match Biais and Mariotti's (2009) and von Lilienfeld-Toal et al.'s (2012) findings of heterogeneous effects of improved creditor rights across rich and poor agents. Finally, the paper links to research emphasizing market structure as an important cause of contractual frictions in less developed economies (Kranton and Swamy, 2008; Mookherjee and Ray, 2002; Petersen and Rajan, 1995).8

The model builds on Burkart and Ellingsen's (2004) analysis of trade credit in a competitive banking and input supplier market.9 The bank and the borrower in their model are analogous to the competitive formal lender and the borrower in my setting. However, their input supplier and my informal lender differ substantially.10 Also, in contrast to Burkart and Ellingsen, by considering credit-rationed informal lenders and bank market power, the model distinguishes whether informal lenders compete with banks or engage in channeling formal bank funds.

Section 2 introduces the model and Section 3 presents equilibrium outcomes. Section 4 deals with cross-sectional predictions, persistence of market segmentation, and informal interest rates. Section 5 examines empirical evidence. Section 6 explores economic policy. I conclude by discussing robustness issues and pointing to possible extensions. Formal proofs are in the Appendix.

4. Conning and Udry (2007) further write that "the trader-intermediary usually employs a combination of her own equity together with funds leveraged from less informed outside intermediaries such as banks…[leading] to the development of a system of bills of exchange…[used by the] outside creditor…as security" (Conning and Udry, 2007, pp. 2863-2864). See Harriss (1983), Bouman and Houtman (1988), Graham et al. (1988), Floro and Yotopoulos (1991), and Mansuri (2006) for additional evidence of informal lenders accessing the formal sector in India, Niger, Pakistan, the Philippines, and Sri Lanka. See also Haney (1914), Gates (1977), Biggs (1991), Toby (1991), Teranishi (2005, 2007), and Wang (2008) for historical support from Japan, Taiwan, and the United States.
5. Beck et al. report a positive and significant relation between measures of bank competition and GDP per capita.
6. See Banerjee (2003) for a discussion of the similarity across different moral hazard models of credit rationing.
7. While Kranton and Banerjee and Newman focus on how market imperfections give rise to institutions that (may) impede the development of markets, Besley et al. and Rajan (like this paper) show how rent protection can hamper reform.
8. As in Petersen and Rajan and Mookherjee and Ray, I study the effects of market power on credit availability, while Kranton and Swamy investigate the implications for hold-up between exporters and textile producers.
9. Burkart and Ellingsen assume that it is less profitable for the borrower to divert inputs than to divert cash. Thus, input suppliers may lend when banks are limited due to potential agency problems.
10. While the input supplier and the (competitive) bank offer a simple debt contract, the informal lender offers a more sophisticated project-specific contract, where the investment and the subsequent repayment are determined using Nash Bargaining. More importantly, the informal lender is assumed to be able to ensure that investment is guaranteed, something that the trade creditor is unable to do.
2. Model

Consider a credit market consisting of risk-neutral entrepreneurs (for example, farmers, households, or small firms), banks (who provide formal finance), and moneylenders (who provide informal finance). The entrepreneur is endowed with observable wealth ω_E ≥ 0. She has access to a deterministic production function, Q(I), where I is the investment volume. The production function is concave, twice continuously differentiable, and satisfies Q(0) = 0 and Q'(0) = ∞. In a perfect credit market with interest rate r, the entrepreneur would like to attain first-best investment given by Q'(I*) = 1 + r. However, she lacks sufficient wealth, ω_E < I*(r), and thus turns to the bank and/or the moneylender for the remaining funds.11

While banks have an excess supply of funds, credit is limited as the entrepreneur is unable to commit to investing all available resources in her project. Specifically, I assume that she may use (part of) the assets to generate nonverifiable private benefits. Non-diligent behavior resulting in diversion of funds denotes any activity that is less productive than investment, for example, using available resources for consumption or financial saving. The diversion activity yields benefit φ < 1 for every unit diverted. Creditor vulnerability is captured by φ (where a higher φ implies weaker legal protection of banks). While investment is unverifiable, the outcome of the entrepreneur's project in terms of output and/or sales revenue may be verified. The entrepreneur thus faces the following trade-off: either she invests and realizes the net benefit of production after repaying the bank (and possibly the moneylender), or she profits directly from diverting the bank funds (the entrepreneur still pays the moneylender if she has taken an informal loan). In the case of partial diversion, any remaining returns are repaid to the bank in full. The bank does not derive any benefit from resources that are diverted.

Informal lenders are endowed with observable wealth ω_M ≥ 0 and have a monitoring advantage over banks such that credit granted is fully invested. To keep the model tractable, I restrict informal lenders' occupational choice to lending (additional sources of income do not alter the main insights). For simplicity, monitoring cost is assumed to be zero.12 The moneylender's superior knowledge of local borrowers grants him exclusivity (but not necessarily market power, see below).13 In the absence of contracting problems between the moneylender and the entrepreneur, the moneylender maximizes the joint surplus derived from the investment project and divides the proceeds using Nash Bargaining. A contract is given by a pair (B, R) in R+^2, where B is the amount borrowed by the entrepreneur and R the repayment obligation. Finally, if the moneylender requires additional funding he turns to a bank.

Following the same logic as above, I assume that the moneylender cannot commit to lend his bank loan and that diversion yields private benefits equivalent to φ < 1 for every unit diverted. While lending is unverifiable, the outcome of the moneylender's operation may be verified. The moneylender thus faces the following trade-off: either he lends the bank credit to the entrepreneur, realizing the net lending profit after compensating the bank, or he benefits directly from diverting the bank loan.

Banks have access to unlimited funds at a constant unit cost of zero. They offer a contract (L_i, D_i), where L_i is the loan and D_i the interest payment, with subscripts i in {E, M} indicating entrepreneur (E) and moneylender (M). When φ is equal to zero, legal protection of banks is perfect and even a penniless entrepreneur and/or moneylender could raise an amount supporting first-best investment. To make the problem interesting, I assume that

φ > φ_bar ≡ [Q(I*(0)) − I*(0)] / I*(0).   (1)

In words, the marginal benefit of diversion yields higher utility than the average rate of return to first-best investment at a zero rate of interest [henceforth I*(0) = I*].

In the competitive benchmark case, I follow Burkart and Ellingsen (2004) by assuming that formal banks offer overdraft facilities of the form {(L_E, (1 + r)L_E)} with L_E ≤ L_bar_E, where L_E is the loan, (1 + r)L_E the repayment, and L_bar_E the credit limit. The contract implies that a borrower may withdraw any amount of funds until the credit limit binds.14

To distinguish formal from informal finance, I assume that banks are unable to condition their contracts on the moneylender's contract offer, an assumption empirically supported by Gine (2011).15 If not, the entrepreneur could obtain an informal loan and then approach the bank. Bank credit would then depend on the informal loan and the subsequent certain investment.16

The timing is as follows:
1. Banks offer a contract, (L_i, D_i), to the entrepreneur and the moneylender, respectively.
2. The moneylender offers a contract, (B, R), to the entrepreneur, where R is settled through Nash Bargaining.
3. The moneylender makes his lending/diversion decision.
4. The entrepreneur makes her investment/diversion decision.
5. Repayments are made.

Note finally that the informal sector contains a variety of lenders including input suppliers, landlords, merchants, professional moneylenders, and traders. Through their occupation, they attract different borrowers (for example, trader/farmer and landlord/tenant), which may give some lenders a particular enforcement advantage. The important and uniting feature, however, is the ability to induce diligent behavior irrespective of the quality of the legal system. In the analysis that follows, the moneylender represents all informal lenders with this trait.

3. Equilibrium

I begin by analyzing each financial sector in isolation. This helps understand how the agency problem in the formal bank market generates credit rationing. It also highlights how the provision of incentives and the quality of the legal system affect lending across the two sectors.

3.1. Benchmark

There is free entry in the bank market. Following a Bertrand argument, competition drives equilibrium bank profit to zero.17

11. I assume that the entrepreneur accepts the first available contract if indifferent between the contracts offered.
12. This is not to diminish the importance of informal lenders' monitoring cost (see Banerjee, 2003). However, the cost is set to zero as it makes no difference in the analysis that follows (unless sufficiently prohibitive to prevent banks or entrepreneurs from dealing with the informal sector altogether).
13. The assumption that borrowers obtain funds from at most one informal source has empirical support; see, for example, Aleem (1990), Siamwalla et al. (1990), and Berensmann et al. (2002).
14. As shown by Burkart and Ellingsen (2004), this restriction is without loss of generality as no other contract can upset an equilibrium in overdraft facilities.
15. This is in contrast to Burkart and Ellingsen (2004), who assume that banks and trade credit suppliers offer simultaneous contracts. Allowing the informal sector to contract on the bank provides informal lenders a more active intermediary role, similar to the monitor in hierarchical agency (principal-monitor-agent) models. See Mookherjee (2012) for an overview of this literature.
16. See also Bell et al. (1997) for evidence in support of the assumed sequence of events.
17. Some developing credit markets have a sizable share of state-owned banks. I make no assumption on bank ownership but do assume that profit maximization governs bank behavior. While state ownership can be less efficient (La Porta et al., 2002) this does not bar profit maximization as a useful approximation. In Sapienza's (2004) study of Italian banks, state-owned enterprises charge less but increase interest rates when markets become more concentrated, consistent with profit-maximizing behavior.
Nonetheless, credit is limited since investment of bank funds cannot be ensured. To see this, suppose first that the entrepreneur abstains from diversion. She then draws on the overdraft facility up to the point L_E^u, where

L_E^u = min{I*(r) − ω_E, L_bar_E}.   (2)

Either the entrepreneur borrows and invests efficiently, I*(r), or she exhausts the credit limit extended by the bank, L_bar_E. In the case when the entrepreneur intends to divert resources, the return from diversion is φ(ω_E + L_bar_E − I). If she plans to repay the loan in full while diverting, the investment yields at least 1 + r on every dollar of the available assets, which exceeds the diversion benefit of φ < 1. By contrast, if the entrepreneur invests an amount not sufficient to repay in full, there is no reason to invest either borrowed, L_E, or internal funds, ω_E, since the bank would claim all of the returns upon default.18 Hence (solving for the subgame-perfect equilibrium outcome), the entrepreneur chooses the amount of funds to invest, I, and the amount of credit, L_E, by maximizing

U_E = max{0, Q(I) − (1 + r)L_E}

subject to

Q(I) − (1 + r)L_E ≥ φ(ω_E + L_bar_E),
ω_E + L_E ≥ I,
L_bar_E ≥ L_E.

The objective function shows the profit from investing, accounting for limited liability. The first constraint is the incentive-compatibility condition versus the bank, which prevents the entrepreneur from diverting the internal funds as well as the maximum credit raised. The second condition requires that investment cannot exceed available funds, while the third inequality states that bank borrowing is constrained by the credit limit. In sum, the entrepreneur acts diligently if the contract satisfies

Q(ω_E + L_E^u) − (1 + r)L_E^u ≥ φ(ω_E + L_bar_E),   (3)

where L_E^u is given by Eq. (2). As there is no default in equilibrium, the only equilibrium interest rate consistent with zero profit is r = 0. At low wealth, the temptation to divert resources is too large to allow a loan in support of first best. In this case, the credit limit is given by the binding incentive constraint

Q(ω_E + L_bar_E) − L_bar_E = φ(ω_E + L_bar_E).   (4)

As an increase in wealth improves the return to investment for a given loan size, the credit line and the investment rise with wealth. Similarly, better creditor protection (a lower φ) increases the opportunity cost of diversion, making larger repayment obligations and thus higher credit limits incentive compatible. When the entrepreneur is sufficiently wealthy, the constraint no longer binds and the first-best outcome is obtained.

Proposition 1. For all φ > φ_bar there is a threshold ω_E^c > 0 such that entrepreneurs with wealth below ω_E^c invest I < I*, and credit (L_bar_E) and investment (I) increase in ω_E and decrease in creditor vulnerability (φ). If ω_E ≥ ω_E^c then I* is invested.
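As a worked illustration of the binding constraint in Eq. (4) and the comparative statics of Proposition 1, the sketch below solves for the credit limit numerically. The production function Q(I) = I**0.8, the wealth levels, and the φ values are assumptions chosen for illustration (with this Q, condition (1) holds for any φ > 0.25); they are not taken from the paper, and the equilibrium interest rate r = 0 is used throughout.

```python
# Illustration only: the paper keeps Q(.) general; here Q(I) = I**0.8 is assumed.
Q = lambda I: I ** 0.8
I_star = 0.8 ** 5                       # first best at r = 0: Q'(I*) = 0.8*I**-0.2 = 1

def credit_limit(w_E, phi, hi=10.0):
    """Largest credit line satisfying the binding constraint (Eq. 4),
    Q(w_E + L) - L = phi * (w_E + L), found by bisection."""
    f = lambda L: Q(w_E + L) - L - phi * (w_E + L)
    lo = 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for w_E in (0.0, 0.05, 0.15):
    for phi in (0.4, 0.7):
        L_bar = credit_limit(w_E, phi)
        need = max(I_star - w_E, 0.0)   # loan needed to reach first best
        print(f"w_E={w_E:.2f} phi={phi:.1f}  L_bar={L_bar:.3f} "
              f"needed={need:.3f}  rationed={L_bar < need}")
```

In the printed output the binding limit rises with wealth and falls with φ; when "rationed" is False the constraint is slack and, as in the second part of Proposition 1, the bank simply lends I* − ω_E.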
If the entrepreneur borrows from the informal sector, the moneylender maximizes the surplus of the investment project, Q(ω_E + B) − B. Let B* denote the loan size that solves the first-order condition Q'(ω_E + B) − 1 ≥ 0, where B* = min{I* − ω_E, ω_M}. Absent contracting frictions, the efficient outcome is obtained if the moneylender is sufficiently wealthy, while the outcome is constrained efficient otherwise.19 Given B*, the entrepreneur and the moneylender bargain over how to share the project gains using available resources ω_E + B. If they disagree, investment fails and each party is left with her/his wealth or potential loan. The assets represent the disagreement point of each respective agent. By remaining liquid throughout the bargaining they can start the project if they agree, or decide to stop negotiating and take their wealth to pursue other alternatives. In case of agreement, the moneylender offers a contract where the equilibrium repayment, using the Nash Bargaining solution, is

R(B) = argmax_t {Q(ω_E + B) − t − ω_E}^α {t − B}^(1−α) = (1 − α)[Q(ω_E + B) − ω_E] + αB,

where α represents the degree of competition in the informal sector (competition increases if α is high). Following Binmore et al. (1986) and Binmore et al. (1989), I assume that the entrepreneur's option of investing her own money only becomes a constraint when her share of the bargaining outcome is less than the value of pursuing the project on her own.20 For simplicity, α satisfies α > α_tilde, where α_tilde solves

α[Q(ω_E + B) − B] + (1 − α)ω_E = Q(ω_E),   (5)

with α in (α_tilde, 1).21 The left-hand side of the equality is the entrepreneur's utility of borrowing from the moneylender, while the right-hand side denotes the value of the stand-alone investment. As the empirical evidence on the extent of informal lenders' market power is inconclusive, no a priori assumption is made on α other than that.22

3.2. Formal and informal finance

Financial sector coexistence not only allows poor borrowers to raise funds from two sources, but it also permits informal lenders to access banks. This introduces additional trade-offs. On the one hand, (agency-free) informal credit improves the incentives of the entrepreneur, as informal finance increases the residual return to the entrepreneur's project, with the end effect equivalent to a boost in internal funds. On the other hand, banks now have to consider the possibility of diversion on the part of the entrepreneur and the moneylender. Solving backwards and starting with the entrepreneur's incentive constraint yields

Q(ω_E + L_E^u + B) − L_E^u − R(B) ≥ φ(ω_E + L_bar_E),   (6)

where L_E^u = min{I* − ω_E − B, L_bar_E}. The only modification from above is that the amount borrowed from the moneylender, B, is prudently invested.23 If the moneylender needs extra funds, he turns to a bank and chooses the amount to lend to the entrepreneur, B, and the amount of credit, L_M, to satisfy the following incentive constraint

R(ω_M + L_M^u) − L_M^u ≥ φ(ω_M + L_bar_M).   (7)

18. Because output is observable, the bank captures any return from production.
19. Excess moneylender funds are deposited in the bank, earning a zero rate of interest.
20. The rationale is that only threats that are credible will have an effect on the outcomes. The outside options are only used as constraints on the range of validity of the Nash Bargaining solution, with the disagreement point placed on the impasse point (ω_E, B). That is, the entrepreneur can only threaten to proceed with her stand-alone investment, or deal herself out of the bargaining, if it gives her a bigger pay-off than dealing herself in. See Sutton (1986) for a further discussion of how to specify the outside option in non-cooperative bargaining models.
21. From concavity and Q'(I) ≥ 1 it follows that α_tilde is in (0, 1).
22. Informal finance has been documented as competitive (Adams et al., 1984), monopolistically competitive (Aleem, 1990), and as a monopoly (Bhaduri, 1977).
23. Since returns are claimed by the bank even if the bank's credit has been diverted, it is never optimal for the entrepreneur to borrow from the moneylender while diverting bank funds.
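A minimal numerical sketch of constraint (6) follows, under the same assumed production function Q(I) = I**0.8 and arbitrary parameter values (a near-competitive informal sector, α = 0.9). Because the informal loan B is invested for sure, a larger B raises the incentive-compatible bank credit limit in this parameterization, which illustrates the complementarity channel discussed in the introduction; none of the numbers are from the paper.

```python
# Illustration only: assumed functional form and parameters.
Q = lambda I: I ** 0.8
alpha, phi, w_E = 0.9, 0.7, 0.05        # near-competitive informal sector

def R(B):
    # Nash-bargaining repayment to the moneylender for an informal loan B.
    return (1 - alpha) * (Q(w_E + B) - w_E) + alpha * B

def bank_limit(B, hi=10.0):
    """Largest bank credit L consistent with the binding constraint (Eq. 6):
    Q(w_E + L + B) - L - R(B) = phi * (w_E + L)."""
    f = lambda L: Q(w_E + L + B) - L - R(B) - phi * (w_E + L)
    lo = 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# The entrepreneur remains rationed for these B values, so Eq. (6) binds;
# the limit rises (modestly) with the informal loan.
for B in (0.0, 0.02, 0.05, 0.08):
    print(f"B = {B:.2f}  ->  incentive-compatible bank limit = {bank_limit(B):.4f}")
```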


A continuum model for tumour suppression
Alice H. Berger, Alfred G. Knudson and Pier Paolo Pandolfi

This year, 2011, marks the fortieth anniversary of the statistical analysis of retinoblastoma that first provided evidence that tumour development can be initiated by as few as two mutations.

That work gave rise to the "two-hit" hypothesis and laid the foundation for explaining the role of recessive tumour suppressor genes (TSGs) in dominantly inherited cancer-susceptibility syndromes.

Forty years later, however, we know that even partial inactivation of tumour suppressor genes can contribute to tumorigenesis.

Here we analyse this evidence and propose a continuum model of TSG function to explain the full spectrum of TSG mutations found in cancer.

Although a hereditary predisposition to cancer was recognized before 1900, it became easier to rationalize only after the rediscovery of Mendel's laws of inheritance, which had been neglected during the nineteenth century.

By that time it was also known that the chromosomal patterns of tumour cells are abnormal.

The next contribution to the understanding of cancer genetics came from Boveri, who proposed that some chromosomes might stimulate cell division while others might inhibit it, although his ideas were long ignored.

We now know that both types of genes exist.

In this Review, we summarize the history of research on the latter type of gene, the tumour suppressor genes (TSGs), and the evidence supporting a role for both complete and partial TSG inactivation in the pathogenesis of cancer.

We integrate the continuum model of tumour suppression with the classical "two-hit" hypothesis to account for the subtle dosage effects of TSGs, and we also discuss exceptions to the "two-hit" hypothesis, such as "obligate haploinsufficiency", in which partial loss of a TSG is more tumorigenic than its complete loss.

The continuum model highlights the importance of subtle regulation of TSG expression or activity, such as regulation by microRNAs (miRNAs).

Finally, we discuss the implications of this model for the diagnosis and treatment of cancer. The "two-hit" hypothesis: the first evidence that a genetic abnormality can lead to cancer came from the 1960 discovery of the Philadelphia chromosome in chronic myeloid leukaemia cells.

Later, in 1973, this chromosome was shown to be the result of a translocation between chromosomes 9 and 22, and in 1977 a translocation between chromosomes 15 and 17 was identified in patients with acute promyelocytic leukaemia.

ssd1_ Introduction to Information Systems, Section 1.1
In 1995, the Internet began large-scale commercial application.

Characteristics of the Internet
A network of networks
Informative and resourceful
Fast and convenient
Endless online activities
Information diversity
…

1.1 Using the Web
Surfing the Web
Your Web Pages
Client, Server, and URL
Searching the Web
Commerce on the Web
Some Ethical Considerations

In 1983, ARPAnet split into ARPAnet and the military MILNET; together they are regarded as the early Internet backbone network.
In January of that year, the Transmission Control Protocol/Internet Protocol (TCP/IP) officially became the standard protocol for ARPAnet.

The main Internet service
WWW (World Wide Web)
The WWW is used for collecting and publishing information.

World Development Report 2011: Facts and Figures (CHINESE_WDR2011_FACTS AND FIGURES)

No low-income fragile or conflict-affected country has yet achieved a single Millennium Development Goal.

20 percentage points: Over the past three decades, poverty rates have risen by 20 percentage points in countries affected by repeated cycles of violence.

Each year of violence sets a country's poverty reduction back by nearly one percentage point.

1.5 billion: The 1.5 billion people living in countries affected by organized violence, such as political conflict or high levels of homicide, are twice as likely to be malnourished as residents of other developing countries.

They are 50 percent more likely to fall into poverty.

Their children are three times as likely to be out of school as children in other countries.

42 million: Conflict, violence, or human rights abuses have forcibly displaced 42 million people (nearly the population of Canada or Poland).

Of these, 15 million are refugees abroad and 27 million are internally displaced.

15 to 30 years, a realistic timeline: The 20 fastest-reforming countries of the twentieth century took 15 to 20 years (a generation) to improve their institutions, moving from fragile states such as Haiti to institutionally sound states such as Ghana.

In particular, it took on average 17 years to reduce military interference in politics and 27 years to bring corruption down to a minimum for the sake of development.

244: Between 2001 and 2009, the Afghan government passed 244 laws, decrees, regulations, amendments, and additions to existing laws and regulations.

In addition, the government signed 19 licenses, conventions, agreements, and protocols.

The Millennium Development Goals do not mention citizen security and justice, yet these are precisely what people in fragile and conflict-affected states care about most and aspire to.

Twice the volatility: Aid volatility is a major problem for institution building: over the past 20-plus years, aid to countries that experienced 20 years of violent conflict has been twice as volatile as aid to countries not affected by violence.

Revenue volatility greatly increases costs for governments, especially for those in fragile situations, where reform efforts are scaled back and institution building suffers.

Short project horizons can hinder the building of resilient institutions: according to a European Commission study, 63 percent of donor projects in Cambodia last no more than three years, and more than a third last no more than one year.

17 versus 455: Drafting laws that define accountable state authority has become far more complex over time.

The UN Convention on the Prevention and Punishment of the Crime of Genocide, adopted in 1948, contains 17 provisions, while the 2003 Convention against Corruption has 455.

Countries with a recent history of human rights abuses are more likely to experience violent conflict than countries with a strong record of respect for human rights.

IP Made Simple (sort of)

1.6 Definitions
– One set of definitions specific to all activities covered under this IP.

2. Design Approval Procedures
• General application processes for all design approvals (para. 2.0.9)
• Other requirements specific to a particular design approval (e.g. TC, STC data package)

• The New U.S./EC picture
• Structure of the Implementation Procedures
• Content
– General, but important…
– What's new…

Technical Implementation Procedures
The Technical Implementation Procedures for Airworthiness and Environmental Certification (IP) are the third level of the agreement between the European Community and the United States.

1.3 Confidence Building Process for Environmental Certification

A general equilibrium model for industries with price and service competition

Fernando Bernstein and Awi Federgruen
This paper develops a stochastic general equilibrium inventory model for an oligopoly, in which all inventory constraint parameters are endogenously determined. We propose several systems of demand processes whose distributions are functions of all retailers’ prices and all retailers’ service levels. We proceed with the investigation of the equilibrium behavior of infinite-horizon models for industries facing this type of generalized competition, under demand uncertainty. We systematically consider the following three competition scenarios. (1) Price competition only: Here, we assume that the firms’ service levels are exogenously chosen, but characterize how the price and inventory strategy equilibrium vary with the chosen service levels. (2) Simultaneous price and service-level competition: Here, each of the firms simultaneously chooses a service level and a combined price and inventory strategy. (3) Two-stage competition: The firms make their competitive choices sequentially. In a first stage, all firms simultaneously choose a service level; in a second stage, the firms simultaneously choose a combined pricing and inventory strategy with full knowledge of the service levels selected by all competitors. We show that in all of the above settings a Nash equilibrium of infinite-horizon stationary strategies exists and that it is of a simple structure, provided a Nash equilibrium exists in a so-called reduced game. We pay particular attention to the question of whether a firm can choose its service level on the basis of its own (input) characteristics (i.e., its cost parameters and demand function) only. We also investigate under which of the demand models a firm, under simultaneous competition, responds to a change in the exogenously specified characteristics of the various competitors by either: (i) adjusting its service level and price in the same direction, thereby compensating for price increases (decreases) by offering improved (inferior) service, or (ii) adjusting them in opposite directions, thereby simultaneously offering better or worse prices and service. Subject classifications : inventory/production policies: marketing/pricing; games/group decisions: noncooperative. Area of review : Manufacturing, Service, and Supply Chain Operations. History : Received July 2001; revisions received June 2002, May 2003; accepted November 2003.
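To make the price-competition scenario (1) concrete, here is a toy best-response iteration for two retailers facing a linear expected-demand function. The demand form, the cost parameters, and the omission of inventory and service-level considerations are all assumptions for illustration; they are not the demand systems or equilibrium constructions of the paper. Because the cross-price effect is smaller than the own-price effect, the iteration is a contraction and settles at the unique price Nash equilibrium.

```python
import numpy as np

# Assumed toy demand: d_i(p) = a - b*p_i + c*p_j, with unit costs k_i.
a, b, c = 100.0, 2.0, 0.8
k = np.array([10.0, 14.0])

def best_response(p_other, k_i):
    # Maximize (p - k_i) * (a - b*p + c*p_other); first-order condition in p.
    return (a + c * p_other + b * k_i) / (2.0 * b)

p = np.array([20.0, 20.0])              # starting prices
for _ in range(200):                    # iterate best responses to a fixed point
    p = np.array([best_response(p[1], k[0]), best_response(p[0], k[1])])

print("Equilibrium prices:", np.round(p, 2))
print("Expected demands  :", np.round(a - b * p + c * p[::-1], 2))
```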

1999. Multilevel Hypergraph Partitioning: Applications in VLSI Domain

Multilevel Hypergraph Partitioning: Applications in VLSI Domain
George Karypis, Rajat Aggarwal, Vipin Kumar, Senior Member, IEEE, and Shashi Shekhar, Senior Member, IEEE

Abstract—In this paper, we present a new hypergraph-partitioning algorithm that is based on the multilevel paradigm. In the multilevel paradigm, a sequence of successively coarser hypergraphs is constructed. A bisection of the smallest hypergraph is computed and it is used to obtain a bisection of the original hypergraph by successively projecting and refining the bisection to the next-level finer hypergraph. We have developed new hypergraph coarsening strategies within the multilevel framework. We evaluate their performance both in terms of the size of the hyperedge cut on the bisection, as well as on the run time for a number of very large scale integration circuits. Our experiments show that our multilevel hypergraph-partitioning algorithm produces high-quality partitionings in a relatively small amount of time. The quality of the partitionings produced by our scheme is on the average 6%-23% better than those produced by other state-of-the-art schemes. Furthermore, our partitioning algorithm is significantly faster, often requiring 4-10 times less time than that required by the other schemes. Our multilevel hypergraph-partitioning algorithm scales very well for large hypergraphs. Hypergraphs with over 100,000 vertices can be bisected in a few minutes on today's workstations. Also, on the large hypergraphs, our scheme outperforms other schemes (in hyperedge cut) quite consistently with larger margins (9%-30%).

Index Terms—Circuit partitioning, hypergraph partitioning, multilevel algorithms.

I. INTRODUCTION

Hypergraph partitioning is an important problem with extensive application to many areas, including very large scale integration (VLSI) design [1], efficient storage of large databases on disks [2], and data mining [3]. The problem is to partition the vertices of a hypergraph into k roughly equal parts such that the number of hyperedges connecting vertices in different parts is minimized. A hyperedge is defined as a set of vertices [4], and the size of a hyperedge is the cardinality of this subset.

Manuscript received April 29, 1997; revised March 23, 1998. This work was supported under IBM Partnership Award NSF CCR-9423082, by the Army Research Office under Contract DA/DAAH04-95-1-0538, and by the Army High Performance Computing Research Center, the Department of the Army, Army Research Laboratory Cooperative Agreement DAAH04-95-2-0003/Contract DAAH04-95-C-0008. G. Karypis, V. Kumar, and S. Shekhar are with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455-0159 USA. R. Aggarwal is with the Lattice Semiconductor Corporation, Milpitas, CA 95131 USA. Publisher Item Identifier S 1063-8210(99)00695-2.

During the course of VLSI circuit design and synthesis, it is important to be able to divide the system specification into clusters so that the inter-cluster connections are minimized. This step has many applications including design packaging, HDL-based synthesis, design optimization, rapid prototyping, simulation, and testing. In particular, many rapid prototyping systems use partitioning to map a complex circuit onto hundreds of interconnected field-programmable gate arrays (FPGAs). Such partitioning instances are challenging because the timing, area, and input/output (I/O) resource utilization must satisfy hard device-specific constraints. For example, if the number of signal nets leaving any one of the clusters is greater than the number of signal pins available in the FPGA, then this cluster cannot be implemented using a single FPGA. In this case, the circuit needs to be further partitioned, and thus implemented using multiple FPGAs. Hypergraphs can be used to naturally represent a VLSI circuit. The vertices of the hypergraph can be used to represent the cells of the circuit, and the hyperedges can be used to represent the nets connecting these cells. A high-quality hypergraph-partitioning algorithm greatly affects the feasibility, quality, and cost of the resulting system.

A. Related Work

The problem of computing an optimal bisection of a hypergraph is at least NP-hard [5]. However, because of the importance of the problem in many application areas, many heuristic algorithms have been developed. The survey by Alpert and Khang [1] provides a detailed description and comparison of such various schemes. In a widely used class of iterative refinement partitioning algorithms, an initial bisection is computed (often obtained randomly) and then the partition is refined by repeatedly moving vertices between the two parts to reduce the hyperedge cut. These algorithms often use the Schweikert-Kernighan heuristic [6] (an extension of the Kernighan-Lin (KL) heuristic [7] for hypergraphs), or the faster Fiduccia-Mattheyses (FM) [8] refinement heuristic, to iteratively improve the quality of the partition. In all of these methods (sometimes also called KLFM schemes), a vertex is moved (or a vertex pair is swapped) if it produces the greatest reduction in the edge cut, which is also called the gain for moving the vertex. The partition produced by these methods is often poor, especially for larger hypergraphs. Hence, these algorithms have been extended in a number of ways [9]-[12]. Krishnamurthy [9] tried to introduce intelligence in the tie-breaking process from among the many possible moves with the same high gain. He used a look-ahead (LA) algorithm, which looks ahead up to a certain number of levels of gain before making a move. PROP [11], introduced by Dutt and Deng, used a probabilistic gain computation model for deciding which vertices need to move across the partition line. These schemes tend to enhance the performance of the basic KLFM family of refinement algorithms, at the expense of increased run time. Dutt and Deng [12] proposed two new methods, namely, CLIP and CDIP, for computing the gains of hyperedges that contain more than one node on either side of the partition boundary. CDIP in conjunction with LA and CLIP in conjunction with PROP are two schemes that have shown the best results in their experiments.

Another class of hypergraph-partitioning algorithms [13]-[16] performs partitioning in two phases. In the first phase, the hypergraph is coarsened to form a small hypergraph, and then the FM algorithm is used to bisect the small hypergraph. In the second phase, these algorithms use the bisection of this contracted hypergraph to obtain a bisection of the original hypergraph. Since FM refinement is done only on the small coarse hypergraph, this step is usually fast, but the overall performance of such a scheme depends upon the quality of the coarsening method. In many schemes, the projected partition is further improved using the FM refinement scheme [15].

Recently, a new class of partitioning algorithms was developed [17]-[20] based upon the multilevel paradigm. In these algorithms, a sequence of successively smaller (coarser) graphs is constructed. A bisection of the smallest graph is computed. This bisection is then successively projected to the next-level finer graph and, at each level, an iterative refinement algorithm such as KLFM is used to further improve the bisection. The various phases of multilevel bisection are illustrated in Fig. 1. Iterative refinement schemes such as KLFM become quite powerful in this multilevel context for the following reason.
First, the movement of a single node across a partition boundary in a coarse graph can lead to the movement of a large number of related nodes in the original graph. Second, the refined partitioning projected to the next level serves as an excellent initial partitioning for the KL or FM refinement algorithms. This paradigm was independently studied by Bui and Jones [17] in the context of computing fill-reducing matrix reorderings, by Hendrickson and Leland [18] in the context of finite-element mesh partitioning, by Hauck and Borriello (called Optimized KLFM) [20], and by Cong and Smith [19] for hypergraph partitioning. Karypis and Kumar extensively studied this paradigm in [21] and [22] for the partitioning of graphs. They presented new graph coarsening schemes for which even a good bisection of the coarsest graph is a pretty good bisection of the original graph. This makes the overall multilevel paradigm even more robust. Furthermore, it allows the use of simplified variants of KLFM refinement schemes during the uncoarsening phase, which significantly speeds up the refinement process without compromising overall quality. METIS [21], a multilevel graph partitioning algorithm based upon this work, routinely finds substantially better bisections and is often two orders of magnitude faster than the hitherto state-of-the-art spectral-based bisection techniques [23], [24] for graphs.

Fig. 1. The various phases of multilevel graph bisection. During the coarsening phase, the size of the graph is successively decreased; during the initial partitioning phase, a bisection of the smaller graph is computed; and during the uncoarsening and refinement phase, the bisection is successively refined as it is projected to the larger graphs. During the uncoarsening and refinement phase, the dashed lines indicate projected partitionings and dark solid lines indicate partitionings that were produced after refinement. G0 is the given graph, which is the finest graph. Gi+1 is the next-level coarser graph of Gi, and vice versa, Gi is the next-level finer graph of Gi+1. G4 is the coarsest graph.

The improved coarsening schemes of METIS work only for graphs and are not directly applicable to hypergraphs. If the hypergraph is first converted into a graph (by replacing each hyperedge by a set of regular edges), then METIS [21] can be used to compute a partitioning of this graph. This technique was investigated by Alpert and Khang [25] in their algorithm called GMetis. They converted hypergraphs to graphs by simply replacing each hyperedge with a clique, and then they dropped many edges from each clique randomly. They used METIS to compute a partitioning of each such random graph and then selected the best of these partitionings. Their results show that reasonably good partitionings can be obtained in a reasonable amount of time for a variety of benchmark problems. In particular, the performance of their resulting scheme is comparable to other state-of-the-art schemes such as PARABOLI [26], PROP [11], and the multilevel hypergraph partitioner from Hauck and Borriello [20].

The conversion of a hypergraph into a graph by replacing each hyperedge with a clique does not result in an equivalent representation, since high-quality partitionings of the resulting graph do not necessarily lead to high-quality partitionings of the hypergraph. The standard hyperedge-to-edge conversion [27] assigns a uniform weight of 1/(|e| − 1) to each edge of the clique, where |e| is the size of the hyperedge, i.e., the number of vertices in the hyperedge. However, the fundamental problem associated with replacing a hyperedge by its clique is that there exists no scheme to assign weights to the edges of the clique that can correctly capture the cost of cutting this hyperedge [28]. This hinders the partitioning refinement algorithm, since vertices are moved between partitions depending on how much this reduces the number of edges they cut in the converted graph, whereas the real objective is to minimize the number of hyperedges cut in the original hypergraph.
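The mismatch between the hyperedge cut and the clique-model cut can be seen on a tiny example. The sketch below assumes the uniform 1/(|e| − 1) clique-edge weighting mentioned above, together with a toy hypergraph and an arbitrary bipartition chosen purely for illustration.

```python
from itertools import combinations

# Toy hypergraph on six vertices and an arbitrary bipartition.
hyperedges = [(0, 1, 2, 3), (2, 3, 4), (4, 5)]
part = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 0}

# True objective: number of hyperedges spanning both parts.
hyperedge_cut = sum(len({part[v] for v in e}) > 1 for e in hyperedges)

# Clique model: each hyperedge becomes a clique whose edges carry the
# uniform weight 1/(|e| - 1); sum the weights of the cut clique edges.
clique_cut = sum(
    1.0 / (len(e) - 1)
    for e in hyperedges
    for u, v in combinations(e, 2)
    if part[u] != part[v]
)

print("hyperedge cut    :", hyperedge_cut)         # 2
print("clique-model cut :", round(clique_cut, 3))  # 2.333, overstating the cost
```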
is that there exists no scheme to assign weight to the edges of the clique that can correctly capture the cost of cutting this hyperedge [28].This hinders the partitioning refinement algo-rithm since vertices are moved between partitions depending on how much this reduces the number of edges they cut in the converted graph,whereas the real objective is to minimize the number of hyperedges cut in the original hypergraph.Furthermore,the hyperedge-to-clique conversion destroys the natural sparsity of the hypergraph,significantly increasing theKARYPIS et al.:MULTILEVEL HYPERGRAPH PARTITIONING:APPLICATIONS IN VLSI DOMAIN 71run time of the partitioning algorithm.Alpert and Khang [25]solved this problem by dropping many edges of the clique randomly,but this makes the graph representation even less accurate.A better approach is to develop coarsening and refinement schemes that operate directly on the hypergraph.Note that the multilevel scheme by Hauck and Borriello [20]operates directly on hypergraphs and,thus,is able to perform accurate refinement during the uncoarsening phase.However,all coarsening schemes studied in [20]are edge-oriented;i.e.,they only merge pairs of nodes to construct coarser graphs.Hence,despite a powerful refinement scheme (FM with theuse oflook-ahead)during the uncoarsening phase,their performance is only as good as that of GMetis [25].B.Our ContributionsIn this paper,we present a multilevel hypergraph-partitioning algorithm hMETIS that operates directly on the hypergraphs.A key contribution of our work is the development of new hypergraph coarsening schemes that allow the multilevel paradigm to provide high-quality partitions quite consistently.The use of these powerful coarsening schemes also allows the refinement process to be simplified considerably (even beyond plain FM refinement),making the multilevel scheme quite fast.We investigate various algorithms for the coarsening and uncoarsening phases which operate on the hypergraphs without converting them into graphs.We have also developed new multiphase refinement schemes(-cycles)based on the multilevel paradigm.These schemes take an initial partition as input and try to improve them using the multilevel scheme.These multiphase schemes further reduce the run times,as well as improve the solution quality.We evaluate their performance both in terms of the size of the hyperedge cut on the bisection,as well as on run time on a number of VLSI circuits.Our experiments show that our multilevel hypergraph-partitioning algorithm produces high-quality partitioning in a relatively small amount of time.The quality of the partitionings produced by our scheme are on the average 6%–23%better than those produced by other state-of-the-art schemes [11],[12],[25],[26],[29].The difference in quality over other schemes becomes even greater for larger hypergraphs.Furthermore,our partitioning algorithm is significantly faster,often requiring 4–10times less time than that required by the other schemes.For many circuits in the well-known ACM/SIGDA benchmark set [30],our scheme is able to find better partitionings than those reported in the literature for any other hypergraph-partitioning algorithm.The remainder of this paper is organized as follows.Section II describes the different algorithms used in the three phases of our multilevel hypergraph-partitioning algorithm.Section III describes a new partitioning refinement algorithm based on the multilevel paradigm.Section IV compares the results produced by our algorithm to those produced by 
II. MULTILEVEL HYPERGRAPH BISECTION

We now present the framework of hMETIS, in which the coarsening and refinement schemes work directly with hyperedges without using the clique representation to transform them into edges. We have developed new algorithms for both phases which, in conjunction, are capable of delivering very good quality solutions.

A. Coarsening Phase

During the coarsening phase, a sequence of successively smaller hypergraphs is constructed. As in the case of multilevel graph bisection, the purpose of coarsening is to create a small hypergraph, such that a good bisection of the small hypergraph is not significantly worse than the bisection directly obtained for the original hypergraph. In addition to that, hypergraph coarsening also helps in successively reducing the sizes of the hyperedges. That is, after several levels of coarsening, large hyperedges are contracted to hyperedges that connect just a few vertices. This is particularly helpful, since refinement heuristics based on the KLFM family of algorithms [6]–[8] are very effective in refining small hyperedges but are quite ineffective in refining hyperedges with a large number of vertices belonging to different partitions.

Groups of vertices that are merged together to form single vertices in the next-level coarse hypergraph can be selected in different ways. One possibility is to select pairs of vertices with common hyperedges and to merge them together, as illustrated in Fig. 2(a). A second possibility is to merge together all the vertices that belong to a hyperedge, as illustrated in Fig. 2(b). Finally, a third possibility is to merge together a subset of the vertices belonging to a hyperedge, as illustrated in Fig. 2(c). These three different schemes for grouping vertices together for contraction are described below.

Fig. 2. Various ways of matching the vertices in the hypergraph and the coarsening they induce. (a) In edge-coarsening, connected pairs of vertices are matched together. (b) In hyperedge-coarsening, all the vertices belonging to a hyperedge are matched together. (c) In MHEC, we match together all the vertices in a hyperedge, as well as all the groups of vertices belonging to a hyperedge.

1) Edge Coarsening (EC): The heavy-edge matching scheme used in the multilevel graph bisection algorithm can also be used to obtain successively coarser hypergraphs by merging the pairs of vertices connected by many hyperedges. In this EC scheme, a heavy-edge maximal matching of the vertices of the hypergraph is computed as follows. The vertices are visited in a random order. For each vertex v, the unmatched vertices that belong to hyperedges incident on v are considered, and the one that is connected via the edge with the largest weight is matched with v.

2) Hyperedge Coarsening (HEC): A drawback of the EC scheme is that the hyperedge weight of successively coarser graphs does not decrease very fast. In order to ensure that for every group of vertices that are contracted together, there is a decrease in the hyperedge weight in the coarser graph, each such group of vertices must be connected by a hyperedge. This is the motivation behind the HEC scheme. In this scheme, an independent set of hyperedges is selected and the vertices that belong to individual hyperedges are contracted together. This is implemented as follows. The hyperedges are initially sorted in a nonincreasing hyperedge-weight order, and the hyperedges of the same weight are sorted in a nondecreasing hyperedge-size order. Then, the hyperedges are visited in that order, and for each hyperedge that connects vertices that have not yet been matched, the vertices are matched together. Thus, this scheme gives preference to the hyperedges that have large weight and to those that are of small size. After all of the hyperedges have been visited, the groups of vertices that have been matched are contracted together to form the next-level coarser graph. The vertices that are not part of any contracted hyperedges are simply copied to the next-level coarser graph.
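The HEC selection rule just described can be sketched as follows, under an assumed data layout in which each hyperedge is stored as a (weight, vertex list) pair; the names are illustrative and this is not the paper's implementation.

```python
def hyperedge_coarsening(num_vertices, hyperedges):
    """One level of HEC-style coarsening.

    hyperedges: list of (weight, vertices) pairs, vertices given as ints.
    Hyperedges are visited in nonincreasing weight order, ties broken by
    nondecreasing size; a hyperedge is contracted only if none of its
    vertices has been matched yet.  Returns (coarse_of, num_coarse), where
    coarse_of maps each fine vertex to its coarse vertex id.
    """
    order = sorted(range(len(hyperedges)),
                   key=lambda i: (-hyperedges[i][0], len(hyperedges[i][1])))
    matched = [False] * num_vertices
    coarse_of = [None] * num_vertices
    next_id = 0
    for i in order:
        _, verts = hyperedges[i]
        if any(matched[v] for v in verts):
            continue                       # skip: some vertex already matched
        for v in verts:                    # contract the whole hyperedge
            matched[v] = True
            coarse_of[v] = next_id
        next_id += 1
    for v in range(num_vertices):          # unmatched vertices are copied over
        if coarse_of[v] is None:
            coarse_of[v] = next_id
            next_id += 1
    return coarse_of, next_id
```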
3) Modified Hyperedge Coarsening (MHEC): The HEC algorithm is able to significantly reduce the amount of hyperedge weight that is left exposed in successively coarser graphs. However, during each coarsening phase, a majority of the hyperedges do not get contracted because vertices that belong to them have been contracted via other hyperedges. This leads to two problems. First, the size of many hyperedges does not decrease sufficiently, making FM-based refinement difficult. Second, the weight of the vertices (i.e., the number of vertices that have been collapsed together) in successively coarser graphs becomes significantly different, which distorts the shape of the contracted hypergraph. To correct this problem, we implemented a MHEC scheme as follows. After the hyperedges to be contracted have been selected using the HEC scheme, the list of hyperedges is traversed again, and for each hyperedge that has not yet been contracted, the vertices that do not belong to any other contracted hyperedge are contracted together.

B. Initial Partitioning Phase

During the initial partitioning phase, a bisection of the coarsest hypergraph is computed, such that it has a small cut and satisfies a user-specified balance constraint. The balance constraint puts an upper bound on the difference between the relative sizes of the two partitions. Since this hypergraph has a very small number of vertices (usually fewer than 200), the time to find a partitioning using any of the heuristic algorithms tends to be small. Note that it is not useful to find an optimal partition of this coarsest graph, as the initial partition will be substantially modified during the refinement phase. We used the following two algorithms for computing the initial partitioning. The first algorithm simply creates a random bisection such that each part has roughly equal vertex weight. The second algorithm starts from a randomly selected vertex and grows a region around it in a breadth-first fashion [22] until half of the vertices are in this region. The vertices belonging to the grown region are then assigned to the first part, and the rest of the vertices are assigned to the second part. After a partitioning is constructed using either of these algorithms, the partitioning is refined using the FM refinement algorithm.
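The breadth-first region-growing bisection described above can be sketched roughly as follows; the adjacency representation (two vertices are adjacent if they share a hyperedge) and the half-of-total-weight stopping rule are assumptions made for illustration.

```python
import random
from collections import deque

def grow_region_bisection(adjacency, weights, seed=None):
    """Breadth-first region growing from a random vertex.

    adjacency: dict vertex -> iterable of neighbouring vertices.
    weights:   dict vertex -> vertex weight.
    Returns a dict vertex -> 0/1 assignment in which part 0 holds roughly
    half of the total vertex weight; everything else stays in part 1.
    """
    rng = random.Random(seed)
    vertices = list(adjacency)
    target = sum(weights[v] for v in vertices) / 2.0
    start = rng.choice(vertices)

    part = {v: 1 for v in vertices}        # everything starts in part 1
    grown = 0.0
    visited = {start}
    queue = deque([start])
    while queue and grown < target:
        v = queue.popleft()
        part[v] = 0                        # absorb v into the grown region
        grown += weights[v]
        for u in adjacency[v]:
            if u not in visited:
                visited.add(u)
                queue.append(u)
    return part
```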
Since both of these initial partitioning algorithms are randomized, different runs give solutions of different quality. For this reason, we perform a small number of initial partitionings. At this point, we can select the best initial partitioning and project it to the original hypergraph, as described in Section II-C. However, the partitioning of the coarsest hypergraph that has the smallest cut may not necessarily be the one that will lead to the smallest cut in the original hypergraph. It is possible that another partitioning of the coarsest hypergraph (with a higher cut) will lead to a better partitioning of the original hypergraph after the refinement is performed during the uncoarsening phase. For this reason, instead of selecting a single initial partitioning (i.e., the one with the smallest cut), we propagate all initial partitionings. Note that propagating several partitionings increases the time taken by the refinement phase roughly in proportion to their number; thus, by increasing the number of partitionings that are propagated, we can potentially improve the quality of the final partitioning at the expense of a higher run time. One way to limit this cost is to drop unpromising partitionings as the hypergraph is uncoarsened. For example, one possibility is to propagate only those partitionings whose cuts are within a given percentage of the best cut at the current level. If this tolerance is sufficiently large, then all partitionings will be maintained and propagated in the entire refinement phase. On the other hand, if the tolerance is small, many partitionings may be available at the coarsest graph, but the number of such available partitionings will decrease as the graph is uncoarsened. This is useful for two reasons. First, it is more important to have many alternate partitionings at the coarser levels, as the size of the cut of a partitioning at a coarse level is a less accurate reflection of the size of the cut of the original finest-level hypergraph. Second, refinement is more expensive at the fine levels, as these levels contain far more nodes than the coarse levels. Hence, by choosing an appropriate tolerance, we retain the benefit of multiple alternate partitionings without a large increase in run time. Our experience has shown that increasing this tolerance (from 10% to a higher value such as 20%) did not significantly improve the quality of the partitionings, although it did increase the run time.

C. Uncoarsening and Refinement Phase

During the uncoarsening phase, a partitioning of the coarser hypergraph is successively projected to the next-level finer hypergraph, and a partitioning refinement algorithm is used to reduce the cut set (and thus to improve the quality of the partitioning) without violating the user-specified balance constraints. Since the next-level finer hypergraph has more degrees of freedom, such refinement algorithms tend to improve the solution quality.

We have implemented two different partitioning refinement algorithms. The first is the FM algorithm [8], which repeatedly moves vertices between partitions in order to improve the cut. The second algorithm, called hyperedge refinement (HER), moves groups of vertices between partitions so that an entire hyperedge is removed from the cut. These algorithms are further described in the remainder of this section.

1) FM: The partitioning refinement algorithm by Fiduccia and Mattheyses [8] is iterative in nature. It starts with an initial partitioning of the hypergraph.
In each iteration, it tries to find subsets of vertices in each partition, such that moving them to other partitions improves the quality of the partitioning (i.e., the number of hyperedges being cut decreases) and this does not violate the balance constraint. If such subsets exist, then the movement is performed and this becomes the partitioning for the next iteration. The algorithm continues by repeating the entire process. If it cannot find such a subset, then the algorithm terminates, since the partitioning is at a local minimum and no further improvement can be made by this algorithm. In particular, for each vertex v, the algorithm computes the gain, which is the reduction in the hyperedge cut achieved by moving v to the other partition. Initially all vertices are unlocked, i.e., they are free to move to the other partition. The algorithm iteratively selects and moves an unlocked vertex with the largest gain; when a vertex is moved, it is locked, and the gains of the vertices adjacent to it are updated. Further details of the algorithm can be found in [8].

For refinement in the context of multilevel schemes, the initial partitioning obtained from the next-level coarser graph is actually a very good partition. For this reason, we can make a number of optimizations to the original FM algorithm. The first optimization limits the maximum number of passes performed by the FM algorithm to only two. This is because the greatest reduction in the cut is obtained during the first or second pass and any subsequent passes only marginally improve the quality. Our experience has shown that this optimization significantly improves the run time of FM without affecting the overall quality of the produced partitionings. The second optimization aborts each pass of the FM algorithm before actually moving all the vertices. The motivation behind this is that only a small fraction of the vertices being moved actually lead to a reduction in the cut and, after some point, the cut tends to increase as we move more vertices. When FM is applied to a random initial partitioning, it is quite likely that after a long sequence of bad moves, the algorithm will climb out of a local minimum and reach a better cut. However, in the context of a multilevel scheme, a long sequence of cut-increasing moves rarely leads to a better local minimum. For this reason, we stop each pass of the FM algorithm as soon as we have performed a certain number of vertex moves that do not improve the cut; this limit is set to be equal to 1% of the number of vertices in the graph we are refining. This modification to FM, called early-exit FM (FM-EE), does not significantly affect the quality of the final partitioning, but it dramatically improves the run time (see Section IV).
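A compact sketch of a single early-exit FM pass along the lines described above is given below; the gain bookkeeping is delegated to caller-supplied callbacks, the roll-back to the best prefix of moves follows the standard FM formulation rather than any detail quoted here, and all names are illustrative.

```python
def fm_pass_early_exit(part, gain_of, update_after_move, balance_ok,
                       early_exit_limit):
    """One pass of FM refinement with the early-exit rule.

    part:             dict vertex -> 0 or 1, modified in place.
    gain_of:          dict vertex -> current gain (cut reduction) of moving it.
    update_after_move(v): recompute in gain_of the gains of v's neighbours.
    balance_ok(v):    True if moving v keeps the balance constraint.
    early_exit_limit: stop after this many consecutive moves that fail to
                      improve on the best cut seen so far (the text sets
                      this to 1% of the number of vertices being refined).
    """
    locked = set()
    moves = []                       # (vertex, gain at the time of the move)
    cum_gain = best_gain = 0
    bad_moves = 0

    while bad_moves < early_exit_limit:
        candidates = [v for v in gain_of if v not in locked and balance_ok(v)]
        if not candidates:
            break
        v = max(candidates, key=lambda u: gain_of[u])
        g = gain_of[v]
        part[v] ^= 1                 # move v to the other partition
        locked.add(v)
        moves.append((v, g))
        cum_gain += g
        if cum_gain > best_gain:
            best_gain, bad_moves = cum_gain, 0
        else:
            bad_moves += 1           # move did not improve the best cut
        update_after_move(v)

    while moves and cum_gain < best_gain:
        v, g = moves.pop()           # undo moves made after the best point
        part[v] ^= 1
        cum_gain -= g
    return best_gain                 # total reduction in the cut this pass
```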
2) HER: One of the drawbacks of FM (and other similar vertex-based refinement schemes) is that it is often unable to refine hyperedges that have many nodes on both sides of the partitioning boundary. However, a refinement scheme that moves all the vertices that belong to a hyperedge can potentially solve this problem. Our HER works as follows. It randomly visits all the hyperedges and, for each one that straddles the bisection, it determines if it can move a subset of the vertices incident on it, so that this hyperedge will become completely interior to a partition. In particular, for a hyperedge that is cut by the bisection, it considers the set of its vertices that belong to partition 0 and the set that belong to partition 1, and it computes the gain of moving either set entirely to the other partition. Now, depending on these gains and subject to balance constraints, it may move one of the two sets.

III. MULTIPHASE REFINEMENT WITH RESTRICTED COARSENING

Although the multilevel paradigm is quite robust, randomization is inherent in all three phases of the algorithm. In particular, the random choice of vertices to be matched in the coarsening phase can disallow certain hyperedge cuts, reducing refinement in the uncoarsening phase. For example, consider the example hypergraph in Fig. 3(a) and its two possible condensed versions [Fig. 3(b) and (c)] with the same partitioning. The version in Fig. 3(b) is obtained by selecting certain hyperedges to be compressed in the HEC phase and then selecting certain pairs of nodes to be compressed, whereas the version in Fig. 3(c) is obtained from a different selection of hyperedges and node pairs.

Fig. 3. Effect of restricted coarsening. (a) Example hypergraph with a given partitioning with the required balance of 40/60. (b) Possible condensed version of (a). (c) Another condensed version of the hypergraph.

The multiphase refinement approach takes as input a hypergraph and a partitioning of it, and it applies coarsening that is restricted so as to preserve that partitioning: given a hypergraph Hi and a partitioning Pi, only vertices that belong to the same partition are allowed to be grouped together, producing a sequence of successively coarser hypergraphs and partitionings H0, P0; H1, P1; ...; Hm, Pm. That is, if vertices u and v of Hi are collapsed together to form vertex w of Hi+1, then vertex w belongs to the same partition as u and v.
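To make the restriction concrete, here is a small sketch of HEC-style coarsening that also respects a given bisection, so that only vertices in the same part are collapsed together; the data layout mirrors the earlier coarsening sketch and is an assumption for illustration.

```python
def restricted_hyperedge_coarsening(num_vertices, hyperedges, part):
    """HEC-style coarsening that preserves a given bisection.

    hyperedges: list of (weight, vertices) pairs, vertices given as ints.
    part:       list mapping each fine vertex to its partition (0 or 1).
    A hyperedge is contracted only if all of its vertices are unmatched
    AND lie in the same partition, so the coarse hypergraph inherits a
    well-defined projection of the partitioning.
    """
    order = sorted(range(len(hyperedges)),
                   key=lambda i: (-hyperedges[i][0], len(hyperedges[i][1])))
    matched = [False] * num_vertices
    coarse_of = [None] * num_vertices
    coarse_part = []                       # partition of each coarse vertex
    for i in order:
        _, verts = hyperedges[i]
        verts = list(verts)
        if any(matched[v] for v in verts):
            continue
        if len({part[v] for v in verts}) != 1:
            continue                       # hyperedge straddles the cut: skip
        cid = len(coarse_part)
        coarse_part.append(part[verts[0]])
        for v in verts:
            matched[v] = True
            coarse_of[v] = cid
    for v in range(num_vertices):          # unmatched vertices are copied over
        if coarse_of[v] is None:
            coarse_of[v] = len(coarse_part)
            coarse_part.append(part[v])
    return coarse_of, coarse_part
```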

Land Fragmentation and Consolidation in Albania


Fatbardh Sallaku Agricultural University of Tirana, AlbaniaLand Fragmentation and Consolidation in AlbaniaOVERVIEWCountry ProfileLand ReformDegree of FragmentationLand Consolidation activity in AlbaniaLessons LearnedStrategy for the futureConclusions¾28 000 km2¾3.8 million inhabitants¾50 % Rural Population¾24 % Agricultural Land¾28-30 % Share of Agricultural in GDP ¾The lowest amount of agricultural land per capita (0.22 hectares) in the region.¾There are 387 930 farms in totalLand Reform in AlbaniaThere are two outstanding characteristics of the development of land relations since 19919The first is the creation of a nation of smallholders-owners of small farms held in freehold tenure brought about by Law 7501. freehold tenure brought about by Law 75019The second characteristic and one that is directly related to the first is the exuberant urban development and rapid growth of land market that has taken placePrivatization of Rural LandThe land privatization process began in 1991 with the approval of Law 7501 (dated 19.07.1991), On LandMain CriteriaÎEquity Principle according to quality and productivity of the soil and the number of people in the family registered in the civil registry in August, 1991.Using a per capita basis, each family received equal amounts of arable and non-arable land, fruit trees, vineyards and olive trees.Results of this privatization process a)Over 90% of agricultural land is now in private ownershipb)On ex-co-operative land, 353,718 families owned 439,139 ha of land with over 90% granted via a tapi.c)On ex-state farm land, the figures are 91000 families owning 123.334 hectares of landd)On average each family owns 4-6 parcels of land, sometimes separated quite widely.A nation of family smallholding has beencreated.Degree of Fragmentation•up to 1.8 million parcels•4-6 parcels per Owner as an average•Average family land surface 1.17 ha •Average parcel size 0.55-0.2 ha •Average farmer’s distance to the parcel from 1 to 6 kmMain reasons of land fragmentation •Content of law itself •Demographic development of villages and population concentration in particular areas •Land privatization during several stages•Different interpretations of law on land•Natural fragmentation •InheritanceLand Consolidation Activity in Albania In the year 2002, the World Bank and GoA has started the implementation of a Pilot Agricultural Land Consolidation Program in four pilot communes in order to:… facilitate and encourage land consolidation by facilitating market transactions in land with a focus on parcel exchanges, rentals and purchase/sales.The specific objectives were:9to respond to the perceived need and expressed desire for consolidationactivities by rural residents9to overcome the information constraint about land consolidation9to address questions of costs and benefits of a policy intervention;9to create a body of experience, techniques and procedures to guide a national level program and to judge the usefulness of such a program.The main activities of this project: a.Facilitation of transactions through thesponsoring of a participatory process ofnegotiated exchange and rental;b.Subsidies to the transaction costsinvolved in formal land markettransactions;c.Public information and educational activityabout parcel grouping, formation ofassociations and land market transactions d.Legal assistance when needed.The importance of the project…9The project played an important role and had a significant impact both directly forthe beneficiaries in the pilot areas and toa lesser extent 
indirectly on the politicallevel among decision makers;9The simple fact that the project was the first of its kind in Albania it had a crucialkick-off effect not only in terms of givingland fragmentation a much higher priority and attention but more notably inproviding concrete alternatives to overcome unsustainable farm structures.9By looking at the gradual increase of transactions over the project period it may indicate that the adopted methodology is indeed practical, hands-on, simple and efficient and suits the farmers’requirements;9Through the transactions made an agricultural land size of 94 ha has been consolidated;9From all this land size, 49 ha has been transferred from one owner to the other through sales, exchanges and leasing adding to the initial land size of 45 Ha.9Although initially farmers showed reluctance to the whole concept of land consolidation the fears to ‘loose out’ or ‘give up’ became gradually less.The main achievementsThe findings from the pilot areas in Albania provided reliable evidence that the projecthas been instrumental to give way and stimulate the following processes:•The project provided guidance for farmers on how to use the new asset land in a more productive and efficient way;9It raised awareness and brought the issue of land fragmentation to the attention of decision-makers and other stakeholders and it provided concrete alternatives to current unsustainable agriculture structures;9It designed and implemented simple, local, hands-on, cost-effective and easy multipliable solutions to overcome land fragmentation;9It stimulated local land market development, contributed to farm intensification as well as it contributed totrust and confidence building both horizontally between farmers and verticallyamong farmers and (local) authorities;9It provided guidance and an operating manual for the land consolidation component which provides farmers andother stakeholders an insight into theconcept of land consolidation and clearindications on how to proceed with therelevant transactions.9As the land fragmentation remains a very important constraint with negativeimpact in Albanian agriculture, one of the factors, which could positively affect the situation, is the development of landmarket.9The normal procedures to complete an agricultural land transaction was too long (from 2 to 4 weeks).9Another serious hindrance fortransaction finalization is that adultpersons are not always present at thetime of land transactions. 
(According to “The Heritage Law”, persons over 18 years of age should be present to sign for transactions or provide aPower of Attorney authorizing otherpeople to act on their behalf)9Mistakes made in the landownership documentationowned by farmers and the IPRSsystem impede the process ofcarry out of the transactions and artificially increase the cost of a transaction;The Strategy for the futureThe following issues should be integrated into the long termland consolidation scheme in Albania;The Strategy for the future9Give preference and put more emphasize on land exchangesand/or amalgamation of plots and a cost sharing scheme involving all relevant stakeholders;9Review and update the current methodology integrating lessons learned from the pilot sites and best practices from other elsewhereThe Strategy for the future9Preparation of an appropriate, easy-understandable and differentiated landvaluation scheme for both agriculture land and other land use systems;9Moratorium to prohibit changes in land use in rural areas with immediate effect and theestablishment of a clear and transparent land market/land value information system9Analyse models and possibilities to ease access to financial services such as mortgage-secured credit schemes using land ascollateral and the development of acomprehensive rural-regional developmentstrategyConclusions:Although Albania is faced with political, economic and social problems, important steps have been achieved.A legal framework for land management has been created since 1991 but it consist of too many ‘reactive”laws; too many laws dealing with just one issue; laws which do not have a common philosophy9In order to impede the further fragmentarization of the agricultural land, a review of the existinglegislation is required, aiming tomarginate parcel borders out ofwhich the division of the parcelswould be impeded by law, even incases when a family member isseparated by the family tree.9It is recommended the review of the current legislation and theestablishment of a permanentinter-institutional/agency working group including representativesfrom different line ministries andagencies, local authorities, farmer associations, the private sectorand civil society.Conclusions:9Critical pre-conditions such as the establishment of an easy-understandable and differentiated land valuation scheme for both agricultureland and other land use purposes,clear and transparent land priceinformation systems, restrictions onland sale and land lease, potential land conflicts and disputes, minimumtraining needs for staff members etc.are the main issues to be consideredin the future.9The Pilot ConsolidationProgram was not enough tocover the broad spectre of the agricultural land consolidation issues.So, it’s recommended that other similar initiatives on landconsolidation should continue.9The agricultural land consolidation in larger areas could be unrealistic for themoment and need a very strong financial support to improve rural infrastructure,mechanization, irrigation, agroprocessing and marketing.9The Albanian government and donors must commit time andresources and coordinate efforts to overcome these impediments andallow for security of tenure and afully functioning land marketOK, that’s it.I hope you enjoyed it andThank YOU_____________________________© Fatbardh Sallaku(sallaku@) Agricultural University of Tirana。

The 2011 Oxford CEBM Levels of Evidence Introductory Document


This must be read before using the Levels: no evidence ranking system or decision tool can be used without a healthy dose of judgment and thought.What the 2011 OCEBM Levels of Evidence IS1. A hierarchy of the likely best evidence.2.Designed so that it can be used as a short-cut for busy clinicians, researchers, orpatients to find the likely best evidence. To illustrate you may find the followinganalogy useful (Figure 1). Imagine making a decision about treatment benefits in ‘real time’ (a few minutes, or at most a few hours). There are five boxes eachcontaining a different type of evidence: which box would you open first? Fortreatment benefits and harms, systematic reviews of randomized trials have been shown to provide the most reliable answers (1), suggesting we begin by searching for systematic reviews of randomized trials. If we didn’t find any evidence in thesystematic review box, you would go onto search for individual randomized trials,and so on across the OCEBM Levels of Evidence.Figure 1. If you have limited time, where do you begin searching for evidence?In an ideal world we would conduct our own systematic review of all the primary evidence if the systematic review box were empty. But we rarely have time for this.In searching for evidence about the benefits and harms of many ailments we often encounter thousands of articles. For example, a PubMed search of the words "atrial fibrillation AND warfarin" finds 2,175 hits, (see Table 1). You will not have time to filter all of these, let alone assess and review these, so it is rational to begin with the next best evidence – such as one of the seven randomized trials.Table 1. Results of a PubMed search for “atrial fibrillation AND warfarin” plus some filtersType Term used Number of articlesAll articles (no filter)2175RCT"random allocation" [MeSH]7Cohort"cohort studies" [MeSH]366Case-control"Case-Control Studies"[Mesh]234Case report Case Reports [Publication Type]1963.The OCEBM Levels assists clinicians to conduct their own rapid appraisal. Pre-appraisedsources of evidence such as Clinical Evidence, NHS Clinical Knowledge Summaries, Dynamed, Physicians’ Information and Education Resource (PIER), and UpToDatemay well be more comprehensive, but risk reliance on expert authority.What the OCEBM Levels is NOT1.The Levels are NOT dismissive of systematic reviews. On the contrary, systematicreviews are better at assessing strength of evidence than single studies(2, 3) and should be used if available. On the other hand clinicians or patients might have to resort to individual studies if systematic reviews are unavailable. The one exception is for questions of local prevalence, where current local surveys are ideal.2.The Levels is NOT intended to provide you with a definitive judgment about thequality of evidence. There will inevitably be cases where ’lower level’ evidence – say from an observational study with a dramatic effect – will provide stronger evidence than a ‘higher level’ study – say a systematic review of few studies leading to an inconclusive result (see Background Document).3.The Levels will NOT PROVIDE YOU WITH A RECOMMENDATION(4). Even if atreatment’s effects are supported by best evidence, you must consider at least the following questions before concluding that you should(5, 6) use the treatment:a.Do you have good reason to believe that your patient is sufficientlysimilar to the patients in the studies you have examined? 
Informationabout the size of the variance of the treatment effects is often helpful here: thelarger the variance the greater concern that the treatment might not be usefulfor an individual.b.Does the treatment have a clinically relevant benefit that outweighsthe harms? It is important to review which outcomes are improved, as astatistically significant difference (e.g. systolic blood pressure falling by1mmHg) may be clinically irrelevant in a specific case. Moreover, any benefitmust outweigh the harms. Such decisions will inevitably involve patients`valuejudgments, so discussion with the patient about their views and circumstancesis vital (see (d) below)(7).c.Is another treatment better? Another therapy could be ‘better’ with respectto both the desired beneficial and adverse events, or another therapy maysimply have a different benefit/harm profile (but be perceived to be morefavourable by some people) . A systematic review might suggest that surgeryis the best treatment for back pain, but if if exercise therapy is useful, thismight be a more acceptable to the patient than risking surgery as a firstoption.d.Are the patient’s values and circumstances compatible with thetreatment? (8, 9). If a patient’s religious beliefs prevent them from agreeingto blood transfusions, knowledge about the benefits and harms of bloodtransfusions is of no interest to them. Such decisions pervade medical practice,including oncology, where sharing decision making in terms of the dose ofradiation for men opting for radiotherapy for prostate cancer is routine (10). 4.The Levels will NOT tell you whether you are asking the right question. If youinterpret meningitis as the common flu, then consulting the Table to find the best treatment for flu, will not help.Differences between the 2011 Levels and other evidence-ranking schemes Different evidence ranking schemes (11-14) are geared to answer different questions(5). The current OCEBM Levels is an improvement over the older Table in that the structure reflects clinical decision-making; moreover it is simpler (fewer footnotes) and is accompanied by an extensive glossary. Then, unlike GRADE, it explicitly refrains from making definitive recommendations, and it can be used even if there are no systematicreviews available.Jeremy Howick, Iain Chalmers, Paul Glasziou, Trish Greenhalgh, Carl Heneghan, Alessandro Liberati, Ivan Moschetti, Bob Phillips, and Hazel Thornton. "The 2011 Oxford CEBM Evidence Levels of Evidence (Introductory Document)". Oxford Centre for Evidence-Based Medicine. /index.aspx?o=5653References1. Lacchetti C, Ioannidis JP, Guyatt G. Surprising results of randomized, controlledtrials. In: Guyatt G, Rennie D, editors. The Users' Guides to the MedicalLiterature: A Manual for Evidence-Based Clinical Practice. Chicago, IL: AMAPublications; 2002.2. Chalmers I. The lethal consequences of failing to make full use of all relevantevidence about the effects of medical treatments: the importance of systematicreviews. In: Rothwell PM, editor. Treating individuals: from randomised trials topersonalized medicine. London: The Lancet; 2007.3. Lane S, Deeks J, Chalmers I, Higgins JP, Ross N, Thornton H. Systematic Reviews.In: Science SA, editor. London2001.4. Guyatt GH, Oxman AD, Kunz R, Falck-Ytter Y, Vist GE, Liberati A, et al. Going fromevidence to recommendations. BMJ. 2008 May 10;336(7652):1049-51.5. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al.GRADE: an emerging consensus on rating quality of evidence and strength ofrecommendations. 
BMJ. 2008 Apr 26;336(7650):924-6.6. Hume D, Norton DF, Norton MJ. A treatise of human nature. Oxford: OxfordUniversity Press; 2000.7. Hewlett SA. Patients and clinicians have different perspectives on outcomes inarthritis. J Rheumatol. 2003 Apr;30(4):877-9.8. Haynes RB, Devereaux PJ, Guyatt GH. Clinical expertise in the era of evidence-based medicine and patient choice. ACP J Club. 2002 Mar-Apr;136(2):A11-4.9. Howick J. The Philosophy of Evidence-Based Medicine. Oxford: Wiley-Blackwell;2011.10. van Tol-Geerdink JJ, Stalmeier PF, van Lin EN, Schimmel EC, Huizenga H, vanDaal WA, et al. Do patients with localized prostate cancer treatment really wantmore aggressive treatment? J Clin Oncol. 2006 Oct 1;24(28):4581-6.11. Canadian Task Force on the Periodic Health Examination. The periodic healthexamination. Can Med Assoc J. 1979;121:1193-254.12. Sackett DL. Rules of evidence and clinical recommendations on the use ofantithrombotic agents. Chest. 1986 Feb;89(2 Suppl):2S-3S.13. Sackett DL. Rules of evidence and clinical recommendations on the use ofantithrombotic agents. Chest. 1989 Feb;95(2 Suppl):2S-4S.14. Cook DJ, Guyatt GH, Laupacis A, Sackett DL. Rules of evidence and clinicalrecommendations on the use of antithrombotic agents. Chest. 1992 Oct;102(4Suppl):305S-11S.。

WHO Model Formulary for Children


WHO Model Formulary for Children
Based on the Second Model List of Essential Medicines for Children 2009

WHO Library Cataloguing-in-Publication Data:
WHO model formulary for children 2010. Based on the second model list of essential medicines for children 2009.
1. Essential drugs. 2. Formularies. 3. Pharmaceutical preparations. 4. Child. 5. Drug utilization. I. World Health Organization.
ISBN 978 92 4 159932 0 (NLM classification: QV 55)

© World Health Organization 2010. All rights reserved. Publications of the World Health Organization can be obtained from WHO Press, World Health Organization, 20 Avenue Appia, 1211 Geneva 27, Switzerland (tel.: +41 22 791 3264; fax: +41 22 791 4857; e-mail: ******************). Requests for permission to reproduce or translate WHO publications, whether for sale or for noncommercial distribution, should be addressed to WHO Press, at the above address (fax: +41 22 791 4806; e-mail: *******************).

Eurocode NA to BS EN 1993-1-1


• 6.3.2.4(2)B • 6.3.3(5) • 6.3.4(1) • 7.2.1(1)B • 7.2.2(1)B • 7.2.3(1)B • BB.1.3(3)B
b) decisions on the status of BS EN 1993-1-1:2005 informative annexes; and
Contents
Introduction
NA.1 Scope
NA.2 Nationally Determined Parameters
NA.3 Decisions on the status of informative annexes
NA.4 References to non-contradictory complementary information
Bibliography
• 2.3.1(1) • 3.1(2) • 3.2.1(1) • 3.2.2(1) • 3.2.3(1) • 3.2.3(3)B • 3.2.4(1)B • 5.2.1(3) • 5.2.2(8)
• 5.3.2(3) • 5.3.2(11) • 5.3.4(3) • 6.1(1) • 6.1(1)B • 6.3.2.2(2) • 6.3.2.3(1) • 6.3.2.3(2) • 6.3.2.4(1)B
NA.2.6
Fracture toughness [BS EN 1993-1-1:2005, 3.2.3(1)]
For buildings and other quasi-statically loaded structures, the lowest service temperature in the steel should be taken as the lowest air temperature, which may be taken as −5°C for internal steelwork and −15°C for external steelwork. For bridges, the lowest service temperature in the steel should be determined according to the NA to BS EN 1991-1-5 for the bridge location. For structures susceptible to fatigue, it is recommended that the requirements for bridges should be applied. In other cases (e.g. the internal steelwork in cold stores), the lowest service temperature in the steel should be taken as the lowest air temperature expected to occur within the intended design life of the structure.
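As a quick illustration of how this clause might be encoded in a design script, the following helper returns the values quoted above for building steelwork; the function name and exposure categories are illustrative assumptions, and bridge or fatigue-governed cases are intentionally not handled because they require site data from the NA to BS EN 1991-1-5.

```python
def lowest_service_temperature(exposure, expected_min_air_temp=None):
    """Lowest service temperature (degrees C) for building steelwork.

    exposure: "internal", "external", or "other" (e.g. cold stores), per the
    clause quoted above.  For "other", the caller must supply the lowest air
    temperature expected during the design life of the structure.
    """
    if exposure == "internal":
        return -5.0
    if exposure == "external":
        return -15.0
    if exposure == "other":
        if expected_min_air_temp is None:
            raise ValueError("supply the lowest expected air temperature")
        return expected_min_air_temp
    raise ValueError("bridge/fatigue cases need NA to BS EN 1991-1-5 data")

print(lowest_service_temperature("external"))   # -15.0
```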

Economic development and carbon dioxide emissions in China


Abstract: This paper investigates the driving forces, emission trends and reduction potential of China’s carbon dioxide (CO2) emissions based on a provincial panel data set covering the years 1995 to 2009. A series of static and dynamic panel data models are estimated, and then an optimal forecasting model selected by out-of-sample criteria is used to forecast the emission trend and reduction potential up to 2020. The estimation results show that economic development, technology progress and industry structure are the most important
specific effect term in the regression model, thus improving the estimation performance (Hsiao, 2003; Baltagi, 2005). A number of previous studies on international CO2 emissions have taken advantage of cross-country panel data econometric models, such as Holtz-Eakin and Selden (1995), Tucker (1995), Schmalensee et al. (1998), Lantz and Feng (2006), Maddison (2006), and Aldy (2007). Most recently, Auffhammer and Carson (2008) attempted to forecast China's CO2 emissions path for the foreseeable future using provincial panel data models, and they found evidence of underestimation in previous studies conducted on time series or cross-sectional data. Their estimation results, however, are based not on a panel data set of CO2 emissions but on waste gas emissions.
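To illustrate the kind of specification the passage refers to, here is a hedged sketch of a static provincial fixed-effects panel regression in Python; the file name, variable names, and functional form are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical provincial panel: one row per province-year with CO2
# emissions, GDP per capita, and industry share (illustrative columns).
df = pd.read_csv("province_panel.csv")
df["log_co2"] = np.log(df["co2_emissions"])
df["log_gdp"] = np.log(df["gdp_per_capita"])

# Static fixed-effects specification: province dummies absorb the
# time-invariant province-specific effect; year dummies absorb common shocks.
model = smf.ols(
    "log_co2 ~ log_gdp + I(log_gdp**2) + industry_share"
    " + C(province) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["province"]})

print(model.summary())
```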

Environmental Impacts of a North American Free Trade Agreement

the size of pollution abatement costs in the U.S. industry influences the pattern of international trade and investment. Finally, in Section 3, we use the results from a computable general equilibrium model to study the likely compositional effect of a NAFTA on pollution

of scarce natural resources. A more pointed argument recognizes that pollution already is a severe problem in Mexico and that the country's weak regulatory infrastructure is strained to the breaking point. Under these

were required to locate within a 20-kilometer strip along the U.S.-Mexico border in order to qualify for special customs treatment. The sector grew quite
NBER WORKING PAPERS SERIES
ENVIRONMENTAL IMPACTS OF A NORTH AMERICAN FREE TRADE AGREEMENT
Gene M. Grossman

PrFQ Probabilistic Fair Queuing

Two alternative approaches to approximate fair bandwidth allocation are Core-Stateless Fair Queueing (CSFQ [9]) and Rainbow Fair Queueing (RFQ [10]). Both algorithms achieve a rough approximation of the fair bandwidth allocation without maintaining per-flow state, albeit relying on a tight coupling between edge and core routers.
Deterministic approaches to packet fair queuing have evolved from the work of Nagle [8] on isolating misbehaving flows. Nagle proposed maintaining a separate FIFO for each flow and serving these queues in a round-robin fashion. This scheme suffers from two major drawbacks: (a) flows with larger packet sizes get a larger fraction of the link bandwidth, and (b) the scheme does not allow a weighted sharing of the link resources. Both problems were addressed by the WFQ algorithm and its successors. WFQ introduced the notion of weighted fair queuing [1] by assigning different weights to different flows and serving flows by allocating resources in proportion to their weights.
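As a rough illustration of the weighted-sharing idea behind WFQ, the toy scheduler below stamps each packet with a virtual finish tag proportional to its length divided by the flow weight and serves packets in tag order; the real WFQ/PGPS virtual-clock update is more involved, and all names here are illustrative.

```python
import heapq

class WeightedFairQueue:
    """Toy packet scheduler based on WFQ-style virtual finish tags.

    A packet of length L arriving on flow i with weight w_i is stamped with
    finish = max(virtual_time, last_finish_i) + L / w_i, and packets are
    served in increasing finish-tag order, so flows receive service roughly
    in proportion to their weights regardless of packet size.
    """
    def __init__(self):
        self.last_finish = {}    # flow id -> finish tag of its last packet
        self.virtual_time = 0.0
        self.heap = []           # (finish tag, seq, flow, packet length)
        self.seq = 0

    def enqueue(self, flow, packet_len, weight):
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + packet_len / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, packet_len))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, flow, packet_len = heapq.heappop(self.heap)
        self.virtual_time = finish   # crude virtual-clock advance
        return flow, packet_len

q = WeightedFairQueue()
q.enqueue("flow-a", packet_len=1500, weight=1.0)
q.enqueue("flow-b", packet_len=1500, weight=3.0)   # three times the share
print(q.dequeue())   # flow-b is served first: it has the smaller finish tag
```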

Analysis of Steve Jobs's leadership


SIAS INTERNATIONAL UNIVERSITYINFLUENTIALLEADER/FOLLOWER PAPERName: Shi Ge Student NO: 20101816131 Major: Business English Class: FH 1 Department: School of International EducationTime: June 18, 2014Analysis of Steve Jobs's LeadershipIntroductionJust as what people usually say, history always gets the first row in truth. So we have been told by history that a leader’s brilliant le adership behaviors is an important guarantee for the success of an enterprise. In the following part, I will illustrate my view more clearly through my analysis of Steve Jobs, a big name and iconic figure in our modern IT industry.Personal profile“Steve Paul Jobs, American former Apple CEO, founder. In 1976, when he was 21-year-old Jobs and 26-year-old Wo Zini Ike formed Apple Computer. Jobs created the Macintosh computer has led, ipad, iPod, iTunes Store, iPhone and many other well-known digital product. In 1985, Jobs received a grant from the Ronald Reagan Presidential Medal of state-level technology; 1997 as "Times" on the cover; 2007, Steve Jobs was "Fortune" magazine as the greatest businessman of the year. 2009 by Fortune magazine as the best decade. American CEO, the same year was elected one of Time magazine Person of the Year.”This is the universally recognized Steve Jobs, his whole life was fulfilled with marvelous legends, which are tied closely to his individual leadership abilities. Also, we can gain much more valuable inspiration from his leadership behavior traits.Visionary Leading and Strategic PlansVisionary leaders work through a vision that appeals to followers’ needs and motivations (Avery, 2004). That is, visionary leaders are expected to provide a clear vision of the future, develop a road map for the journey ahead, and motivate followers to perform and achieve goals beyond normal expectations. This involves the emotional commitment of followers. (Bass, 1985; Kantabutra, 2003).According to the definition above, we know that vision can be considered as a dream or a beautiful blueprint of the future. At the initial stage of the company establishment, Steve Jobs has already designed a ideal vision--To change the world. From the first Macintosh computer to iPhone4s, every digital product launched by Jobs could leave a deep impression on consumer, surprised the world market greatly, and won the popularity with the massive people as well. Certainly, having a vision alone is not enough to ensure the success of Apply Company. With his being back to Apple company, Jobs immediately made bold reforms as a new start and designed several strategic plans. At the same time, Steve Jobs firmly decided to shake hands with Bill Gates--his old opponent, and developed a friendly and cooperative relationship with Microsoft company, which became worldwide hit news. Consequently, Apple company got rid of the long-time unsolvable risks and the market capitalization also increased sharply.When a leader combined vision and strategic plan together, the future of an organization tends to appear a completely new image. 
The greatest achievement of Steve Jobs is that he integrated his ideal vision with Apple’s strategic plan, turning the idea--changing the world by Apple’s products into reality and changing it intoeffort-being actions rather than only keeping it in his mind.Personality TraitThe theory of personality trait in leadership suggests that effective personalities and trains of a leader are not the decisive element of cultivating a successful leader, but to a large extent, they can decide t he leader’s behaviors effective or not.As for Steve Jobs, he is described as widely arrogant and extremely conceited, as well as a standard perfectionist and elitist. Moreover, he even regarded the majoritypeople as idiots. However, these remarkable personality traits are exactly the business philosophy that developing Apple company into a classic without copies. Only hiring the smartest ones, never having a plan B, pursuing cruel perfection, all of them are more requirements and rules of creating Apple products than personal traits of Steve Jobs. Having a look at these unique Apple products, Mini iPod Nano with size of a matchbox, iPod with video function, FrontRow with the function of distant controlling television from your sofa, then you may have a better understanding of how these products give reflection of Jobs’s personality.In general, the spirits of extreme perfection and crazy pursuing of innovation have become Apple’s brand culture,which are all developed from Jobs’s personal traits.Unique leadership styleWhen talking about the leadership style of Steve Jobs, I don’t think a specific leading style can fully describe his special and complicate leadership art. But if I required to classify, I think it much more approaches to the task-oriented leadership style. According to Fred Fiedler’s Contingency theory of leadership, “a task-oriented leader focuses only on getting the job done, and can be quite autocratic. He or she will actively define the work and the roles required, put structures in place, plan, organize and monitor. However, as task-oriented leaders spare little thought for the well-being of their teams, this approach can suffer many of the flaws of autocratic leadership, with difficulties in motivating and retaining staff”. (Fred Fiedle r, 1950s).The leader behaviors under this type includes distributing tasks, making plans, focusing on the results etc, which are also the leading traits of Steve Jobs. There is a famous nickname for Jobs--a boss from hell. He has a high requirement for his working team, could not bear unsmart staffs. In his famous marathon session on every Monday, he and his team have the tradition to check the entire business system, such as the salesof the past days, every little bottleneck design, and even delaying the launching date as long as the design not satisfied him. In the relationship around Steve Jos, many people admit that they have achieved a new level in career that beyond their imagination, and meanwhile, all of the people have the same feeling, that is no place can drive them reaching such achievements other than Apple company.ConclusionIn the end it is time to naturally draw a conclusion about the essay. Maybe my analysis about Steve Jobs’s leadership behavior is not perfect, but at least one thing for sure, everyone of us should learn something from this great leader, feeding our dreams big and then making them come true.Reference1.[美]沃尔特·艾萨克森. 乔布斯传. 中信出版社,2011,32~41.2.吕力.事业导向型的苛责式领导:对《史蒂夫·乔布斯传》的扎根研究,经济研究导刊,No.22 20123.尹之峰,白延静. 论一把手的领导力,经济研究导刊,No.14 2012。

IGFBP7 FOR PREDICTION OF RISK OF AKI WHEN MEASURED


Patent title: IGFBP7 FOR PREDICTION OF RISK OF AKI WHEN MEASURED PRIOR TO SURGERY
Inventors: Andrea Horsch, Birgit Klapperich, Dirk Block, Alfred Engel, Johann Karl, Rosemarie Kientsch-Engel, Ekaterina Manuilova, Christina Rabe, Sandra Rutz, Monika Soukupova, Ursula-Henrike Wienhues-Thelen, Peter Kastner, Edelgard Anna Kaiser
Application number: US15943104
Filing date: 2018-04-02
Publication number: US20180231570A1
Publication date: 2018-08-16

Abstract: The present disclosure describes a method for predicting the risk of a patient to suffer from acute kidney injury (AKI) during or after a surgical procedure or after administration of a contrast medium. The method is based on the determination of the level of the biomarker IGFBP7 (Insulin-like Growth Factor Binding Protein 7) in a body fluid sample obtained from the patient prior to the surgical procedure or prior to the administration of a contrast medium. Further, the present disclosure describes a method for predicting the risk of a patient to suffer from acute kidney injury (AKI) based on the determination of the amount of the biomarker IGFBP7 (Insulin-like Growth Factor Binding Protein 7) and Cystatin C in a body fluid sample obtained from the patient. The present disclosure further encompasses kits and devices adapted to carry out the disclosed methods.

Applicant: Roche Diagnostics Operations, Inc.
Address: Indianapolis, IN, US
Country: US

The political economy of residual state ownership in privatized firms (2011)


The political economy of residual state ownership in privatized firms:Evidence from emerging marketsNarjess Boubakri a ,Jean-Claude Cosset b ,Omrane Guedhami c,d,⁎,Walid Saffar ea American University of Sharjah,UAEb HEC Montreal,Montreal,Quebec,Canada H3T 2A7c Moore School of Business,University of South Carolina,Columbia,SC,29223,USAd Memorial University of Newfoundland,St.John ’s NL,Canada A1B 3X5eOlayan School of Business,American University of Beirut,11072020Beirut,Lebanona r t i c l e i n f o ab s t r ac tAvailable online 24September 2010We investigate the political determinants of residual state ownership for a unique database of 221privatized firms operating in 27emerging countries over the 1980to 2001period.After controlling for firm-level and other country-level characteristics,we find that the political institutions in place,namely,the political system and political constraints,are important determinants of residual state ownership in newly privatized firms.Unlike previous evidence that political ideology is an important determinant of privatization policies in developed countries,we find that right-or left-oriented governments do not behave differently in developing countries.These results con firm that privatization is politically constrained by dynamics that differ between countries.©2010Elsevier B.V.All rights reserved.JEL classi fication:G32G38Keywords:PrivatizationControl structure Political institutions PerformanceEmerging markets1.IntroductionPrivatization can be de fined as the deliberate sale by a government of state-owned enterprises (SOEs hereafter)or assets to private economic agents.This shift of ownership —and control —to private owners creates a change in the prevailing incentive structures,and puts greater emphasis on pro fits and ef ficiency (Boycko et al.,1996;Shleifer and Vishny,1997).The literature provides strong evidence on the dividends of privatization and the bene fits derived from private ownership as compared to government ownership (e.g.,Megginson et al.,1994;Boubakri and Cosset,1998;Boubakri et al.,2005b,c;D'Souza et al.,2005).1The evidence also suggests that performance is negatively related to the government's continued role in the firms.For example,in their research on a set of emerging markets,Boubakri and Cosset (1998)and Boubakri et al.(2005c)find that there is greater improvement in performance after privatization,which is more pronounced when the government relinquishes its control rights.These conclusions are echoed by Chhibber and Majumdar (1999),who find that privately owned firms in India are more ef ficient than those under mixed ownership or those run as SOEs.Shleifer and Vishny (1994)conjecture that when politicians maintain control over firms,privatizing cash flow rights will only reduce ef ficiency and increase corruption.According to this argument,to ensure successful privatization,the immediate transfer of control rights should be of primary importance (Boycko et al.,1996).2Journal of Corporate Finance 17(2011)244–258⁎Corresponding author.E-mail address:omrane.guedhami@ (O.Guedhami).1Please refer to Megginson and Netter (2001)and Megginson and Sutter (2006)for an extensive review of the literature.2These ideas are largely echoed in the debate on transition,where the relative bene fits of a big bang approach compared to more gradual,sequenced reforms towards a market-oriented economy have generated wide interest among academics (see,for instance,Dewatripont and Roland,1995;Roland,2000;among others).The literature on privatization in 
transition economies is well summarized by Djankov and Murrell (2002)and Svejnar (2002).Note that we do not include firms from ex-communist countries in our study,as privatization in these countries is mainly conducted through vouchers distributed to citizens for free or at discounted prices,and not through typical methods such as asset sales or shareissues.0929-1199/$–see front matter ©2010Elsevier B.V.All rights reserved.doi:10.1016/j.jcorp fin.2010.08.003Contents lists available at ScienceDirectJournal of Corporate Financej o u r n a l h om e p a g e :w w w.e l sev i e r.c o m /l oc a t e /j c o r pfi nIn practice,however,privatization does not always seem to follow this idealized model,especially in developing (non-transition)economies.In an evaluation of the privatization experience of developing countries over the 1988to 2005period,Boubakri et al.(2008a)show that instead of immediately divesting all of their ownership,most governments divest only partially over time.Boubakri et al.(2005b)provide further evidence on this phenomenon in a study of the evolution of post-privatization ownership structure in a multinational sample of 209firms,mostly from emerging markets.They report that while privatization does lead to a drastic change in the ownership structure of SOEs,the transfer of ownership is mainly conducted through partial,staggered sales.Consistent evidence is also found by Gupta (2005),who shows that in India most privatization transactions are partial sales that leave the government in control,and by Fan et al.(2007),who show that in China the government is prohibited from selling its controlling stake in SOEs,which are thus privatized gradually by selling shares to minority investors.3A possible rationale for continued government in fluence following privatization is provided by the theoretical model of Perotti (1995),who argues that partial privatization can signal the government's commitment to market-oriented policies.By relinquishing their control rights,governments signal that privatization is credible and implies no policy risk (i.e.,risk of interference in the operations of newly privatized firms —NPFs hereafter —either through regulation or renationalization).By retaining residual ownership,governments thus signal their willingness to share in any remaining policy risk.As a result,and according to Perotti's (1995)model,partial privatization is a political choice that depends on the characteristics of the government in place,that is,on political institutions.Based on this model,Biais and Perotti (2002)show that right-wing governments,whose objective is to ensure their re-election,signal their commitment to the median voter through partial privatization and underpricing.Similarly,Jones et al.(1999)show that the terms of share issue privatizations —allocation and pricing —are structured to achieve political and economic objectives.The purpose of this study is to determine how political institutions in fluence post-privatization control structure in a large set of emerging markets.Our analysis consists of two parts.First,using hand-collected firm-level data,we examine the residual control of privatizing governments using four measures of control:direct ownership,ultimate ownership,golden shares,and political connection.To our knowledge,this is the first study to document,using firm-level data,residual state control in emerging economies along these various dimensions.We find that residual state ownership over a window of up to six years following privatization shows a 
signi ficant decline.However,the speed with which governments relinquish control appears to differ across industries and regions,and the state remains the controlling owner (holds more than 50%of the shares)in 46%of our sample firms.We further find that the method of privatization is correlated with residual state ownership;for instance,share issues on the stock market are associated with more gradual divestitures.In addition,governments tend to retain indirect control over NPFs through political connections (30.3%of our sample firms),and less frequently through golden shares (7.3%in our sample firms),which contrasts with Bortolotti and Faccio (2009),who document that 62.5%of a sample of OECD firms have golden shares (in 1996).The second part of our analysis focuses on the impact of political governance on post-privatization control structure.More speci fically,we assess how political constraints and institutions in fluence the government's residual ownership in the six years following privatization.Motivated by prior research,we conjecture that as a redistributive policy,privatization is politically costly and hence is necessarily constrained by the strength of checks and balances,by government ideology,and by the political system in place.Our multivariate analysis,which controls for other potential factors in fluencing privatization design and corporate ownership structure,shows that the decline in state ownership is indeed associated with a country's political environment.For instance,residual state ownership is higher in parliamentary systems and under regimes with greater constraints on the executive (checks).Contrary to what is documented for OECD samples,however,the ideology of the executive does not appear to affect residual ownership.These results are robust to several additional tests,and taken together suggest that it is important to control for a country's political environment and legal infrastructure when assessing the corporate governance of NPFs.Our paper makes two primary contributions to the literature.First ,our study extends prior work on the political determinants of privatization.Previous studies in this line of the literature focus largely on the country-level design of the privatization process.4For instance,Bortolotti and Pinotti (2008)conduct a country-level investigation of the determinants of privatization for 21advanced OECD economies,and show that the likelihood and extent of privatization are strongly positively associated with majority-rule political systems.Bortolotti and Faccio (2009)provide related evidence on the determinants of the control structure of OECD-country NPFs for the period 1996to 2000.However,by limiting attention to advanced economies,these papers'results may not generalize to emerging markets,where the public sector is relatively larger,where political institutions tend to exhibit less accountability,and where executives are less constrained and thus enjoy more latitude in decision making (Bortolotti et al.,2004;Klapper and Love,2004).Earlier studies by Bortolotti et al.(2001,2004)consider both developed and developing countries over the period from 1977to the mid-1990s and,using country-level data,examine the determinants of the decision to privatize,the method of privatization,revenues from divestiture,and the ownership share sold over the sample period.Yet while Bortolotti et al.'s (2001,2004)samples3Extant literature on the impact of government residual ownership on firm performance is mixed.For instance,while Gupta (2005)shows that 
the partial privatizations in India did observe improved performance despite post-privatization government control,Fan et al.(2007)show that continued government in fluence through political connections in China's partial privatizations is detrimental to performance.The authors conclude that:“…emerging economies …can learn from the experience of China's partial privatization that a government's reluctance to relinquish (or its desire to retain)even only a subset of its property rights with regard to its enterprises can have signi ficantly negative consequences on corporate governance and firm performance ”Fan et al.(2007:p.353).Evidence on newly privatized banks in developing countries (Boubakri et al.,2005a;Otchere,2005)indicates that privatization yields marginal improvements in post-privatization operating performance,which the authors attribute to continued government ownership.4We are aware of two single-country studies on the subject.Clarke and Cull (2002)examine the determinants of the decision to privatize state-owned banks in Argentina,and Dinc and Gupta (forthcoming)investigate the role of elections in privatization design in India.Additional related papers include Dastidar et al.(2007)on policy reversals in India,Sapienza (2004)on the banking sector in Italy and Beck et al.(2005)on the banking sector in Brazil.245N.Boubakri et al./Journal of Corporate Finance 17(2011)244–258246N.Boubakri et al./Journal of Corporate Finance17(2011)244–258include17emerging countries,these countries account for less than20%of their observations.Developing countries exhibit particular dynamics likely to affect the way privatization is implemented.5For example,political risk factors have more of an effect in developing countries than in developed countries(Boehmer et al.,2005).6To the extent that these factors affect government residual ownership, they are more likely to explain privatization outcomes in developing countries than samples based largely on developed markets may uncover.In this paper we extend this literature by using hand-collectedfirm-level data,runningfirm-level(in addition to country-level)analysis,considering indirect means of government control,namely,golden shares and political connection,and focusing exclusively on developing countries(27,from four geographical regions).Further,unlike Bortolotti and Faccio(2009),who consider the years1996to2000for all privatizedfirms regardless of their year of privatization,we examine the six-year window immediately following divestiture,when the influence of political factors is most likely to be strong.Second,our study tests hypotheses related to two strands of literature.The literature on the political economy of privatization posits that government commitment to market-oriented policies can be signaled by partial privatization(Perotti,1995),and that right-wing-oriented governments,which are typically more committed and lessfiscally constrained,favor less state control in the economy and hence divest control more quickly(Biais and Perotti,2002).The literature on the political determinants of corporate governance argues that a country's legal institutions are the product of choices made by politicians(Pagano and Volpin,2005),but that ownership structure and concentration are determined by the nation's political orientation rather than the prevailing legal institutions(Roe,2003).By showing that a country's political institutions explain post-privatization ownership structure over and above the role played by its legal institutions,we bridge 
these two literatures and add to previous evidence in Boubakri et al. (2005b)and Guedhami and Pittman(2006),who show that the quality of a country's legal institutions shape post-privatization ownership structure at thefirm level.The remainder of the paper is structured as follows.Section2develops our hypotheses on the relation between privatization design and political institutions.Section3discusses the data and the variables used in the study.Section4documents the post-privatization evolution of ownership structure and investigates its determinants.Section5summarizes ourfindings and concludes the paper.2.The political economy of government ownershipOur study builds on the theoretical model of Shleifer and Vishny(1994).The authors describe several sources of political benefits that make politicians less willing to give up control over publicfirms.For example,to win political support,most public enterprises employ too many people,produce goods that favor politicians rather than consumers(one such example is the Concorde supersonic aircraft;see Shleifer and Vishny,1994),locate their production in politically desirable rather than economically attractive regions,and charge prices significantly below marginal costs.In a political economy framework,the decision to divest control is determined by the trade-off between the political benefits and costs of such a decision.The political costs of privatization,which are generally the costs of redistribution and the consequent discontent and loss of voters,are usually immediate.In contrast,the benefits of privatization,which derive from improved corporate efficiency,occur only in the future. Therefore,privatization will take place when the current value of political benefits from future efficiency gains is higher than the immediate political costs of redistribution.7In such instances,privatization is likely to be implemented gradually(Banerjee and Munger,2004).Clarke et al.(2005)contend that the costs,benefits,and design of a country's privatization program are thus all affected by the country's political institutions.Describing the privatization process in several countries,Perotti and Guney(1993)find that,indeed,sales of ownership are generally gradual and staggered.They show that as the policy becomes more credible,sales will expand and revenues from privatization will rise.However,they document that even when governments seem willing to privatize,they put in place mechanisms such as golden shares and veto rights that give them ultimate control over several corporate decisions.8 Overall,prior evidence shows that,in practice,privatization is gradual and governments often retain control over thefirms. 
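The trade-off discussed above, immediate political costs of redistribution against discounted political benefits from future efficiency gains, can be written schematically. The notation below is illustrative only (it is mine, not Shleifer and Vishny's or the authors'):

\[
\text{divest at } t \;\Longleftrightarrow\; \sum_{s>t}\delta^{\,s-t}\,B_s^{\text{pol}} \;>\; C_t^{\text{pol}},
\]

where \(B_s^{\text{pol}}\) stands for the future political benefits flowing from post-privatization efficiency gains, \(C_t^{\text{pol}}\) for the immediate political costs of redistribution (lost rents, voter discontent), and \(\delta\) for the incumbent's political discount factor. On this reading, gradual sales and retained control are what one expects when the inequality holds only weakly, for instance when strong checks and balances raise the immediate political cost of each divestiture tranche.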
Perotti(1995)provides a theoretical rationale for this phenomenon.In particular,Perotti shows that gradual sales are used by governments to signal their commitment to the privatization policy,and to build investors'(and voters')confidence in their policy choices.To the extent that governments are unable to perfectly signal their commitment to future policy,however,retaining participation will be optimal as a signal of commitment as doing so indicates that the government is willing to share the residual risk with investors.Biais and Perotti(2002)also argue that credibility(commitment)is important for the government if it wants to gain the support of the median voter in future elections.5For a thorough discussion of the privatization experience of developing countries,see Megginson and Sutter(2006).6Indeed,Boehmer et al.(2005)find political factors to be relevant only in developing countries,whereas economic factors guide the decisions to privatize banks in both developing and developed countries(OECD).7The war of attrition model of Alesina and Drazen(1991)provides further insights on the costs and benefits of policy decisions by focusing on what causes economic reforms to be delayed.The authors show that stabilization programs will be delayed in presence of two interest groups(i.e.,veto players)that bargain over the distribution of reform costs.Each player has an incentive to block the reform in an effort to get the other player to“give in”first,as the player who concedesfirst is likely to bear the bulk of the reform costs.Policy changes are thus implemented only when both players benefit from the change.8In a large cross-country sample,Boubakri et al.(2005b)show that the average government stake declines substantially after privatization.In a related study by the same authors,Boubakri et al.(2005c)find that residual state ownership is higher in Asian countries compared to that in African,Latin American,and European countries.In summary,these studies suggest that there is a link between the political orientation of —and constraints on —the government on the one hand,and the post-privatization control structure on the other hand.The political economy literature captures these aspects through the following dimensions.2.1.Ideology of the executiveAccording to Biais and Perotti (2002),right-wing governments are more committed to privatization programs than left-wing governments,and thus are more likely to transfer control immediately (while selling ownership gradually),to signal their willingness to bear residual risk.Thus,according to the authors,right-wing governments are more likely to sell larger stakes (i.e.,lower residual government ownership).This leads to our first hypothesis:H1.Residual state ownership is positively (negatively)related to left-wing (right-wing)governments.2.2.The political systemA country's political system can generally be characterized by (a)the relationship between the executive and legislative branches and (b)the competitiveness of elections (Beck et al.,2001).The political system is presidential when there is a single executive elected by popular vote.Under this system,the president enjoys a large degree of independence from the legislature,and thus has a great degree of in fluence over the economic orientation of the country.In contrast,parliamentary systems are characterized by a concentration of power in the hands of the government (Persson and Tabellini,2000).Under a presidential regime,since executive accountability is lower,the executive may pass reforms 
that are costly in the short run, such as asset sales, as he has the authority to do so (i.e., veto players have less influence in the polity). On the other hand, the executive may decide that the government has more to lose from market-oriented reforms, and hence may take advantage of his independence from the legislature to delay the costs of the reforms by privatizing more gradually, allowing politicians to continue to extract large rents from the partially privatized SOEs. Thus, the relation between the political system and residual government ownership can go either way. We therefore state our second hypothesis as follows:

H2. Residual state ownership is related to the political system.

2.3. Political constraints

Beck et al. (2001: p. 169) state that "a key element in the description of any political system is the number of decision-makers whose agreement is necessary before policies can be changed." The magnitude of the costs associated with privatization (namely, the loss of political rents and privileges from owning SOEs, and the loss of voter support) is thus likely to depend on the degree of checks and balances constraining the executive. According to Tsebelis (1995, p. 289), "the potential for policy change decreases with the number of veto players…". The political science literature shows that higher checks and balances on policy makers reduce policy volatility by limiting the ability of such actors to alter policy unilaterally (Henisz, 2004; Henisz et al., 2005). This in turn enhances the credibility of "policy initiatives", which is of particular concern in redistributive policies such as privatization. We thus posit that privatizations are more gradual under governments constrained by strong checks and balances. This leads to our third hypothesis:

H3. Residual state ownership is positively related to the degree of political constraints.

3. Data and variables

In this section, we describe our sample, our empirical approach, and the variables used in the analysis.

3.1. The sample

Privatization in emerging markets provides an interesting setting in which to test our hypotheses on the importance of political institutions in explaining the control structure of privatized firms. We use a sample of 221 firms privatized in 27 emerging markets over the 1980 to 2001 period. Our sample of privatized firms is mainly drawn from Guedhami et al. (2009). This novel database is particularly suited to our research objectives as it tracks residual state ownership in the years surrounding privatization (i.e., one year prior to privatization, the privatization year, and the three years following privatization). We update this database to include ownership data for the six years after the first privatization as well as information on political connections and golden shares.

Table 1 shows that the 221 firms are located in four different geographical regions as categorized by the World Bank. In particular, 40.72% are from Africa and the Middle East, 21.27% from East and South Asia and the Pacific, 25.34% from Latin America, and 12.67% from Europe and Central Asia. The diversification across regions is important, because it comprises countries with different legal, political, and institutional environments and thus helps shed light on cross-firm differences in residual state ownership. Table 1 also reveals that the sample is diversified across industries, with 25.79% in the financial sector, 25.34% in the basic and petroleum sectors, and 16.29% in utilities. Further, 82.35% of the sample privatizations occurred in the 1990s (including 2000 and 2001), compared to 17.65% in the 1980s. These figures reflect the trend towards large-scale privatizations in emerging markets during the 1990s.9 Note that close to 74% of the firms were privatized through share issues while 26% were privatized through private sales. These private sales are implemented either through an auction or directly to private (local or foreign) investors.

9 When we examine the World Bank's updated list of privatized firms, we find that 30.48% of the firms are from Africa and the Middle East, 17.08% are from East and South Asia and the Pacific, 42.35% are from Latin America, and 10.09% are from Europe and Central Asia. We also find that 20.52% of the firms are from the financial sector and 15.97% are utilities, and that 80% of the privatization transactions occurred in the 1990s. These figures are close to those discussed in the text in reference to our sample.

Table 1. Distribution of sample privatizations.
By year (Year: Number, Percentage): 1980: 1, 0.45; 1981: 1, 0.45; 1985: 4, 1.81; 1986: 4, 1.81; 1987: 3, 1.36; 1988: 3, 1.36; 1989: 23, 10.41; 1990: 14, 6.33; 1991: 26, 11.76; 1992: 21, 9.50; 1993: 11, 4.98; 1994: 18, 8.14; 1995: 16, 7.24; 1996: 36, 16.29; 1997: 30, 13.57; 1998: 8, 3.62; 2000: 1, 0.45; 2001: 1, 0.45; Total: 221, 100.
By industry (Industry: Number, Percentage): Basic industries: 36, 16.29; Capital goods: 1, 0.45; Consumer durables: 8, 3.62; Construction: 25, 11.31; Finance/real estate: 57, 25.79; Food/tobacco: 19, 8.60; Leisure: 1, 0.45; Petroleum: 20, 9.05; Services: 1, 0.45; Textiles/trade: 10, 4.52; Transportation: 7, 3.17; Utilities: 36, 16.29; Total: 221, 100.
By region (Region (countries): Number, Percentage): Africa and the Middle East (8): 90, 40.72; East and South Asia and the Pacific (8): 47, 21.27; Latin America and the Caribbean (8): 56, 25.34; Europe and Central Asia (3): 28, 12.67; Total (27): 221, 100.
By method of privatization (Method: Number, Percentage): Private Sale: 46, 26.14; SIP: 130, 73.86; Total: 176, 100.
Notes: this table provides descriptive statistics for the sample of 221 privatized firms used in this study. We report the distribution of sample privatizations by year, industry, region (as classified by the World Bank), and method of privatization.

3.2. The variables

Appendix A provides the definition and data sources of the variables used in our study. These variables can be classified into four categories: privatization and state control variables, political economy variables, legal variables, and firm- and country-level controls.

3.2.1. Privatization and state control variables

To investigate the control structure of our sample of privatized firms, we focus on post-privatization ownership structure along the following dimensions: direct (observable) ownership, ultimate ownership, golden shares, and political connections. We hand-collect the ownership data from two main sources, namely, the offering prospectus and annual reports. We also use additional sources such as Worldscope; the Asian, Brazilian, and Mexican Company Handbooks; the Guide to Asian Companies; Bankscope; and Orbis. Our sources of ultimate ownership data are Ben-Nasr et al. (2009) for privatized firms, Faccio and Lang (2002) for Portuguese firms, and Claessens et al. (2000) for East Asian firms. Specifically, we construct the following variables. (1) STATEOWN is the residual government ownership stake following privatization. (2) CONTROL is a dummy variable that takes the value of 1 if the residual government ownership stake is greater than 50%, and 0 otherwise. (3) STATE_ULTIMATEOWN is the government's ultimate control stake following privatization. We use the approach described in La Porta et al. (1999) to determine the ultimate control structure of privatized firms. By relying on voting
rights,this approach allows us to identify the ultimate shareholders.Indeed,the government may divest more than 50%of the privatized firm but still control the firm indirectly through a pyramidal ownership structure that involves other state-owned-firms.(4)GOLDEN is a dummy variable that takes the value of 1if the government retains a golden share,and 0otherwise.10Even when a privatizing government relinquishes direct and ultimate control over the privatized firm,it may impose limits on corporate control by retaining a golden share that puts signi ficant constraints on the decisions of the firm.This practice is common in several developed countries as documented by Bortolotti and Siniscalco (2004).In contrast,very few developing countries have put such devices in place (exceptions are Brazil and Malaysia).(5)CONNECTED is a dummy variable that takes the value of 1if the firm is politically connected,and 0otherwise.Political connections emerge if the firm has politicians/bureaucrats on its board,or if the CEO is a politician.11In such cases the firm will not necessarily maximize pro fits,but rather will likely focus on the net political bene fits for politicians.The government may be more likely to divest ownership and control if the firm is politically connected because politicians will pursue their objectives on its behalf.However,if the firm is not politically connected,the government might have an incentive to privatize gradually in order to keep a hold on the firm's corporate decisions.3.2.2.Political economy variablesWe capture a country's political –economic institutions using the following variables,which come from Beck et al.'s (2001)Database of Political Institutions DPI (the World Bank)12:The ideology of the executive is measured by LEFT,a dummy variable equal to 1if the executive branch is left-wing,and 0otherwise.We distinguish between right-and left-wing governments on the grounds that right-wing governments are typically more committed to market-oriented reforms and thus are more likely to relinquish control faster (i.e.,lower residual ownership).The political system,captured by SYSTEM,is an index of the type of political system in the country:Direct presidential (0);strong president elected by assembly (1);and parliamentary (2).A presidential system is considered as having a tendency to be more authoritarian,with a strong separation of power.At the other end of the spectrum a parliamentary system exhibits no clear-cut separation of power between the legislature and executive.Given that more authoritarian governments are generally expected to be less inclined to conduct market-oriented reforms,they need to signal their commitment through gradual sales.Political constraints,measured by CHECKS,are a proxy for the degree of political constraints within the government.This variable is calculated as “the number of veto players in a political system,adjusting for whether these veto players are independent of each other,as determined by the level of electoral competitiveness in the system,their respective party af filiations,and the electoral rules ”(Beck et al.,2001,p.170).A high degree of constraints and a resulting failure of political actors to cooperate increase the level of uncertainty regarding policy outcomes,as governments are less able to achieve a consensus regarding privatization design.We thus expect to observe more gradual privatization under governments that exhibit higher political constraints.That is,we expect CHECKS to be positively related to residual government 
ownership.

3.2.3. Legal variables

We include two legal variables in our analysis, namely, the International Country Risk Guide's assessment of a country's level of corruption, CORRUPTION, and La Porta et al.'s (1998) legal origin variable, COMMON, which captures the legal origin of each country's

10 Following Bortolotti and Faccio (2009: p. 2918), we define a golden share as "the set of the state's special powers and statutory constraints on privatized firms. Typically, special powers include (i) the right to appoint members in corporate boards; (ii) the right to consent to or to veto the acquisition of relevant interests in the privatized companies; (iii) other rights such as the consent to the transfer of subsidiaries, dissolution of the company, ordinary management, etc. The above mentioned rights may be temporary or not. On the other hand, statutory constraints include (i) ownership limits; (ii) voting caps; (iii) national control provisions."
11 Politically connected firms are defined as in Boubakri et al. (2008b: p. 657) as follows: "a company is politically connected if at least one member of its board of directors (BOD) or its supervisory board is or was a politician, that is, a member of parliament, a minister or any other top appointed bureaucrat."
12 We employ this database because it covers a wide range of political variables and enables us to use observations that date back to the 1980s. Below we evaluate whether our results are sensitive to using alternative databases such as that of Botero et al. (2004) and Marshall and Jaggers (2009).
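The hypotheses H1-H3 and the variable definitions above imply a straightforward firm-level panel specification. The sketch below is illustrative only: the paper does not publish estimation code, the file name, firm identifier, and control columns are assumptions, and only the variable names STATEOWN, LEFT, SYSTEM, CHECKS, CORRUPTION, and COMMON come from the text.

```python
# Illustrative sketch only -- not the authors' code. One row per firm-year,
# with residual state ownership regressed on political-economy and legal
# variables (H1-H3). Column names "firm_id", "year", "log_assets" and the
# CSV file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("privatization_panel.csv")  # hypothetical firm-year panel

model = smf.ols(
    "STATEOWN ~ LEFT + SYSTEM + CHECKS + CORRUPTION + COMMON + log_assets + C(year)",
    data=df,
)
# Cluster standard errors by firm to allow for serial correlation within firms.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(result.summary())
# H3, for example, predicts a positive and significant coefficient on CHECKS.
```

A specification along these lines would mirror the multivariate analysis described in the introduction, where residual state ownership in the six post-privatization years is related to the ideology of the executive, the political system, and the degree of checks and balances.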

IMPACTS OF ENTRY BY COUNTERFEITERS∗Y I Q IANThis paper uses a natural experiment to test the impact of counterfeiting under weak intellectual property rights.I collect new panel data from Chinese shoe companies from1993–2004.By exploiting the discontinuity of government enforcement efforts for the footwear sector in1995and the differences in authentic companies’relationships with the government,I identify and measure the effects of counterfeit entry on authentic prices,qualities,and other market outcomes. The results show that brands with less government protection differentiate their products through innovation,self-enforcement,vertical integration of downstream retailers,and subtle high-price signals.These strategies push up authentic prices and are effective in reducing counterfeit sales.I.I NTRODUCTIONSince the early1990s,protection of intellectual property rights(IPR)has been at the top of the international trade agenda, resulting in a set of globally harmonized IPR specified in the World Trade Organization Agreement on Trade-Related Aspects of Intellectual Property Rights(TRIPS).IPR advocates believe in the stimulating effects IPR have on innovation,which fuels faster economic growth.Scholars have shown such stimulating effects to be very limited in practice,however,with inconclusive results from country case studies(Scherer and Weisburst1995; Kortum and Lerner1998;Sakakibara and Branstetter1999)and cross-country panel results that establish stimulating effects only in countries with higher development and education levels(Qian 2007,2008).Other related research discusses the effects of IPR implementations on the direction of innovative activity(Moser ∗I am grateful to my advisers Josh Lerner,Philippe Aghion,Richard Caves,and Richard Cooper for their constant advice and encouragement;toEric Anderson,Gary Chamberlain,Tat Chan,Pradeep Chintagunta,Davin Chor,Mercedes Delgado-Garcia,J.P.Dube,Sarah Ellison,Ray Fisman,Richard Freeman,Shane Greenstein,Karsten Hansen,Caroline Hoxby,Yasheng Huang, William Kerr,Michael Kremer,Don Lessard,Ben Olken,Ariel Pakes,Peter Rossi, Kathryn Spier,Scott Stern,Jim Stock,Joel Waldfogel,Lucy White,Hui Xie,and Pai-Ling Yin,and to participants of the seminars at Harvard,MIT Sloan,Kellogg, Brown,RAND,SUNY,Georgia Tech,Chicago GSB,NBER Productivity,IPR,and China Conferences for helpful feedback;to Lawrence Katz(the editor)and four anonymous referees for invaluable comments;to Ting Zhu,Mingxing Liu,and Qi Zhang for providing data;to Vincent Chen for helpful discussions in the early stages of the project;and to my beloved parents for their encouragement.Financial support from the Harvard CID Grants and IO Research Grants and cooperation from the Chinese Quality and Technology Supervision Bureau(QTSB)and the companies interviewed and surveyed are gratefully acknowledged.The results in this paper do not necessarily represent the views of QTSB.C 2008by the President and Fellows of Harvard College and the Massachusetts Institute of Technology.The Quarterly Journal of Economics,November200815771578QUARTERLY JOURNAL OF ECONOMICS2003)and foreign direct investments(Branstetter,Fisman,and Foley2006)and pinpoints the shortcomings of current patent systems(Gans,Hsu,and Stern2003;Jaffe and Lerner2004). 
Although the shift in policy focus from lowering trade barriers to an international rule of law for IPR is highly controversial (Lanjouw and Cockburn2001;McCalman2001),curbing coun-terfeiting through tightening IPR protection has been a common practice worldwide to foster brand values.Despite this practice being so common,little research attention has been paid to the flip side of IPR,that is,determining the economic principles underlying counterfeit infringements and the way in which IPR owners sustain themselves in the absence of governmental IPR monitoring.In this paper,I identify a natural experiment that allows me to investigate how markets function with less government IPR enforcement.This natural experiment was created by the Chinese government’s emergent reallocation of IPR enforcement resources away from monitoring footwear and fashion products to the other sectors in response to several food-poisoning and gas-explosion accidents in the early1990s.Counterfeiters massively entered the Chinese footwear industry in the mid-1990s after the policy shift,infringing on brands of both multinational corporations and Chinese enterprises.Within the footwear industry,the Chinese leather-and sports-shoe sectors include most of the companies that have their own brands and account for approximately US$6 billion in annual sales.Some Chinese brands,such as Anta and Li-ning,occupy a Chinese-market share close to Nike’s. My data set includes both domestic brands and multinational brands operating in China and is supplemented by the Chinese Industrial Census database,an eBay-in-China data set,product catalog information,and interviews.The natural experiment and the unique panel data set facilitate a systematic analysis of a wide range of economic issues pertaining to counterfeiting.I probe into the origins and impacts of counterfeits,which claim brand names that they do not own,and I propose potential remedies.Many insights emerge from this study.Brands with less government protection seek to differentiate their products and self-enforce their IPR through several strategic moves:investing in product attributes that are difficult to imitate,such as shoe-surface materials,technology,and elegant appearance;investing in nonprice signals including but not limited to licensed outletsIMPACTS OF ENTRY BY COUNTERFEITERS1579 (vertical integration);establishing brand-protection offices to monitor the market and to assist the government with en-forcement;and even employing subtle price signals.1All these strategies lead to higher authentic prices.Ifind that the prices set by the authentic manufacturers that were infringed upon rose by45%,on average,two years after their counterfeiters entered the market.The prices of the generic brands without counterfeits followed a smooth and slightly upward time trend, while the counterfeit prices remained level.I additionally show that investing in enforcement activities and switching from wholesale to largely retail distribution are effective in deterring counterfeit entry or at least in reducing counterfeit sales.Although no accuratefinancial value of global counterfeit-ing is available,estimates exist to reflect its massive nature.The World Customs Organization(WCO)estimates that over500bil-lion euros of traded world merchandise in2004may have been counterfeit(WCO2004).Therefore,the effects of counterfeits on market prices and authentic-producer marketing strategies are pertinent issues to address.However,prior literature on coun-terfeits is scarce.Darby and 
Karni(1973)theorize the reasons for and determinants of using fraud information as a means of attracting customers.They suggest utilizing branding and client relationships as tools for monitoring quality but do not discuss what happens when brands are counterfeited.For example,coun-terfeiters’attempts to infringe upon brands may generate asym-metric information complexities.Previous literature on pricing under asymmetric information on entrants’quality are confined to analysis with exogenous quality levels(Nelson1974;Milgrom and Roberts1986;Metrick and Zeckhauser1999).Additionally, there is a dearth of empirical studies on counterfeits or under-ground economics in general.The illicit nature of counterfeiting implies under-the-table activity and difficult-to-measure effects, and past economic studies on illegal behavior have mainly relied on self-reported data(for example,Levitt and Venkatesh[2000] on a drug-selling gang).This study provides a framework to synthesize various theo-ries on quality uncertainties and endogenous sunk costs(ESC),panies’self-enforcement is prevalent even in advanced and highly institutionalized contexts.For instance,the luxury house LVMH assigned ap-proximately60full-time employees and spent more than US$16million on inves-tigations and legal fees in2004alone.Many brands(e.g.,Fendi and Abbott)use holograms to distinguish their products from forgeries.1580QUARTERLY JOURNAL OF ECONOMICSa term coined by Sutton(1991),2and serves as a fairly clean empirical test of the theory.I collect detailed annualfinancial information from31domestic and multinational shoe companies of different sizes and brands in China to analyze the impacts of counterfeits over a recent twelve-year period.3In addition,I gather external data to verify self-reported data and to augment analysis drawn from the Chinese Industrial Census database,an eBay-in-China data set,and product catalog information.I then provide suitable instruments for various levels of counterfeit en-try and sales.The panel structure of my data enables me to better correct potential omitted-variable bias and helps to improve the precision of the entry-effect estimator.The study of multiple com-panies of different sizes and brands further makes the results gen-eralizable.Myfindings that authentic companies strive to upgrade quality,invest in self-enforcement,and build company stores af-ter counterfeiters enter demonstrates the value of disentangling quality uncertainties.These strategies can also broadly be con-sidered as ing a stylized vertical differentiation model with asymmetric information,I thus demonstrate that thefind-ings are consistent with theory predictions and have generalizable implications.The rest of the paper is organized as follows.Section II de-scribes the empirical research design and identification strategies. 
Section III introduces data, followed by empirical results in Section IV. Section V provides theoretical foundations for the findings, and Section VI concludes and discusses policy implications. The remaining details are available in Qian (2006) and in the QJE online supplementary materials for this article.

II. STUDY DESIGN

In testing the impacts of entries by counterfeiters, the ideal experiment would randomly assign counterfeit entry for a set of brands in a large pool while other brands would be kept immune from counterfeiting. If I define the authentic company's strategy profile as σ = (Quality, Enforcement Expenditures, Advertisement, Licensed Company Stores, Price), then the question of entry effects on each element in σ could simply be addressed with OLS regressions of the element on the binary indicator variable of entry. Formally,

\log(\sigma_{at}) = \beta_0 + \beta_1 \times \mathrm{Counterfeit}_{at} + \beta_2^{\top} \times \mathrm{Year\ Dummies}_{t} + \beta_3^{\top} \times \mathrm{Firm\ Dummies}_{a} + \epsilon_{at}, \quad (1)

where \sigma_{at} denotes the response variable of authentic company a in year t, and the indicator variable Counterfeit_{at} equals 1 if there are positive amounts of counterfeits for brand a in year t. The fixed effects for year (12 years) and firm (31 branded companies) control for year-specific confounding factors and time-invariant firm attributes.

2. Athey and Schmutzler (2001) and Ellickson (2004) also theorize and identify examples that show quality investments as strategic complements and ways of sustaining market dominance.
3. I collected and analyzed city-level data as well, and I found that authentic prices do not vary a great deal across regions. They differ by about US$1, which approximately accounts for transportation costs. There is a lack of variation in counterfeit entry across regions. However, I still conduct city-level IV estimations as robustness checks.

The exogeneity of counterfeit entry, however, may not hold in reality, because entry is more likely to occur if the original producer has a larger markup, easier-to-copy quality, or a looser trademark management team. These unobserved time-variant firm characteristics are not captured by the fixed effects, resulting in correlation between Counterfeit_{at} and \epsilon_{at} in equation (1). Simple OLS without accounting for this entry endogeneity will lead to biased \beta_1 estimates.4 Given these concerns, I seek appropriate instruments for the counterfeit entry variable to identify its effects. The IV (instrumental variable) strategy relies on a natural experiment in Chinese IPR enforcement change and its differential impacts on different brands. The remainder of this section explains the necessary details.

The advantage of studying the Chinese shoe industry primarily comes from the natural experiment, which stems from an enforcement change around the year 1995, due to external shocks exogenous to the shoe sector. In China, copyright and trademark laws were restored after 1976. In 1985, the Chinese government established the Quality and Technology Supervision

4. The omitted variable bias potentially enters OLS in two directions: an upward bias due to brand effects, which correlate positively with the price outcome and counterfeit entry; and a downward bias due to internal management effects, which are positively correlated with the price outcome but negatively correlated with the brand's counterfeit entry. In particular, a brand with good internal management may effectively ward off counterfeits as well as maintain high-standard products with relatively high prices. In fact, when log prices are simply regressed on the fake entry dummy and a year trend, the entry
coefficient is very large(0.78). Although the companyfixed effects help control for the omitted brand effects,they do not control for the time-variant management effects.The resulting OLS entry coefficient is,therefore,biased downward,as compared to the IV estimates.1582QUARTERLY JOURNAL OF ECONOMICSBureau(QTSB),5with a branch in each city and joint forces na-tionwide,to supervise product quality and outlaw counterfeit lo-calities.Due to a series of accidents arising from low-quality or counterfeit agricultural products and gas tanks,the Chinese gov-ernment issued notifications around1995to enhance quality su-pervision and combat counterfeits in seven main sectors prone to hazardous materials.6The majority of the Bureau workforce and funding went into these sectors,leaving loopholes for counterfeits to enter the footwear industry.For instance,in the early1990s, approximately10%–12%of the Bureau’s resources were devoted to the footwear sector;this number,however,fell to2%after1995 (QTSB yearbooks).As seen in the data,authentic companies ex-perienced significant counterfeit entry after this loosening of gov-ernmental monitoring and enforcement,with the highest level of entry occurring in1996.As expressed in interviews,authentic shoe producers were surprised at the massive entry of counterfeits but soon reacted. The branded companies that had been infringed upon set up their own brand-protection offices to compensate for the lack of gov-ernment monitoring.As Figure I shows,the drop in government trademark-enforcement expenditures in the shoe sector corre-sponds to both massive counterfeiting entries and investments in self-enforcement by authentic producers in the sample after 1995.The solid line in Figure I plots the total private deflated en-forcement expenditures of these branded companies over time.7 The companyfixed-effects regression of the log of company en-forcement investments on a legislation dummy is positive and significant at the5%level(coefficient=3.2).In light of the enforcement changes,which are shown to have instigated massive counterfeit entries,the ideal experiment5.It was recently renamed the“Administration of Quality Supervision,In-spection and Quarantine.”The Bureau enlarged its personnel and funding in1991 in a joint effort with legislation to protect IPR and monitor product quality.6.These sectors included pharmaceuticals;agricultural products(including fertilizers,pesticides,and other materials or instruments);fiber and cotton(partic-ularly bacteria-infected or bleached counterfeits);food;tobacco;alcohol;and gas. 
Notification No. 52 of late 1994 highlighted fiber and cotton quality supervision, and Notification No. 10 of early 1996 highlighted gas and other major hazardous products.
7. The self-enforcement costs include all costs associated with brand-protection activities in each brand-protection office. They consist of expenses for sending employees to monitor the market, working with the government to track down counterfeit localities, and organizing or engaging in anti-counterfeit conferences, etc. Litigation costs are included but, in accordance with the law, are mostly paid by the party that lost in court (in this case, the counterfeiters).

[Figure I. Public and Private Enforcement Expenditures. Vertical axis: 10,000 constant yuan (or pairs). Note: This figure plots the trademark enforcement expenditure (in 10,000 constant Chinese yuan, deflated by the WDI CPI index with 1995 as the base year) in the shoe sector by the Quality and Technology Supervision Bureau (QTSB) and the sampled companies, and the total number (in 10,000 pairs) of counterfeits in the sample each year. Only three data points are available for the QTSB expenditure in the shoe sector.]

would translate into randomly loosening IPR enforcements for a group of brands in China at a certain time, while leaving the IPR enforcements of the other brands unchanged. Although the government enforcement change mainly presents itself with time variations, I was able to bring in brand-level variations through measuring the relationship between each sampled authentic producer and the government. Pertinent details will be discussed in the following paragraphs, but the bottom line is straightforward: After the enforcement-legislation change, the monitoring of counterfeits became decentralized, resulting in company-level supervision, carried out primarily through authentic manufacturers' own initiatives to protect their own brands. However, the authentic companies still had to rely on the government to outlaw the counterfeit localities once these were discovered by their own enforcement employees, because only the government had this authority. Therefore, companies that had a poor relationship with the government received less attention and experienced more counterfeits. I thus exploit the interaction between the enforcement-legislation change and a proxy for the relationship between an
outcomes,except when it affects counterfeiting.Based on these criteria,the number of days it took a branded company to obtain ISO certificates nationwide is the most appropriate proxy.Since the late1980s,all registered companies in China have been mandated to meet the standards set by the International8.Circumstances under which counterfeiters of a brand are more likely to enter for exogenous reasons that are not related to the brand holder’s price and quality prospects.9.Chinese news agencies broadcast counterfeit-confiscation news and con-sequently counterfeiters are likely to know which brands are harder to infringe upon.10.Previous literature on political connectedness largely measures country-level corruptions.Fisman(2001)pioneered such company-level measurement by linking the response of the share returns offirms traded on the Jakarta stock ex-change to a string of rumors about the adverse state of President Suharto’s health. However,it is hard to identify a politicalfigure similar to Suharto in the Chinese context that I am examining.The shareholders or directors of the sampled shoe companies also did not participate in electoral votes,a scenario used in Khwaja and Mian(2005)to document the political connectedness offirms in Pakistan. The World Bank World Business Environment Survey(WBES)measures political connectedness with managers’impressions of how fast things get done in dealing with governments(Batra,Kaufmann,and Stone2003).The only other alternative I found is a recent paper by Mobarak and Purbasari(2006),who propose that whether an Indonesian company acquired import licenses reflects its political con-nectedness.In the event that the political-connectedness element might play a role in the Chinese import-licensing system,I gather data for the sampled compa-nies(see online Appendix A).However,I use them only in supplemental analyses because they do not reflect a company’s relationship with the government agency of interest,that is,the agency that is in charge of IPR enforcements and that influences counterfeit entry and quantities.IMPACTS OF ENTRY BY COUNTERFEITERS1585 Standards Organization(ISO).11For the shoe industry,the ISO sets standards for the basic equipment a company uses and the basic rules pertaining to the environment and labor.The QTSB is in charge of ISO certification.For some companies,one month was sufficient to obtain the ISO certificate,but for others,the applica-tion date and grant date were more than300days apart.Of the companies that spent a long time fulfilling the ISO requirements, some were small,and others medium or large.Through close readings of documents and multiple interviews with companies and the QTSB,I was able to confirm that the stan-dards were rather basic and the differences in application times were largely due to bureaucracy.Notably,the standard for compa-nies to be registered as legal enterprises surpassed the basic qual-ity standard specified by the ISO.The companies also had to pass internal qualifications as outlined by the ISO before submitting their applications to the QTSB(QTSB2000).Thus,the variation in application time is largely due to relationships and not prod-uct quality or other company factors.Each registered branch of a branded company needed to apply for an ISO certificate through its local QTSB office.12I use the number of work days it took each branded company to obtain ISO certificates,averaged across all the relevant cities where that company had production or man-agement branches,as a proxy for the company’s 
relationship with the government in the national market.This is a more objective relationship proxy than managers’impressions recorded in World Bank surveys(Batra,Kaufmann,and Stone2003).The sampled shoe companies had to comply with two sets of ISO standards,one established in1994and the other in2000.I obtain each company’s application and grant dates for an ISO certificate corresponding to each set of standards and calculate the number of workdays between each pair of application and grant dates.I then construct a variable that equals the number of workdays between the application and grant dates for the1994 certificate through the year2000and that equals the number of workdays to obtain the2000certificate from the year2001on. The correlation between the number of days to obtain both sets of11.This differs from the United States,where companies adopt ISO standards voluntarily.12.For instance,the brand Senda originated in Yancheng city and applied for an ISO certificate there;its subsidiaries in Shanghai,Jianhu,Beijing,Jilin, etc.,also applied for and obtained ISO certificates from the corresponding QTSB branches.1586QUARTERLY JOURNAL OF ECONOMICSISO certificates is very high,0.96,suggesting that the relationship between a company and the government was rather steady in the period under examination.Further,there are more variations in the ISO indicator across brands orfirms within the same local area than across regions.When I regress the ISO values on the series of dummies indicating the city of application,none of the cities carry statistically significant coefficients.The p-values of these coefficients range from.23to.64.13There is also no significant correlation between this relation-ship proxy and the company’s size,sales,product quality,or pro-duction costs in my data.The largest correlation amounts to only 0.08.The manager of a famous Chinese-branded company com-plained about its poor relationship with the QTSB and the con-sequent slow response infighting its counterfeits:“Our company bases success on our ability and product quality and[we]never cared to work on relationships[guanxi].It is frustrating that we have to go through slow processes in some applications such as the ISO and wait months before the government outlaws the reported localities of our counterfeits.”14In addition,Chinese consumers hardly notice these ISO certificates.Therefore,the ISO does not signal product quality and is not likely to influence prices in any way other than through affecting counterfeit entry and quantity.15 Figure II exhibits a generally positive relationship between the average number of workdays a branded company took to ob-tain the ISO certificates and the mean quantity of counterfeit sales it experienced after1995.This correlation remains significant in regressions of counterfeit entry or sales on ISO days,after taking out company-and year-fixed effects.Section IV.B provides more data to support IV validity.III.D ATAI collected data through a combination of external data sources and original survey research.I acquired the Chinese Bureau of Statistics Industrial Census database,which contains detailedfinancial information and basic company characteristics13.I also regress the number of days for passing each of the two sets of ISO standards,respectively,on the application city’s per-capita income,growth rate, CPI,and income inequality measure for the relevant years andfind no significant coefficients.14.I have translated these quotations into English from the original Chinese.15.Many sectors 
are privatized in China, the footwear industry included. None of the companies in my sample is state-owned. Shoe prices are also freely set by supply and demand.

[Figure II. Plot of Mean Fake Sale Quantity against the Relationship Proxy Post Enforcement Change. Note: This figure shows a general positive correlation between the average counterfeit sale quantity (in 10,000 pairs) and the number of workdays it took the corresponding branded company to obtain the ISO certificate from the government, which is a proxy for the company's relationship with government. The longer it takes to obtain an ISO certificate (higher values for the "ISO" variable), the worse is the relationship.]

(such as size and age) for all registered companies in China. However, only five years of census data were available: 1995 and 1998–2001. Although the database includes the main products of each company, it does not contain any data on prices. I did not find systematic information about counterfeiting in the existing Chinese or international data sources. It was, therefore, necessary to supplement the readily available data with my own survey research in China.

The data I gathered consist of detailed information taken from companies' annual financial statements and other relevant company records on 31 branded companies and their corresponding counterfeits for the years 1993–2004. These companies were surveyed and interviewed through a stratified random sampling method. Guided by the research design, I obtained data on the average prices and costs of their three product-quality levels (high, medium, low), their brands' total domestic sales, the number of personnel and amount of expenditure used for trademark enforcement, advertisement expenditure, and the total number of licensed company stores. Descriptive statistics for variables of interest are displayed in Table I. In my surveys, I specifically
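Equation (1) and the IV strategy described in Section II can be made concrete with a short sketch. The code below is an assumption-laden illustration, not the author's estimation code: the data file and column names (brand, year, log_price, counterfeit, iso_days, post1995) are hypothetical, and only the structure (entry dummy, firm and year fixed effects, an instrument built from the ISO-days relationship proxy interacted with the post-1995 enforcement change) follows the text.

```python
# Illustrative sketch only -- mimics the design in Section II with a
# hypothetical firm-year panel.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("shoe_panel_1993_2004.csv")  # hypothetical data file

# Equation (1): OLS of a log outcome on the counterfeit-entry dummy with
# firm and year fixed effects, clustering by brand.
ols_fe = smf.ols("log_price ~ counterfeit + C(brand) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["brand"]}
)

# First stage: entry predicted by the instrument (ISO days x post-1995).
df["instrument"] = df["iso_days"] * df["post1995"]
first = smf.ols("counterfeit ~ instrument + C(brand) + C(year)", data=df).fit()
df["counterfeit_hat"] = first.fittedvalues

# Second stage (manual 2SLS for exposition only; a dedicated IV routine
# would be used in practice to obtain correct standard errors).
second = smf.ols("log_price ~ counterfeit_hat + C(brand) + C(year)", data=df).fit()
print(ols_fe.params["counterfeit"], second.params["counterfeit_hat"])
```

The comparison of the two entry coefficients is the point of the exercise: as footnote 4 explains, the uninstrumented fixed-effects estimate is expected to be biased relative to the IV estimate when time-varying brand or management effects are correlated with entry.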


The keys to employee engagement--Research of antecedents to Job engagement andOrganization engagementChapter One1.0IntroductionThe survey was to test a model of the antecedents of job and organization engagements based on social exchange theory.1.1B ackground of the ResearchEmployee engagement is an idea whose time has come. Which is getting more attention than our before. It was found that, individuals with their employer and will, therefore, be more likely to report more positive attitudes and intentions toward the organization. Engagement is an individual-level construct and if it does lead to business results, it must first impact individual-level outcomes. Although neither Kahn (1990) nor May et al (2004) included outcomes in their studies, Kahn (1992) proposed that engagement leads to both of individual and organization outcomes. There are a number of reasons to expect engagement to be related to work outcomes. For starters, the experience of engagement has been described as a fulfilling, positive work-related experience and state of mind (Schauteli and Bakker2004).It was also found that engagement was negatively related to turnover intention and mediated the relationship between job resources and turnover intention. There is some empirical research that has reported relationships between engagement and work outcomes. For example, engagement has been found to be positively related to organizational commitment and negatively related to intention to quit, and believed to also be related to job performance and extra–role behavior (Schaufrli and Bakker,2004; Sonnentag,2003). These positive experiences and emotions are likely to result in positive work outcomes. As noted by Schaufeli and Bakker (2004), engaged employees likely have a greater attachment to their organization and a lower tendency to leave their organization.Engagement is not an attitude; it is the degree to which an individual is attentive andabsorbed in the performance of their roles. Engagement has to do with how individuals employ themselves in the performance of their job. Furthermore; engagement involves the active use of emotions and behaviors in addition to cognitions. (2004 12 p Harter).The driving force behind the popularity of employee engagement is that it has positive consequences for organizations. As indicated earlier, there is general belief that there is a consequence between employee engagement and business results. (Harter et al, 2002).1.2P roblem StatementThe purpose of this study is to focus on the antecedents of two types of employee engagement: job and organization engagements. It was been found that previous research has focused primarily on engagement in one’s job. On the other hand, there is evidence that one’s degree of engagement depends on the role in quest ion (Rothbard, 2001). Thus it is possible that the antecedents depend on the types of engagement. Recent years, on the one hand, there has been a great deal of interest in employee engagement. Many have claimed that employee engagement predicts employee outcomes organizational success and financial performance (Bates, 2004). At the same time, it has been found that employee engagement is on the decline and there is a deepening disengagement among employee today. (Bates, 2004; Richman 2006) It has even been referred to as an “engagement gap” that is costing businesses $300billion a year in lost productivity (Bates, 2004; Johnson, 2004; Kowalski, 2003).Dissatisfaction is a motivator for change. 
This statement not only refers to the manager who wants to chang e some one else’s behavior, but it also applies to these who are the targets of change. People are most responsible to learning when they are moderately dissatisfied. Too little and they don’t want to bother too much is paralyzing. Therefore, if you want t o increase a person or group’s readiness to change you need to manage their dissatisfaction. (Robert E. Linneman and John L. Stanton, Jr.1993)1.3R esearch Objectives●To identify the factors that lead to employee engagement in an organization.●To identify the factors that lead to job employee engagement●To identify the factors that lead to organization employee engagement.1.3.1 Research question1.4 Significance of the studyIt has been prove that engaging and involving employee make good business sense and building shareholder value. Negative workplace relationships may be a big part of why so many employees are not engaged with their jobs.It was also found that, employee engagement has become more vital than our before. Which has become a hot topic in recent years among consulting firms and in the popular business press? Meanwhile, employee engagement has rarely been studied in the academic literature and relatively little is known about its antecedents of job and organization engagements based on social exchange theory. There is a surprising dearth of research on employee engagement in the academic literature (Robinson et al, 2004)Although employee engagement has become a hot topic among practitioners and Consultants, there has been practically no empirical research in the organizational Behavior literature. This has led to speculation that employee engagement might just. Be the “flavor of the month” or a fad with little basis in theory and research. The results of this study suggest the following: there is a meaningful distinction between job engagement and organization engagement; a number of antecedent variables predict job and organization engagement.1.5 The importance for individual and organizationsOne way for individuals to repay their organization is through their level of engagement. That is, employees will choose to engage themselves to varying degrees And in respo nse to the reso urces they receive fro m their o rganizatio n. Bringing oneself more f ully into one’s work roles and devoting greater amounts of cognitive, emotional, and physical resources is a very profound way for individuals to respond to an organization’s actions.It is more difficult for employees to vary their levels of job performance given that performance is often evaluated and used as the basis for compensation and other administrative decisions. Thus, employees are more likely to exchange their engagement for resources and benefits provided by their organization. This is consis tent with Robinson et al.’s (2004) description of engagement as a two-way relationship between the employer and employee.1.5.1Why engagement is important to employee?Engaged employees perform better. There are some reasons why engaged workersperform better than non-engaged workers. Engaged employees often experience positive emotions, including happiness, joy, and enthusiasm; experience better health; create their own job and personal resources; and transfer their engagement to others. Importance of Employee Engagement. Recent research has shown that engaged employees often experience positive emotions (Schaufeli and Van Rhenen, 2006), and this may be the reason why they are more productive. 
Happy people are more sensitive to opportunities at work, more outgoing and helpful to others, and more confident and optimistic (Cropanzano and Wright, 2001). Research also suggests that engagement is positively related to health, which would imply that engaged workers are better able to perform well. Schaufeli et al. (n.d.) have shown that engaged workers report fewer psychosomatic complaints than their non-engaged counterparts. Similarly, Demerouti et al. (2001) found moderate negative correlations between engagement (particularly vigor) and psychosomatic health complaints (e.g. headaches, chest pain). In addition, Hakanen et al. (2006), in their study among Finnish teachers, showed that work engagement is positively related to self-rated health and work ability.

One important reason why engaged workers are more productive may be their ability to create their own resources. Research with Fredrickson's (2001) broaden-and-build theory has shown that momentary experiences of positive emotions can build enduring psychological resources and trigger upward spirals toward emotional well-being. Positive emotions not only make people feel good in the moment, but also help them feel good in the future (Fredrickson and Joiner, 2002).

Employee engagement is a psychological commitment to take ownership of one's work and to go the extra mile. Engaged employees learn more, grow faster, and show more initiative than employees who are not engaged. They are committed to finding solutions, solving problems, and improving business processes. Therefore, employee engagement is strongly linked to business performance. How can there be big differences in engagement between teams of the same company, when all employees in those teams are subject to the same organizational culture, the same HR policies, and the same economic conditions? The difference is the manager: the manager has the single greatest impact on the engagement and happiness of an employee.

In most organizations, performance is the result of the combined effort of individual employees. It is therefore conceivable that the crossover of engagement among members of the same work team increases performance. Crossover, or emotional contagion, can be defined as the transfer of positive (or negative) experiences from one person to another (Westman, 2001). If colleagues influence each other with their work engagement, they may perform better as a team. Engaged workers are more productive than disengaged workers, and companies that can successfully engage their employees can achieve higher levels of performance and deliver greater returns to shareholders. Unfortunately, actively disengaged workers undermine the work of engaged workers.

1.5.2 Why is engagement important to employers?
Employer engagement can help schools in many ways, including: contributing to work-related learning; providing real contexts for vocational learning; contributing to student support (e.g. through mentoring); contributing to community cohesion; and meeting statutory requirements for work-related learning, information, advice, and guidance.

Although engagement is so important to employers, employee engagement has been defined in many different ways, and the definitions and measures often sound like other better-known and established constructs such as organizational commitment and organizational citizenship behavior (Robinson et al., 2004). Most often it has been defined as emotional and intellectual commitment to the organization (Baumruk, 2004; Richman, 2006; Shaw, 2005) or as the amount of discretionary effort exhibited by employees in their jobs (Frank et al., 2004).

1.6 Study hypotheses
H1. Rewards and recognition will be positively related to (a) job engagement and (b) organization engagement.
H2. Perceptions of procedural justice will be positively related to (a) job engagement and (b) organization engagement.
H3. Perceptions of distributive justice will be positively related to (a) job engagement and (b) organization engagement.

1.6.1 Study hypotheses
For instance, H1 states that rewards and recognition will be positively related to (a) job engagement and (b) organization engagement. A sense of return on investment can come from external rewards and recognition in addition to meaningful work. Therefore, one might expect that employees will be more likely to engage themselves at work to the extent that they perceive a greater amount of rewards and recognition for their role performances. Maslach et al. (2001) have also suggested that while a lack of rewards and recognition can lead to burnout, appropriate recognition and reward is important for engagement. In terms of SET, when employees receive rewards and recognition from their organization, they will feel obliged to respond with higher levels of engagement; hence H1.

Distributive and procedural justice. The safety dimension identified by Kahn (1990) involves social situations that are predictable and consistent. For organizations, it is especially important to be predictable and consistent in terms of the distribution of rewards as well as the procedures used to allocate them. While distributive justice pertains to one's perception of the fairness of decision outcomes, procedural justice refers to the perceived fairness of the means and processes used to determine the amount and distribution of resources (Colquitt, 2001; Rhoades et al., 2001). A review of organizational justice research found that justice perceptions are related to organizational outcomes such as job satisfaction, organizational commitment, organizational citizenship behavior, withdrawal, and performance (Colquitt et al., 2001). However, previous research has not tested relationships between fairness perceptions and employee engagement. The effect of justice perceptions on various outcomes might be due in part to employee engagement. In other words, when employees have high perceptions of justice in their organization, they are more likely to feel obliged to also be fair in how they perform their roles by giving more of themselves through greater levels of engagement. On the other hand, low perceptions of fairness are likely to cause employees to withdraw and disengage themselves from their work roles. Fairness and justice are also among the work conditions in the Maslach et al. (2001) engagement model: a lack of fairness can exacerbate burnout, while positive perceptions of fairness can improve engagement (Maslach et al., 2001). Therefore, H2 and H3 are as follows:
H2. Perceptions of procedural justice will be positively related to (a) job engagement and (b) organization engagement.
H3. Perceptions of distributive justice will be positively related to (a) job engagement and (b) organization engagement.
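The hypotheses above all predict simple positive associations, so once the questionnaire data are collected they can be screened with bivariate correlations before any fuller modelling. The following Python sketch is only an illustration of that first step under assumed data: the column names and the small made-up data frame are invented for the example, not taken from the study, and the real analysis would use the roughly 100 collected responses.

```python
import pandas as pd
from scipy import stats

# Illustrative respondent-level scale scores (1-5 Likert averages); the column
# names and values are invented for this sketch, not the study's actual data.
df = pd.DataFrame({
    "rewards_recognition":  [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.0, 3.6],
    "procedural_justice":   [3.0, 4.3, 2.9, 4.2, 3.7, 2.7, 3.8, 3.5],
    "distributive_justice": [2.9, 4.0, 3.1, 4.4, 3.6, 2.6, 3.9, 3.3],
    "job_engagement":       [3.1, 4.2, 2.7, 4.6, 3.8, 2.8, 4.1, 3.4],
    "org_engagement":       [3.3, 4.4, 2.6, 4.5, 3.9, 2.9, 4.0, 3.5],
})

antecedents = ["rewards_recognition", "procedural_justice", "distributive_justice"]
outcomes = ["job_engagement", "org_engagement"]

# H1-H3 each predict a positive association, so inspect the sign and p-value of
# the Pearson correlation for every antecedent-outcome pair.
for a in antecedents:
    for o in outcomes:
        r, p = stats.pearsonr(df[a], df[o])
        print(f"{a} -> {o}: r = {r:.2f}, p = {p:.3f}")
```

A significant positive correlation for a given antecedent-outcome pair would be consistent with, though not proof of, the corresponding hypothesis.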
1.7 Implications for this research
The results of this study suggest that employee engagement is a meaningful construct that is worthy of future research. There are several avenues to consider. One area would be to investigate other potential predictors of job and organization engagement. The present study included a number of factors associated with Kahn's (1990, 1992) and Maslach et al.'s (2001) engagement models. However, there are other variables that might also be important for both job and organization engagement. For example, human resource practices such as flexible work arrangements, training programs, and incentive compensation might also be important for engagement. Future research could include a broader range of predictors that are linked to particular types of role engagement. Along these lines, future research should attempt to flesh out the types of factors that are most important for engagement in different roles (e.g. job, organization, and group).

1.8 Structure of the study
It has also been confirmed that the most engaged workplaces, compared with the least engaged, are much more likely to have lower employee turnover, higher-than-average customer loyalty, and above-average productivity and earnings.

2.0 Introduction
Employee engagement, also called engagement or worker engagement, is a business management concept. An "engaged employee" is one who is fully involved in, and enthusiastic about, his or her work, and thus will act in a way that furthers their organization's interests. "Employee engagement is a measurable degree of an employee's positive or negative emotional attachment to their job, colleagues and organization which profoundly influences their willingness to learn & perform at work." Thus engagement is distinctly different from satisfaction, motivation, culture, climate, and opinion, and it is very difficult to measure.

As a manager, you can use the points above as a shortlist of factors to attend to. As you engage each employee in a conversation, look out for each of these factors when they are mentioned. Put systems and processes in place to fix any shortcomings, and communicate openly and often. By working towards having satisfied workers, you are setting the conditions for giving them a reason to stay with your organization and to deliver peak performance.

2.1 Definitions of employee engagement
Employee engagement can be defined as an individual's degree of positive or negative emotional attachment to their organization, their job, and their co-workers (Scarlett Survey, 2011).
Schaufeli et al. (2002, p. 74) define engagement "as a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption." They further state that engagement is not a momentary and specific state, but rather "a more persistent and pervasive affective-cognitive state that is not focused on any particular object, event, individual, or behavior" (p. 74).

According to Kahn (1990, 1992), engagement means being psychologically present when occupying and performing an organizational role. In the academic literature, a number of definitions have been provided. Kahn (1990, p. 694) defines personal engagement as "the harnessing of organization members' selves to their work roles; in engagement, people employ and express themselves physically, cognitively, and emotionally during role performances." Personal disengagement refers to "the uncoupling of selves from work roles; in disengagement, people withdraw and defend themselves physically, cognitively, or emotionally during role performances" (p. 694). Rothbard (2001, p. 656) also defines engagement as psychological presence but goes further to state that it involves two critical components: attention and absorption. Attention refers to "cognitive availability and the amount of time one spends thinking about a role," while absorption "means being engrossed in a role and refers to the intensity of one's focus on a role."

Burnout researchers define engagement as the opposite, or positive antithesis, of burnout (Maslach et al., 2001). According to Maslach et al. (2001), engagement is characterized by energy, involvement, and efficacy, the direct opposites of the three burnout dimensions of exhaustion, cynicism, and inefficacy.

2.2 The structure of the antecedents of employee engagement

2.2.1
At the core of the model are two types of employee engagement: job and organization engagement. This follows from the conceptualization of engagement as role-related (Kahn, 1990; Rothbard, 2001); that is, it reflects the extent to which an individual is psychologically present in a particular organizational role. The two most dominant roles for most organizational members are their work role and their role as a member of an organization. Therefore, the model explicitly acknowledges this by including both job and organization engagement. This also follows from the notion that people have multiple roles and, as suggested by Rothbard (2001) as well as May et al. (2004), research should examine engagement in multiple roles within organizations.

2.2.1.1 Antecedents of employee engagement
Although there is little empirical research on the factors that predict employee engagement, it is possible to identify a number of potential antecedents from Kahn's (1990) and Maslach et al.'s (2001) models. While the antecedents might differ for job and organization engagement, identical hypotheses are made for both types of engagement, given the lack of previous research and this being the first study to examine both job and organization engagement. Job characteristics. Psychological meaningfulness involves a sense of return on investment of the self in role performances (Kahn, 1992). According to Kahn (1990, 1992), psychological meaningfulness can be achieved from task characteristics that provide challenging work, variety, the use of different skills, personal discretion, and the opportunity to make important contributions.
This is based on Hackman and Oldham's (1980) job characteristics model and, in particular, the five core job characteristics (i.e. skill variety, task identity, task significance, autonomy, and feedback). Jobs that are high on the core job characteristics provide individuals with the room and incentive to bring more of themselves into their work, or to be more engaged (Kahn, 1992). May et al. (2004) found that job enrichment was positively related to meaningfulness and that meaningfulness mediated the relationship between job enrichment and engagement.

Chapter 3: Methodology

3.1 Research design
The research design helps the researcher to solve the problem by describing the characteristics of the variables in this study, and it offers some ideas for future probing and research. Research design is a master plan specifying the methods and procedures for collecting and analyzing the needed information; it is a framework for the research plan of action (Zikmund, 2000). There are three basic types of research framework: exploratory, descriptive, and hypothesis-testing. For this research, the descriptive type is the most suitable. Descriptive research is normally directed by one or more formal questions or hypotheses; typically, a survey or questionnaire is administered to a sample from a population of interest. Gilbert and Dawn (2005) defined a descriptive research study as one typically concerned with determining the frequency with which something occurs or the relationship between two variables.

This research gathers information from primary and secondary data. A survey or questionnaire enables researchers to get information from each respondent directly. It is a tool for collecting primary data that adapts well to quantitative research, as it allows researchers to work with large samples and to establish statistical relationships or numerical comparisons (Raymond, 2001). As cited by Paul et al. (2004), questionnaires are generally viewed as providing quantitative (numerical) information, but researchers can also use them to measure qualitative concepts such as opinions and attitudes. Secondary data are obtained from previously published materials, such as magazines and government publications (Gill & Johnson, 2002). In this study, the researcher will collect primary data by using the survey method. A survey is a research technique in which information is gathered from a sample of people by using questionnaires (William & Zikmund, 2002).

3.2 Sampling Design
The sampling design of this research focuses on the staff of UCSI University and on employees working near the university. The researcher takes employees as the target population and randomly selects a few private universities in Kuala Lumpur as sources of respondents. Finally, after several rounds of random sampling, UCSI is chosen as the main source of respondents because its location and situation are the most suitable for this research. Questionnaires will be distributed randomly to respondents in the above places to collect information, and around 100 valid questionnaires will be used as the data for analysis. The covered areas are mainly UCSI's North Wing and South Wing, and the eligible respondents are individuals located at UCSI University. Zikmund (2000) stated that the process of sampling involves using a portion of a large population to draw conclusions representative of the whole population. A convenience sample, a form of non-probability sampling, was used: the probability of any member of the population being selected is unknown, and respondents are chosen because they are conveniently available to the researcher (Zikmund, 2000).
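The distinction between the simple random sampling first considered and the convenience sampling actually adopted can be made concrete with a small sketch. The Python below is purely illustrative and assumes an invented roster of potential respondents; it is not part of the study's procedure.

```python
import random

# Hypothetical roster of potential respondents around the university; the names
# and roster size are invented for illustration only.
roster = [f"respondent_{i}" for i in range(1, 501)]

TARGET_N = 100  # the study aims for around 100 usable questionnaires

# Simple random sampling: every person on the roster has an equal, known chance
# of being selected.
random.seed(42)  # fixed seed so the illustration is reproducible
random_sample = random.sample(roster, TARGET_N)

# Convenience sampling (the approach used in this study): take whoever is
# conveniently available, e.g. the first people encountered in the North Wing
# and South Wing.
convenience_sample = roster[:TARGET_N]

print(len(random_sample), len(convenience_sample))  # 100 100
```

The trade-off, as the text notes, is that convenience sampling is cheaper and faster, but the selection probabilities are unknown, so the sample may not represent the whole population.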
3.3 Questionnaire design
In order to obtain a quantitative description, the researcher chose the questionnaire as the research tool. Questionnaires gather information that cannot be found in secondary sources such as books, newspapers, and Internet resources, because the information researchers obtain is fresh and unique. According to Zikmund (2000), closed-ended questions are mainly used to obtain the respondents' demographic information, such as gender and nationality. The respondents answer the questions asked in the questionnaire by selecting the appropriate answer provided (Zikmund, 2000). The questionnaire consists of two main parts: Part A asks about respondents' personal background, while Parts B and C gather data about the factors that affect employee engagement. The questionnaire covers both male and female respondents to make the data collection more balanced. The researcher uses descriptive statistics to present quantitative descriptions of the collected data in a manageable form, such as tables and charts.

Questionnaires are usually paper-and-pen instruments that can be administered to many people at once and are relatively inexpensive to administer (Research Design, Data Collection Techniques and Selection of Subjects, 2004). Questionnaires are easy to compare and analyze, whereas interviews, which are completed by interviewers on the basis of what respondents report about their experiences, can be very time-consuming and expensive.

3.3.1 Measurement
Measures of job and organization engagement: items were written to assess participants' psychological presence in their job and in their organization.

It is impossible for the researcher to collect all the data available given the restrictions of time and money, and budget constraints prevent the researcher from surveying the whole population. The convenience sampling technique therefore provides an alternative: it is cheaper for data collection, allows the researcher to obtain the sample size quickly, and saves time so that the research project deadline can be met.

A sample item for job engagement is, "Sometimes I am so into my job that I lose track of time," and for organization engagement, "One of the most exciting things for me is getting involved with things happening in this organization." Participants indicated their responses on a five-point Likert-type scale with anchors ranging from (1) strongly disagree to (5) strongly agree. A principal components factor analysis with a promax rotation resulted in two factors corresponding to job and organization engagement.
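To make the measurement step above more concrete, here is a hedged sketch of how a two-factor structure could be checked on five-point Likert item responses with a principal components extraction and a promax rotation. It is not the study's actual analysis: the item names and simulated responses are invented, and it assumes the third-party factor_analyzer Python package is installed.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package, assumed installed

# Simulate five-point Likert responses for six invented items: three intended to
# tap job engagement and three intended to tap organization engagement.
rng = np.random.default_rng(0)
n = 100  # roughly the number of usable questionnaires targeted by the study
job_core = rng.integers(1, 6, n)   # latent "job engagement" tendency per respondent
org_core = rng.integers(1, 6, n)   # latent "organization engagement" tendency

def item(core):
    # Each item is its core tendency plus a little noise, clipped back to 1-5.
    return np.clip(core + rng.integers(-1, 2, n), 1, 5)

items = pd.DataFrame({
    "job_item_1": item(job_core),
    "job_item_2": item(job_core),
    "job_item_3": item(job_core),
    "org_item_1": item(org_core),
    "org_item_2": item(org_core),
    "org_item_3": item(org_core),
})

# Principal components extraction with an oblique promax rotation, retaining two factors.
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="promax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))  # job and organization items should load on separate factors
```

If the job items load mainly on one factor and the organization items on the other, that supports treating job engagement and organization engagement as distinct scales, as the study argues.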
3.4.1 Data Gathering
Data collection is the process of gathering information from the respondents, and there are many research techniques. Data collection is divided into primary data and secondary data. For instance, surveys can be divided into the questionnaire and the interview, while secondary data come from sources such as online journals.

3.4.2 Primary Data
Primary data are data collected for the particular research project at hand (Zikmund, 2000). Survey and interview techniques serve as the data collection methods in this research. Kalsbeek (2000) said that the survey is the method most often used to gather information from a sample, in which only a portion of the individuals in the population is chosen. Under a survey, information is collected through standardized, uniform questions, so that every individual is asked the same questions in the same way. Surveys can be divided into the questionnaire and the interview. Questionnaires are usually paper-and-pen instruments that can be administered to many people at once and are relatively inexpensive to administer (Research Design, Data Collection Techniques and Selection of Subjects, 2004). Questionnaires are easy to compare and analyze, whereas interviews, which are completed by interviewers on the basis of what respondents report about their experiences, can be very time-consuming and expensive. Conversely, observation is the process of recording the actual verbal and non-verbal behavior of people without communicating with them (Zikmund, 2000). Structured observation is a quantitative technique concerned with the frequency with which people do things (Saunders et al., 2003).

3.4.3 Secondary Data
