Administrative Science Quarterly 57(2): 269–304. © The Author(s) 2012. Reprints and permissions: /journalsPermissions.nav. DOI: 10.1177/0001839212453456

The Dynamics of Consensus Building in Intracultural and Intercultural Negotiations

Leigh Anne Liu,1 Ray Friedman,2 Bruce Barry,2 Michele J. Gelfand,3 and Zhi-Xue Zhang4

1 Robinson College of Business, Georgia State University
2 Owen Graduate School of Management, Vanderbilt University
3 Department of Psychology, University of Maryland
4 Guanghua School of Management, Peking University

Abstract

This research examines the dynamics of consensus building in intracultural and intercultural negotiations achieved through the convergence of mental models between negotiators. Working from a dynamic constructivist view, according to which the effects of culture are socially and contextually contingent, we theorize and show in two studies of U.S. and Chinese negotiators that while consensus might be generally easier to achieve in intracultural negotiation settings than intercultural settings, the effects of culture depend on the epistemic and social motives of the parties. As hypothesized, we find that movement toward consensus (in the form of mental model convergence) is more likely among intracultural than intercultural negotiating dyads and that negotiators' epistemic and social motives moderated these effects: need for closure inhibited consensus more for intercultural than intracultural dyads, while concern for face fostered consensus more for intercultural than intracultural dyads. Our theory and findings suggest that consensus building is not necessarily more challenging in cross-cultural negotiations but depends on the epistemic and social motivations of the individuals negotiating.

Keywords: negotiation, culture, mental models, concern for face, need for closure

Negotiation is a communicative exchange (e.g., Putnam, 1983) through which participants "define or redefine the terms of their interdependence" (Walton and McKersie, 1965: 3). It is a pervasive form of social interaction that arises not just in formal arenas, such as international relations, industrial relations, and manager-subordinate relations, but also in informal contexts, such as interpersonal relations and marital decision making (Pruitt and Carnevale, 1993). Arriving at optimal outcomes depends, in part, on creating a common understanding of the situation (McGinn and Keros, 2002). As Van Boven and Thompson (2003), Olekalns and Smith (2005), and Swaab et al. (2002) found, consensus making provides the very basis for economic success in negotiations, at least for negotiations that have the potential for joint gain through logrolling (in which each party gives up issues that are low in value for them but high in value to the other side) and information sharing. Dyads with more similar "mental models"—psychological representations of a situation (Craik, 1943)—are more likely to find joint gains than those whose mental models diverge (Van Boven and Thompson, 2003). Beyond negotiations, similarity in mental models has been shown to be critical for effective team functioning (e.g., Mathieu et al., 2000; Lim and Klein, 2006) and has been studied in areas as diverse as natural resource management (Jones et al., 2011) and campus sustainability efforts (Olson, Arvai, and Thorp, 2011) to examine whether and how communities of people see problems in the same way. Techniques involving mental models have also been proposed as a way to overcome stakeholders' knowledge limitations in effective risk communication (Morgan et al., 2002).

The study of mental models and consensus building is especially important for cross-cultural negotiations. Prior research has revealed cultural variation in perceptions of uncertainty (Gudykunst and Nishida, 1984), persuasion and influence styles (Johnstone, 1989; Brett and Okumura, 1998), and judgment biases (Gelfand et al., 2001) and has identified systematic differences in cognitions (Nisbett, 2003) and values (Schwartz, 1994) when
comparing people from different cultures. These variations should have a major impact on mental models in negotiation and help us understand why cross-cultural negotiations can be more difficult than within-culture negotiations. When negotiators from different cultures meet, their mental models are more likely to be different than when the other party is from the same culture, and these differences may account for variations in the quality of negotiation results. It is not only mental models at the outset of negotiation that matter most for the development of consensus, however; the critical process for consensus building is change in mental models, specifically their convergence, during the course of the negotiation. The process of mental model convergence and the effects of culture on that convergence are the focus of the theory developed below and the two studies we report.

The relationship between culture and consensus building via mental model convergence is complex because culture's effect on social interaction is not static or deterministic. The challenges encountered when managing negotiations cross-culturally do not inevitably inhibit negotiated agreements. Rather, the impact of culture depends on other individual and situational factors. Here, we adopted a dynamic constructivist view of cultural influence, which holds that culture affects individual cognition and behavior through the activation of knowledge structures via cultural, motivational, and contextual cues (e.g., Briley, Morris, and Simonson, 2000; Chiu et al., 2000; Hong et al., 2000; Morris and Fu, 2001; Morris and Gelfand, 2004; Fu et al., 2007). Dynamic constructivist research explores how behavioral propensities rooted in culture are altered by social experiences and motivational states (Morris, 2011).

Although there may be cultural differences in mental models, especially at the start of negotiation, we theorize that negotiators' tendency to constrain their views to culturally informed knowledge structures
is likely to vary with individual needs and motivations, in particular, the need for closure, which is a form of epistemic motivation (Webster and Kruglanski, 1994), and concern for face, a form of social motivation (Cheung et al., 1996). These motivational factors affect whether mental models converge during cross-cultural negotiations and, in doing so, influence the quality of the negotiated outcome. A dynamic constructivist view of culture helps us see that effects of cultural differences can be at times diminished and at times amplified. It is important, therefore, to look not just at how mental models may differ between negotiators from different cultures, but also at factors that enhance or constrain change and convergence in mental models in cross-cultural settings. This view moves beyond the simplistic assumption that all cross-cultural negotiations are alike and suggests that while cross-cultural negotiations do involve challenges, there are important factors that moderate these effects. We investigate these factors in two studies, which compare intra- and intercultural negotiating dyads.

CONSENSUS BUILDING AND CROSS-CULTURAL NEGOTIATIONS

Shared Mental Models

In literatures on communication and psychology, there is a rich and long-standing body of work showing how two parties interacting come to a common understanding of the situation (Grice, 1975; Clark and Brennan, 1991), sometimes called "grounding." The basic notion that individuals create meaning by building consensus around ideas developed in conversation traces back at least as far as the work of the early cognitive psychologist Frederic Bartlett (1967; cited in Kashima, 2000). This process of consensus building—of creating a collective purpose and understanding during conversation—is critical because "all collective actions are built on common ground and its accumulation" (Clark and Brennan, 1991: 127). This is no small accomplishment, as people in conversation have to come to a common understanding of both the content of the
conversation (talking about the same thing) and the process of conversation (who talks, when, and how). They have to overcome many barriers, such as speakers' variability and lack of proximity (Kraljic, Samuel, and Brennan, 2008), as well as computer mediation (Brennan, 1998). Yet, more often than not, this process is successful, as the two parties attend to each other's statements, moves, and signals and adjust to each other.

Inherent in the process of consensus building is the convergence of mental models. Mental models are cognitive representations that help individuals make sense of a situation (Craik, 1943). Mental models include many interconnected elements of the situation perceived by the individual, forming a "network of elements" in which an element's meaning is derived from its structural relation to other elements (Carley and Palmquist, 1992).

Mental models share common features with but can be distinguished from other cognitive structures people use to make sense of their surroundings. These include scripts (Abelson, 1976), schemas (Brewer and Nakamura, 1984; Fiske and Taylor, 1991), frames (Minsky, 1975), and belief or knowledge structures (Fiske and Taylor, 1991). These concepts all pertain to processes through which an individual sorts out information in his or her environment. Scripts are concerned with event sequences in linear temporal order and patterns that guide behavior (Schank and Abelson, 1977), while mental models are snapshots of perceived relationships at a given point in time. Knowledge structures emphasize the framework for organizing, relating, and retaining information in memory (Mayer, 1992), while mental models are the specific knowledge structures being used for sense making on a certain occasion. Schemas or frames represent established ways of perceiving a situation (e.g., having a "relationship" frame; Pinkley, 1995) that are not necessarily derived from the particulars of a situation, whereas mental models are built around the actual
circumstances at a given point in a specific situation. Mental models differ from these other concepts in that mental models are situation dependent, the construction of a mental model yields an integrated network of relations among perceived elements in a situation, and therefore a mental model reflects a holistic and specific cognitive experience. The elements of mental models include complex concepts with meanings that are independent of the structural links (e.g., Thompson and Loewenstein, 2003; Liu and Dale, 2009).

In a negotiated exchange, the process of interacting should create shifts in perceptions such that mental models are more similar at the end of the negotiation than at the beginning of negotiations. Three comparisons between mental models in a situation like this are possible. First, each party comes to the situation with a pre-negotiation mental model; these models may be more or less different between the parties, in terms of which elements are seen as relevant, which are connected to each other, and which are more central. Second, the mental model held by one party before the negotiation may be more or less similar to that same party's model after the negotiation, indicating the degree of change in mental models that occurs for that individual as a result of the negotiation process. Third, the two parties' mental models after the negotiation can be assessed to determine how similar they are; if they are more similar after the negotiation, we say that the two parties achieved mental model convergence during the negotiation.

The convergence of mental models—the building of consensus and greater similarity in mental models—is critical in negotiation because when negotiators share mental models, they are better able to understand one another and better able to exchange information accurately and efficiently (Van Boven and Thompson, 2003). Self-verification theory holds that people seek out verifying information for epistemic and practical reasons. Knowing that one's
beliefs are similar to others' helps feed perceptions that one's own beliefs are sensible; moreover, when interaction partners hold similar expectations and self-views, social interactions are less conflictual and better coordinated (Swann, Pelham, and Chidester, 1988). The similarity between an individual's self-views and others' appraisals of that individual constitutes the interpersonal congruence between them, which in turn promotes smooth and productive interactions (Polzer, Milton, and Swann, 2002). The similarity of mental models also fosters feelings of coherence, control, and predictability, enhancing understanding and collective efficiency (Swann, Stein-Seroussi, and Giesler, 1992). More similarity between the mental models of two negotiators creates a stronger basis for exchange that is more open and interactive, with a greater likelihood that one party will learn from the other. This can be especially important in negotiations in which there is a potential for joint gain.

Consensus based on similarity and adjustment of mental models can also affect the objective quality of the final results. Reduced to its essence, negotiation is a joint decision-making process aimed at allocating resources under conditions in which negotiators have divergent preferences and utilities (Bazerman and Carroll, 1987; Neale and Northcraft, 1991). Weingart and colleagues (Weingart et al., 1990; Weingart, Bennett, and Brett, 1993) showed that extensive information exchange generates trust and cooperation during the negotiation process, resulting in better outcomes for both parties when the negotiation has integrative potential. There is also evidence that negotiation pairs with higher summed perspective-taking ability (i.e., the ability to look at problems from another's perspective) achieve higher joint profits than pairs with lower summed perspective-taking ability (Kemp and Smith, 1994). When there is similarity among important elements of individuals' mental models, that similarity facilitates or, at a
minimum, reflects understanding of each other's perspective.

We take as a starting point, therefore, the idea that interacting parties progressing toward mutually satisfactory outcomes harbor mental models that are more similar at the end than at the beginning of the interaction. Similarity or consensus in mental models, in other words, is a potential marker of progress toward mutual gain and mutual satisfaction to the extent that it reflects the coordination of communications between parties, more accurate interpretations of each other's interests and messages, and a mutual understanding of key issues during decision making. Several studies of mental models in negotiation between people in the same culture support this view, showing that when mental models converge, producing greater consensus in perceptions, negotiators are able to achieve higher levels of joint gain (Van Boven and Thompson, 2003; Olekalns and Smith, 2005; Adair and Brett, 2005). Figure 1 illustrates the consensus-building process resulting from change in mental models in the direction of convergence that these prior studies have supported. This foundation is a particularly useful base for studying consensus building in cross-cultural negotiations.

Figure 1. Consensus Building in Mono-cultural Negotiation Research

Culture has been conceptualized as "a loose network of domain-specific cognitive structures" (Hong and Mallorie, 2004: 63), including theories and beliefs that shape people's patterns of feeling, behaviors, systems of thinking, and cognitive processes (Triandis, 1972; Hofstede, 1997; Nisbett et al., 2001).
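The three mental-model comparisons described in this section (pre-negotiation similarity between the parties, each party's individual change, and post-negotiation consensus) can be sketched computationally. In the sketch below, mental models are coded as sets of concept-to-concept links and compared with a Jaccard index; both choices are illustrative assumptions, not the coding scheme or similarity measure used in the studies cited above.

```python
# Illustrative sketch: mental models coded as sets of concept-to-concept links.
# The link sets and the Jaccard index are assumptions for illustration only,
# not the measurement approach of the studies cited in the text.

def jaccard(model_a: set, model_b: set) -> float:
    """Similarity of two mental models coded as sets of concept links."""
    if not model_a and not model_b:
        return 1.0
    return len(model_a & model_b) / len(model_a | model_b)

# Hypothetical pre- and post-negotiation models for a buyer and a seller.
buyer_pre   = {("price", "profit"), ("runs", "risk")}
seller_pre  = {("price", "profit"), ("financing", "cashflow")}
buyer_post  = {("price", "profit"), ("financing", "cashflow"), ("runs", "audience")}
seller_post = {("price", "profit"), ("financing", "cashflow"), ("runs", "audience")}

# The three comparisons described in the text:
pre_similarity  = jaccard(buyer_pre, seller_pre)      # parties' initial overlap
buyer_change    = 1 - jaccard(buyer_pre, buyer_post)  # individual model change
post_similarity = jaccard(buyer_post, seller_post)    # consensus after talks

# Convergence: did the dyad end up more similar than it started?
converged = post_similarity > pre_similarity
```

A set-overlap index is only one way to compare such networks; any measure that scores structural agreement between two concept networks would support the same three comparisons.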
Culture influences factors and processes that are particularly relevant to decision making in general and negotiation in particular. This has been shown in studies of culture (e.g., Nisbett et al., 2001) and in studies of negotiators' perceptions (e.g., Mannix, Tinsley, and Bazerman, 1995; Brett and Okumura, 1998; Gelfand et al., 2001, 2002; Adair and Brett, 2005). Thus we can expect that negotiators from different cultures will have mental models that are more likely to differ from each other than will negotiators from the same culture. If convergence of mental models is key to negotiation outcomes involving mutual gains, achieving those outcomes is likely to be a greater challenge for negotiators who come from different cultures, because they will start with mental models that are more different and need to undergo greater change of mental models to achieve consensus.

But the effects of culture need not be regarded as static (e.g., Hofstede, 1980) or deterministic. A dynamic constructivist view of culture emphasizes the malleable nature of culture (Kashima, 2000), contending that the strength of culture's influence in a given moment or situation varies a great deal (e.g., Morris and Gelfand, 2004; Morris, 2011). For example, marketing research shows that the impact of culture on consumers' decision making can be amplified simply by requiring people to provide reasons for their choices (Briley, Morris, and Simonson, 2000). Cultural differences are also amplified by individuals' levels of need for closure (Fu et al., 2007; Chiu et al., 2000) and by requiring accountability for individual behavior (Gelfand and Realo, 1999; Liu, Friedman, and Hong, 2012). Thus the presence of differences in mental models at the start of cross-cultural negotiations may pose more or less of a challenge to the process of building consensus under different conditions. To understand how a convergence of mental models can be achieved even when large differences between models exist at the start of interaction, we need to assess
factors that generally enhance or inhibit the flexibility of mental models in negotiation.

Motivation and Culture in Mental Models

Proposing a motivated information processing model, De Dreu and Carnevale (2003) suggested that interpersonal processes and outcomes are influenced by two types of motivations: epistemic motivation and social motivation. An important epistemic motivation is the need for closure, the motivation to reach judgments that are conventional and stable (Kruglanski, 1989: 236). Social motivation includes the concern for relations with others. De Dreu and Carnevale (2003) used a more specific definition of social motives, focused on the desire to help or hurt the payoff to the other party. While this particular expression of social motives is important for negotiation, the driver of the desire for those payoffs is concern for the quality of the relationship with the other party, which is core to the broader definition of "social motivations." We focus here on the broader definition of social motives. As Forgas, Williams, and Laham (2005: 5) put it, "humans need meaningful social contact, and the motivation for such contact is crucial to the maintenance of a healthy sense of adjustment and a sense of identity." One example of this type of social motivation is concern for face, which is the motivation to enhance one's self-image and avoid loss of reputation (Cheung et al., 1996). The epistemic motivation of need for closure and the social motivation of concern for face are likely to have different effects on mental model convergence and consensus.

Need for closure. Need for closure is an individual characteristic pertaining to cognitive style that researchers have connected with both social interactions and culture. It is known to lead individuals to seek answers that concur with the group consensus (Kruglanski and Webster, 1996) and resolve conflicts (Fu et al., 2007). Having a high need for closure implies a lack of flexibility in dealing with uncertainty. Need for
closure represents the desire for definite structure, an affective discomfort occasioned by ambiguity, urgency for closure in judgment and decision making, desire for predictability in future contexts, and closed-mindedness and unwillingness to be confronted (Kruglanski, Webster, and Klem, 1993; Webster and Kruglanski, 1994; Kruglanski and Webster, 1996). Individuals high in need for closure tend to "seize and freeze" information early on during social interaction (De Dreu, Koole, and Oldersma, 1999). Consequently, once one has frozen a mental model, he or she is more likely to dismiss information inconsistent with the model and less likely to adapt to new information than those who are low in need for closure (Kruglanski, 1989; Kruglanski and Webster, 1996; Jost, Kruglanski, and Simon, 1999; De Dreu, Koole, and Oldersma, 1999). Thus we expect that change in mental models will be lower for those high in need for closure, and as a result, consensus between interacting parties at the conclusion of negotiation will be less among those who are higher in need for closure than those who are lower in need for closure. Because consensus leads to more desirable joint outcomes, if high need for closure is associated with less change in mental models and less consensus, then we would expect joint outcomes to be lower for negotiators with high need for closure.

Hypothesis 1a: Individual change in mental models during a negotiation is smaller for negotiators high in need for closure than for negotiators low in need for closure.

Hypothesis 1b: Consensus (or convergence in mental models between negotiators) at the end of the negotiation is less in dyads of individuals with higher need for closure than dyads of individuals with lower need for closure.

Hypothesis 1c: Joint outcomes are lower for negotiating dyads of individuals with higher need for closure.

Need for closure and cultural match. Though need for closure has effects on negotiations in general, we expect that the effect will be especially strong in intercultural
negotiations, in which there is a weak cultural match between negotiators. In intercultural interactions, heightened need for closure will lead negotiators not only to "freeze" the mental model early but also to instigate stereotypical judgments toward culturally distant others because they feel negatively disposed toward those with different opinions and cultural traits (Kruglanski and Webster, 1996). Adherence to group norms provides cognitive closure (Kruglanski et al., 2006), which should be comforting for those with high need for closure facing uncertain situations such as an intercultural negotiation. Freezing the mental model makes it difficult to absorb new information or update one's analysis of the negotiation and leads one to forego opportunities to discover integrative potentials that can lead to joint gains. Therefore high need for closure makes it more difficult to reach agreement, and more so in intercultural negotiations.

Research has shown that high need for closure tends to amplify cultural tendencies (Chiu et al., 2000; Fu et al., 2007), and this appears especially true for those who desire consistent cultural identity (Hong et al., 2003). Fu et al.
(2007: 203) argued that high need for closure activates cognitions that are conventional in one's home culture and thereby provides the "epistemic security of consensual validation." Thus in intercultural situations, individuals high in need for closure may exhibit an additional layer of rigidity on top of the general rigidity we would expect for those with high need for closure. Cultural tendencies toward change of mental models over the course of the negotiation and the quality of outcomes in negotiations are amplified when need for closure is high and the negotiation context is intercultural.

Hypothesis 1d: The negative effect of high need for closure on consensus is stronger for intercultural negotiation pairs than for same-culture pairs.

Hypothesis 1e: The negative effect of high need for closure on joint outcomes is stronger for intercultural negotiation pairs than for same-culture pairs.

Concern for face. Concern for face, a form of social motivation, is an individual characteristic that encompasses motivation to enhance one's public image and to avoid a loss of reputation (Goffman, 1959; Ting-Toomey, 1988; Cheung et al., 1996; Earley, 1997). Face represents a claimed sense of self in a relational situation. Ting-Toomey (1988) argued that even though those in intercultural encounters differ in the way they manage face, face is a universal phenomenon in that everyone prefers to be respected, and everyone benefits from a sense of self-respect. Although the concept of face has its origin and holds more importance in Asian cultures (Hu, 1944), measures of face as an individual difference construct have been validated in both Eastern and Western cultures (Ting-Toomey et al., 1991; Cheung et al., 1996, 2001; Liu, Friedman, and Chi, 2005). Many texts have emphasized the importance of dealing with the other parties' face in conflict management and negotiation in order to recognize and respect their social identity and status (e.g., Brett, 2001). In contrast with need for closure, which is theorized
epistemically to hinder mental model convergence, concern for face should facilitate the convergence of mental models. As a reflection of one's desire for social acceptance, concern for face is associated with a claimed sense of social esteem that an individual wants others to have for him or her. Face is a vulnerable identity resource in social interaction because it can be enhanced, threatened, or bargained over (Ting-Toomey, 1988; Erez and Earley, 1993; Cocroft and Ting-Toomey, 1994; Earley, 1997). Because high concern for face indicates that one is more sensitive to how others view him or her, we expect greater awareness of the other party and his or her needs among individuals high in concern for face. Cheung et al. (1996, 2001) found that concern for face is linked to higher levels of interpersonal relatedness, relationship orientation, and social sensitivity. Given this added attention to the other party, concern for face ought to enhance convergence of mental models with effects that are the inverse of need for closure.

Hypothesis 2a: Individuals' mental model change during negotiation is larger for those high in concern for face than for those low in concern for face.

Hypothesis 2b: Consensus (or mental model convergence between parties) at the end of a negotiation is greater in dyads composed of individuals with higher concern for face than dyads composed of individuals with lower concern for face.

Hypothesis 2c: Joint outcomes are greater for negotiation dyads composed of individuals with higher concern for face.

Concern for face and cultural match. Concern for face, as a manifestation of social motivation, encourages pro-social behaviors and attention to others, which leads to more information absorption and opportunities to discover integrative potential in negotiation. We expect that these effects will be especially strong in intercultural negotiations. Given greater initial differences in mental models between the two parties in intercultural situations, convergence
requires more individual mental model change over the course of the interaction; in that situation, the amount of perceptual and cognitive attention paid to the other party can be especially beneficial. In intercultural negotiations, individuals experience a heightened awareness of self-identity, because they may attempt to be positive role models of their culture (Latane, 1981), as well as a heightened awareness of how others perceive their culture. Negotiators with high concern for face will see the intercultural situation as an especially important opportunity to display the positive side of their culture and personality, while those with lower concern for face may attend less to situational differences between intercultural and same-culture contexts. Accordingly, we propose that intercultural negotiation provides a context in which the effects of concern for face on mental models will be amplified.

Hypothesis 2d: The positive effect of high concern for face on consensus is stronger for intercultural negotiation pairs than for same-culture pairs.

Hypothesis 2e: The positive effect of high concern for face on joint outcomes is stronger for intercultural negotiation pairs than for same-culture pairs.

METHODS

Pilot Study

We conducted a pilot study to establish the measurement of mental models within a two-party negotiation simulation called Cartoon (Brett and Okumura, 1999). The simulation involves the sale of syndicated rights of a children's television cartoon. Participants in the U.S. and in China were assigned to either the buyer or seller role and were given confidential role information, in their native languages, the day before the negotiation. The seller is a major film production company that is prepared to negotiate a fixed five-year, 100-episode contract.
The buyer is an independent television station in a large metropolitan area. The parties negotiate five issues. One issue is distributive: the price of each episode. Two integrative issues—financing terms and runs (the number of times each episode may be shown in the five-year period)—create a logrolling opportunity: it is more important for the seller to have payment up front and for the buyer to have a greater number of runs. There is one common-value issue:
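The logrolling opportunity described above can be made concrete with hypothetical point schedules. The numbers below are invented for illustration (the actual Cartoon payoff tables are confidential role materials and are not reproduced here); they show why each side conceding its low-priority issue yields more joint value than each side winning only its low-priority issue.

```python
# Hypothetical point schedules illustrating logrolling; the values are
# invented for illustration and are NOT the confidential Cartoon payoffs.
# Each option scores (seller points, buyer points).

financing = {"upfront": (40, 0), "installments": (0, 10)}  # seller's priority issue
runs      = {"4_runs": (10, 0), "8_runs": (0, 40)}         # buyer's priority issue

def joint_value(fin_choice: str, runs_choice: str) -> int:
    """Total points across both parties for a pair of issue settlements."""
    seller = financing[fin_choice][0] + runs[runs_choice][0]
    buyer = financing[fin_choice][1] + runs[runs_choice][1]
    return seller + buyer

# Each side wins only its low-priority issue: little joint value.
compromise = joint_value("installments", "4_runs")  # (0+10) + (10+0) = 20
# Logrolling: each side wins its high-priority issue
# (seller gets upfront payment, buyer gets more runs).
logrolled = joint_value("upfront", "8_runs")        # (40+0) + (0+40) = 80
```

Because the two sides weight the issues asymmetrically, trading issues dominates splitting them, which is the integrative potential the dyads in the simulation can discover.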
Permian–Triassic extinction event

The Permian–Triassic extinction event was a mass extinction that occurred at the boundary between the Paleozoic Permian and the Mesozoic Triassic, approximately 251.4 million years ago [1][2].

Measured by species lost, about 70% of terrestrial vertebrates and up to 96% of marine species on Earth disappeared [3]; the event also caused the only known mass extinction of insects.

In total, 57% of families and 83% of genera disappeared [4][5].

After the extinction event, terrestrial and marine ecosystems took several million years to recover fully, longer than after any other major extinction event [3].

This was the largest of the five major extinction events in the geologic record, and it is therefore informally called the Great Dying [6] or the mother of all mass extinctions [7].

The course and causes of the Permian–Triassic extinction event are still debated [8].

Depending on the study, the extinction can be divided into one [1] to three [9] phases.

The first, smaller peak may have resulted from gradual environmental change, possibly sea-level change, ocean anoxia, or the arid climate brought on by the formation of Pangaea; the later peak was rapid and severe, and may have been caused by an impact event, volcanic eruptions [10], or abrupt sea-level change triggering the massive release of methane hydrates [11].

Contents
1 Dating
2 Extinction patterns
  2.1 Marine organisms
  2.2 Terrestrial invertebrates
  2.3 Terrestrial plants
    2.3.1 Plant ecosystems
    2.3.2 The coal gap
  2.4 Terrestrial vertebrates
  2.5 Possible explanations of the extinction patterns
3 Recovery of ecosystems
  3.1 Changes in marine ecosystems
  3.2 Terrestrial vertebrates
4 Causes of the extinction
  4.1 Impact event
  4.2 Volcanic eruptions
  4.3 Gasification of methane hydrates
  4.4 Sea-level change
  4.5 Marine anoxia
  4.6 Hydrogen sulfide
  4.7 Formation of Pangaea
  4.8 Multiple causes
5 Notes
6 Further reading
7 External links

Dating

Before the twentieth century, strata spanning the Permian–Triassic boundary were rarely found, so scientists had great difficulty accurately estimating the date and duration of the extinction event, as well as its geographic extent [12].
J.P. Morgan China Research, 2012
Asia Pacific Economic Research

China: Macro easing on the way with heightened growth concerns

• Delayed macro easing is now on the way; fiscal policy will play a more active role
• It's not an extra easing, but moving upfront proactive fiscal and prudent monetary policy as initially planned
• Macro easing is moderate and targeted, and will benefit infrastructure, affordable housing, environmental protection, tech and energy-saving sectors
• Economic restructuring and property tightening will continue

A simple recap of economic data

China's April economic data came in as a big disappointment and reignited concerns about China's economic outlook. In general, various indicators pointed at broad-based weakness in economic activity.

On the external side, April merchandise exports weakened further. Exports increased by 4.9%oya, and imports increased by merely 0.3%oya. These translated into declines of 0.5%m/m, sa and 6.4%m/m, sa, respectively. The weakness of exports was broad-based, including low-end consumer goods (-9.4%m/m, sa), mechanical and electrical products (-1.6%m/m, sa) and high-tech products (-1.0%m/m, sa). Weaker-than-expected imports were in part due to declining import prices, but even in volume terms, imports of commodities generally fell in April, most notably aluminum (-20.6%m/m, sa), refined oil (-17.6%m/m, sa) and copper (-11.8%m/m, sa).

On the domestic front, retail sales value rose 14.1%oya in April, a moderate slowdown from 15.2%oya in March. Seasonally adjusted, retail sales in April increased by 0.6%m/m, after a notable expansion of 6.6%m/m in March.

Fixed investment growth registered at 19.7%oya in April, compared to 20.7% in March. This translated into FAI growth of 20.2%oya, ytd in the first four months of the year. Real estate investment slowed down notably, rising 18.7%oya in the first four months (vs. 23.5%oya, ytd in March). This translated into an increase of 9.2%oya in April, a significant slowdown from March (19.6%oya) and the lowest growth since June 2009.
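The growth-rate conventions used throughout this note (%oya for percent change over a year ago, %m/m, sa for seasonally adjusted month-on-month change, and %q/q, saar for a seasonally adjusted annualized quarterly rate, as in the GDP table) reduce to simple arithmetic. The sketch below shows the conversions under standard compounding; the seasonal-adjustment procedure itself is not reproduced.

```python
# Sketch of the growth-rate conventions in this note. Only the rate
# arithmetic is shown; the seasonal-adjustment step is assumed done upstream.

def pct_oya(level_now: float, level_year_ago: float) -> float:
    """Percent change over a year ago (%oya)."""
    return (level_now / level_year_ago - 1) * 100

def saar_from_qoq(qoq_pct: float) -> float:
    """Annualize a seasonally adjusted quarter-on-quarter % change (%q/q, saar)."""
    return ((1 + qoq_pct / 100) ** 4 - 1) * 100

def saar_from_mom(mom_pct: float) -> float:
    """Annualize a seasonally adjusted month-on-month % change."""
    return ((1 + mom_pct / 100) ** 12 - 1) * 100

# A level 8% above its year-ago value is 8.0%oya.
oya = pct_oya(108.0, 100.0)
# A 1.9% sa quarterly gain annualizes to roughly 7.8% saar,
# the same range as the quarterly forecasts later in the note.
saar = saar_from_qoq(1.9)
```

Annualized rates amplify short-run swings, which is why the %q/q, saar series moves much more than the %oya series over the same quarters.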
This happened against the backdrop of further declines in house prices, housing transactions and new home starts.

The limited data available for May suggest that economic conditions have not improved. The flash reading of China's Markit manufacturing PMI eased to 48.7 in May, compared to the final reading of 49.3 in April. Among the major PMI components, the output component rose to 50.5 (from 49.3 in April), while the new orders component fell to 48.4 (from 49.7 in April) and the export orders component fell to 47.8 (from 50.2 in April). This is the seventh consecutive month that the Markit manufacturing PMI has stayed below the 50 threshold.

With economic data weaker than expected, we have recently revised down our 2012 GDP forecast to 8.0%oya (previous forecast: 8.2%oya).

Real GDP growth (percent change)
              2011  2012F  4Q10  1Q11  2Q11  3Q11  4Q11  1Q12  2Q12f  3Q12f  4Q12f
%oya           9.2   8.0    9.9   9.7   9.5   9.1   8.9   8.1   7.8    7.9    8.1
%oya, ytd      9.2   8.0   10.4   9.7   9.6   9.4   9.2   8.1   7.9    7.9    8.0
%q/q, saar                 10.5   9.6   8.2   8.6   8.8   6.8   7.0    9.1    9.5

Good news, bad news

While April data were in general on the weak side, three indicators were particularly worrisome.

First, industrial production rose 9.3%oya in April, compared to 11.9%oya in March. This is the first time since June 2009 that IP growth has dropped into the single-digit range. Seasonally adjusted, IP contracted by 0.9%m/m in April.

Second, electricity consumption rose by only 3.6%oya in April, compared to 11.5%oya in 4Q11 and 6.8%oya in 1Q12. Given the historically close relationship between electricity consumption and GDP growth, the deceleration in electricity consumption is worrisome and could suggest broad-based weakness in domestic demand.

Third, corporate profitability has continued to deteriorate this year. In April, total profits of industrial enterprises declined by 2.2%oya. In the first four months, total profits declined by 1.6%oya, even though sales revenue increased by 12.7%oya.
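When profits fall while revenue rises, the profit margin is mechanically compressed by the ratio of the two growth factors. A quick back-of-envelope sketch (our arithmetic, for illustration only, using the Jan-Apr growth rates above and a year-earlier margin of about 6.3% as cited in this note):

```python
# Profit margin scales with (1 + profit growth) / (1 + revenue growth).
# Jan-Apr figures from the note: profits -1.6%oya, sales revenue +12.7%oya.
prior_margin = 6.3  # percent, margin roughly one year earlier
new_margin = prior_margin * (1 - 0.016) / (1 + 0.127)
print(round(new_margin, 1))  # roughly 5.5
```

The small gap versus the 5.6% reported below for April reflects that the published April margin is computed on April-only data, whereas this sketch uses the cumulative Jan-Apr growth rates.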
Accordingly, the profit margin dropped to 5.6% in April, compared to 6.3% one year ago. In the first four months, corporate profits declined the most for state-owned enterprises (-9.9%oya) and foreign-owned enterprises (-13.2%oya), but increased for private-owned enterprises (20.9%oya) and collective-owned enterprises (12.3%oya).

Despite the bad news, there were also encouraging messages. On the good-news side, easing inflation pressure and stable employment conditions are worthy of special attention, and both factors may have a notable impact on policymakers' decisions.

China's April CPI inflation rate eased moderately to 3.4%oya, compared to 3.6% in March. This brought the 1-year real deposit rate back into positive territory, at 0.1%. Seasonally adjusted, headline CPI rose 0.2%m/m, sa in April. Increases in food prices softened to 7.0%oya (compared to 7.5%oya in March), while non-food prices remained quite stable, rising only 0.1%m/m, sa in April. Looking ahead, we expect CPI inflation to continue falling to around 3% by mid-year. The easing in inflation pressure opens room for fine-tuning of monetary policy.

In addition, employment conditions have remained healthy. Unlike in 2008, there has this time been no major concern about unemployment associated with weakening economic growth.

[Chart: China real industrial production (%oya, 3mma; %3m/3m, saar)]
[Chart: GDP and electricity consumption (%oya)]
[Chart: China industrial enterprise profit growth (%oya, 6mma) and profit margin (% share, 6mma)]

Policy to shift towards growth concerns

The weaker-than-expected economic performance reflected external and internal drags faced by the Chinese economy. On the external front, uncertainties surrounding the euro area sovereign debt crisis still persist, and in fact have returned to center stage in recent weeks.
The impact of the euro area crisis on the Chinese economy comes mainly via the trade channel. On the domestic front, the slowdown in domestic demand has been mainly due to the government's efforts to address imbalances in the economy, including very high house prices and rapid growth achieved at high social, environmental and energy costs. Tightening in the housing market and in sectors with overcapacity (e.g. autos and steel) is an important part of these efforts. However, the slowdown in these key industries has been a significant drag on the economy.

In addition, macro policy has been behind the curve, an additional factor behind business cautiousness and weak domestic demand. The proactive fiscal policy this year has so far been quite muted. On the monetary policy front, the signal from the central bank has been on the cautious side. Both of the two RRR cuts this year came later than expected, even though the PBOC acknowledged that the current RRR is too high and that RRR cuts should not be read as a signal of monetary easing.

The disappointing economic data added to growth concerns, and macro easing is now on the way. In the week after the release of the April economic data, the PBOC cut the RRR by 50bp and the State Council announced a consumption stimulus package of RMB36.3 billion. Moreover, Premier Wen and the State Council announced that stabilizing economic growth will move up the government's priority list, and that pro-growth fiscal, monetary and financial policies will be adopted going forward.

Our interpretation of the macro easing

The shift towards pro-growth policies is a welcome move; however, we think the macro easing is mainly intended to speed up the implementation of pro-growth measures in the initial economic plan, and does not include extra easing.
This can be seen from the reiteration of proactive fiscal policy and prudent monetary policy in the government's statement, and from the lack of new pro-growth policies compared to the economic plan approved at the beginning of the year.

What macro easing policies are likely to be adopted in the coming months? Given that the major concern in the economy is weak domestic demand, there are two possible policy options: one is to use fiscal policy instruments to encourage domestic investment and consumption; the other is interest rate cuts.

We think the government is more likely to adopt the first option; in other words, the likelihood of interest rate cuts is small in the baseline scenario, for several reasons. First, real deposit rates are still at low levels, and an interest rate cut would be a strong signal of deviation from the prudent monetary policy stance. Second, an interest rate cut is a relatively blunt policy instrument that would benefit almost all sectors, conflicting with the government's efforts at economic restructuring. Third, banks are able to charge premiums above the minimum lending rates, which makes rate cuts less effective. Lastly, there are alternative options to lower the financing costs of potential borrowers. For example, the central bank can enlarge the range of discounts offered by banks (currently 10% below the minimum lending rate), or the fiscal authority can lower the funding costs of specific borrowers via interest rate subsidies (e.g. affordable housing loans) or guarantee/re-guarantee arrangements (e.g. SME loans).

Fiscal policy will play a more active role this year. We expect this to happen in three areas:

1. Investment in areas such as infrastructure (especially in inland and rural areas), city transportation, environmental protection, affordable housing, new energy and technology. We expect investment to be the key driver of the economic comeback in 2H12.
The pickup in investment involves not only the continuation of ongoing projects but also the approval of new projects. For instance, the NDRC website shows that the pace of project approval has sped up since April, with projects covering clean energy, airports, information, infrastructure and water conservation. On May 21 alone, the NDRC approved 92 projects (compared to a total of 395 projects approved in Jan-Feb). At the local level, there is also evidence that government agencies and companies are being encouraged to submit this year's projects within the next 1-2 months.

In addition, what is different this time is that investment activity could receive a further stimulus as the government plans to open state-controlled sectors, such as railways, energy, and the financial sector, to private capital. The general guideline was announced in 2010, and implementation details are expected to be released before the end of 2Q12.

2. Structural tax cuts. The government will take further measures to reduce the tax burden in certain sectors, for instance by expanding the VAT reform in the service sector, currently at the pilot stage in Shanghai, to other provinces. Structural tax cuts could be an effective policy to address the corporate profit problem and encourage investment. Reducing the share of government income in GDP is also necessary, given that the government aims to increase the share of labor income in GDP at a time when corporate profitability is under stress. Nonetheless, the magnitude of the structural tax cuts adopted by the Chinese government is likely to be limited.

3. Consumption stimulus policy. On May 16, the government announced a consumption stimulus package (totaling RMB36.3 billion) to encourage consumption of energy-saving home appliances, small cars, etc.
More measures could be announced along these lines, not only to encourage consumption but also to encourage upgrading in the related industries.

Beyond fiscal measures, monetary and industry policies (including financial sector reform) will also be used to support economic growth. Monetary policy will rely mainly on quantitative measures; in particular, we expect 1-2 more RRR cuts this year.

It is worth noting that, although macro policy is shifting towards growth concerns, economic restructuring remains a key policy objective. This has two important implications. First, macro easing will tend to be moderate and targeted. Extra easing is possible, but only if the downside risk deteriorates further (e.g. if Greece exits the euro area). Second, property tightening (especially home purchase restrictions) will continue. The government may fine-tune housing policy to support the supply of affordable housing and ordinary private housing, but will not ease restrictions on speculative demand in the housing market.

Exploring sources of funding

The economic stimulus plan will be funded from various sources. One is fiscal expenditure. Although fiscal revenue growth has slowed in recent months due to slowing economic activity and structural tax cuts, it is worth noting that the government managed to run a fiscal surplus of RMB875 billion in the first four months of the year. Recall that the 2012 fiscal budget deficit was set at RMB800 billion; additionally, taking into account the transfer from the fiscal stability fund, the total fiscal deficit was estimated at RMB1.07 trillion (about 2% of GDP). Putting everything together suggests decent room for fiscal spending growth in the coming months.

Second, the central bank has been highlighting the importance of ensuring appropriate growth in bank loans to support steady economic growth, with particular emphasis on areas of policy support.
For example, bank loans to SMEs have been rising at a faster pace than average bank loans, amounting to 39.7% of total loans outstanding by 1Q12.

Third is the capital market, especially the bond market. Total local government bond issuance will come in at 250 billion yuan for 2012, compared to the 200 billion yuan annual issuance between 2009 and 2011. In addition, according to the latest data on total social financing, social financing arising from new corporate bond issuance came in at 485.1 billion yuan during the first four months of the year.

Last but not least, the authorities have placed increasing emphasis on private capital, allowing it to enter sectors previously dominated by the public sector, including railways, transport and communication, the financial industry, medical services, etc.

A particular example of how the funding issue will be addressed is railway investment. While railway investment growth slowed notably through the course of 2011, the government appears to have turned proactive lately in resolving the sector's funding constraints, with emphasis on credit support for railway investment, railway bond issuance, and encouraging private sector involvement in railway investment.

[Chart: Government surplus or deficit (RMB bn), Jan 2011-Apr 2012]

Implications for the financial market

The financial market has reacted strongly to the weak economic data. The MSCI China index fell by about 13% in May and by 6% in the past two weeks. In the credit market, the 5-year sovereign CDS spread rose by 25bp in May to 136bp.

(1) Stock market

Looking ahead, we remain cautious on the economic outlook for 2Q12 but expect the economy to come back in 2H12 on the expected macro easing, especially the expected increases in fixed investment.
As such, the stock market may remain highly volatile in the next 1-2 months, but that may provide a good opportunity to gradually accumulate selected quality China stocks to play the rising growth momentum in 2H12.

The stock market is to a large extent influenced by risk appetite, which in turn depends on market confidence in the Chinese economy as well as on global events, such as the evolution of the euro area sovereign debt problem and the possibility of additional quantitative easing in the G4 economies. From a fundamental perspective, several key themes in China's economic outlook will affect financial market performance.

First, China is likely to see a combination of economic recovery and disinflation in the second half of the year. The real deposit rate, now already back in positive territory, will very likely stay positive over the next few quarters as inflation pressure continues to ease. We expect moderate upside in the China stock market in 2H, as history has shown under similar economic conditions.

Second, given that macro policy is targeted, the stock market implications will differ across sectors. As pointed out above, we expect investment expenditure to be the most important component of the easing policy, especially in areas such as affordable housing, infrastructure, city transportation, environmental protection, new energy and technology. On the railway side, for instance, although railway investment in the first four months was still 43.6% lower than a year ago, the pace of decline has been moderating over the past few months, and recent news suggests that China will speed up railway investment from end-May as a countercyclical tool. As such, investors may want to gradually add exposure to sectors that benefit from the easing measures, including tech, cement, construction and airports.
Conversely, investors should be cautious on property, steel and related industries, where tightening measures will remain in place.

Third, from a longer-term perspective, moving inland will be a major theme in China. Economic growth in central and western China has outpaced that in eastern China over the past several years, and the catch-up process will continue. Looking ahead, this theme could be reflected more clearly in the stock market.

(2) FX market

In the FX market, our baseline view is that USD/CNY moves sideways during 2Q, before regaining modest appreciation in the second half of the year. Chinese growth continues to face headwinds not only from weak and uncertain external demand, but also from the domestic drags associated with the government's efforts to transform the growth model. Meanwhile, the cautious attitude of China's policymakers towards growth-supportive policies has led to concerns that macro policy could be behind the curve, which creates downside risk to our near-term growth forecast. Nevertheless, we continue to expect China to adopt a combination of fiscal, monetary and industry policies to boost growth, especially after mid-summer, when the distraction from political uncertainties is likely to fade.

Against this macro backdrop, the CNY is likely to fluctuate around the current level during 2Q, before resuming a gradual appreciation going into 2H12, as the domestic economy begins to improve on policy support and as the global environment hopefully stabilizes somewhat (given current JPM forecasts). On the external front, as the trade balance returns to decent surpluses, external pressure could build on the CNY again, especially as the US enters the election season towards year-end. These factors would likely drive moderate CNY appreciation during the second half of the year (we look for about 2% appreciation in the CNY against the USD for the full year).
And given the recent widening of the USD/CNY daily trading band, the exchange rate could exhibit greater two-sided volatility this year.

Meanwhile, if the European situation turns more disorderly, with further significant downside risks to the European (as well as global) economies, there could be moderate downside risk to the CNY/USD exchange rate this year. For instance, as our European team suggests, if Greece exits the euro area, EUR/USD may drop to 1.1 and euro area GDP growth may fall by 2%-pts relative to the benchmark. If that happens, it will hurt China's export sector and drag on economic growth more notably. We would expect policy support to step up in that scenario, including an adjustment in the exchange rate strategy; the CNY may depreciate against the USD to 6.40-6.45.

(3) China's demand for commodities

China's economic outlook has important implications for the regional and global economies. China contributed nearly 40% of the increase in global GDP in 2011. In the global commodity market, China's consumption as a share of global demand has increased steadily over the past decade; in 2011, China accounted for 54% of global demand for iron ore, 43% for aluminum, 42% for zinc, 38% for copper and 39% for nickel. Lately, the weakening in China's demand has been a major hit to the global commodity market as well as to the economic outlook of major commodity exporters (such as Australia and Brazil).

There are two main reasons why we are cautious on China's commodity demand in 2012. First, our study suggests that China's commodity imports are closely related to fixed asset investment. For 2012, we expect FAI growth to slow to 18% (vs. 24% in 2011). Looking further into the future, reducing the over-reliance on investment is a key objective for China's policymakers. Hence, China's commodity demand has exited its high-growth period.

Second, China is also in the process of moving away from the high-energy-consumption, high-environmental-cost growth model.
Under the 12th Five-Year Plan, China aims to reduce energy consumption per unit of GDP by 20%. This will also slow China's demand in the global commodity market.

Nevertheless, China's demand is unlikely to collapse. FAI growth remains solid, though it is moderating from the high levels observed in the past several years; our forecast of FAI growth in real terms is 15% in 2012 (vs. 18% in 2011). And given that the expected easing policy will likely still focus on investment in targeted sectors, we expect China's commodity demand to improve moderately entering the second half of the year.

[Chart: Fixed investment and commodity imports (%oya, 3mma, both scales)]

Haibin Zhu, (852) 2800-7039, JPMorgan Chase Bank, N.A., Hong Kong
Grace Ng, (852) 2800-7002, JPMorgan Chase Bank, N.A., Hong Kong
Lu Jiang, (852) 2800-7053, JPMorgan Chase Bank, N.A., Hong Kong
Public Disclosure Materials for the State Natural Science Award
On this basis, the project proposed several novel and effective design methods for controllers and filters, refining the theoretical framework of delay-dependent stability and robust control for time-delay systems.
The project's eight representative papers have been widely cited and highly praised by peers at home and abroad, with a total of 1,315 citations by others, of which over 905 are SCI citations by others; one paper received the Most Cited Article Award from Automatica, a leading journal in the field of automatic control.
(2) Prof. C. E. de Souza, member of the Brazilian Academies of Sciences and Engineering, IEEE Fellow, and Subject Editor of the International Journal of Robust and Nonlinear Control (IJRNC), together with his collaborators, commented on Key Scientific Discovery 1 as follows: "Following Xu and Lam (2005) on linear state-delayed systems, the method proposed in this paper uses neither model transformations nor upper bounds on vector inner products, in sharp contrast to many existing delay-dependent Lyapunov-Krasovskii functional approaches." (Here, Xu and Lam (2005) is Representative Paper 1.)
The project is recommended for the Second Prize of the State Natural Science Award.
Huijun Gao (Harbin Institute of Technology): With the rapid development of networked and information technologies, time delays have attracted growing attention. Time-delay systems are infinite-dimensional systems, and their stability analysis and feedback control design are very difficult; in particular, reducing the conservatism of delay-dependent analysis and design conditions has become an important research topic in the theory of time-delay systems.
Focusing on time-delay systems of different structures, the project conducted in-depth research on delay-dependent stability and on robust control and filtering under various performance-index constraints, achieving important breakthroughs: it proposed slack-variable methods that reduce the conservatism of delay-dependent stability criteria, obtaining stability criteria with fewer decision variables; it innovatively proved the equivalence of several important stability criteria, revealing the essential connections among different analysis methods; and it innovatively proposed the concept of extended dissipativity, integrating multiple important performance indices into a single framework and unifying controller and filter design results under different performance-index constraints, thereby contributing positively to the development of robust control theory.
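As background for readers unfamiliar with this literature, delay-dependent criteria of the kind described above are typically derived from a Lyapunov-Krasovskii functional. The following is a generic textbook form (our illustration, not a formula taken from the project's papers). For a linear time-delay system $\dot{x}(t) = A x(t) + A_d x(t-h)$, a common choice is

```latex
V(x_t) = x^{\mathsf{T}}(t) P x(t)
       + \int_{t-h}^{t} x^{\mathsf{T}}(s)\, Q\, x(s)\,\mathrm{d}s
       + h \int_{-h}^{0} \int_{t+\theta}^{t} \dot{x}^{\mathsf{T}}(s)\, Z\, \dot{x}(s)\,\mathrm{d}s\,\mathrm{d}\theta,
\qquad P,\, Q,\, Z \succ 0 .
```

Requiring $\dot{V}(x_t) < 0$ along trajectories and bounding the resulting integral terms yields delay-dependent stability conditions expressed as linear matrix inequalities; the slack-variable approach mentioned above introduces free weighting matrices at this bounding step, which is how the conservatism of the criteria is reduced.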
Famous Works: Chinese-English Title Reference
The Voyage of the Beagle
An Essay on the Principle of Population
The Interpretation of Dreams
The History of the Decline and Fall of the Roman Empire
Literary Classics
The Iron Heel
The People of the Abyss
The Sea-Wolf
The Son of the Wolf
White Fang
Benito Cereno
Billy Budd
Moby-Dick (The Whale)
Typee
Paradise Lost
Paradise Regained
A Dream of John Ball and A King's Lesson
News from Nowhere
Blix
McTeague
Moran of the Lady Letty
The Octopus: A Story of California
Uncle Tom's Cabin
Gulliver's Travels
The Battle of the Books and Others
Frankenstein
The Bride of Lammermoor
Ivanhoe
Rob Roy
The Heart of Mid-Lothian
The Antiquary
The Talisman: A Tale of the Crusaders
Waverley
A Lover's Complaint
A Midsummer Night's Dream
All's Well That Ends Well
As You Like It
Cymbeline
King John
King Richard II
King Richard III
Love's Labour's Lost
Measure for Measure
Much Ado About Nothing
Pericles, Prince of Tyre
The Comedy of Errors
King Henry the Fourth
King Henry the Fifth
King Henry the Sixth
King Henry the Eighth
The History of Troilus and Cressida
The Life of Timon of Athens
The Time-Space Extreme Difference Entropy Weighting Method (in English)
The Time-Space Extreme Difference Entropy Weighting Method (TSWEDEM) is a multi-criteria decision-making method used to evaluate the performance of a set of alternatives against multiple criteria. The technique employs a top-down approach that aims to identify the most advantageous alternative satisfying the preferences and constraints of the decision-maker. This paper provides a comprehensive review of the TSWEDEM, including its theoretical background, algorithm, and practical applications.

The TSWEDEM is based on two core concepts: entropy and weighting. Entropy is a measure of the uncertainty or unpredictability of a system, while weighting is a technique used to assign relative importance to different criteria or factors. In the context of the TSWEDEM, entropy measures the degree of difference between the performance of alternatives with respect to each criterion, while weighting incorporates the decision-maker's preferences for each criterion.

The algorithm of the TSWEDEM can be divided into three steps: normalization of the decision matrix, determination of the weighting coefficients, and calculation of the fuzzy comprehensive appraisal. The first step standardizes the decision matrix to avoid dominance by any single criterion. The second step determines the weighting coefficient for each criterion, through expert judgment or other methods. The final step calculates the fuzzy comprehensive appraisal for each alternative, a weighted sum of the normalized scores across criteria.

The advantages of the TSWEDEM include its ability to handle both quantitative and qualitative criteria, to incorporate expert judgment, and to provide a comprehensive evaluation of alternatives.
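The three steps above can be sketched concretely. The exact TSWEDEM formulas are not given in this text, so the following is a minimal illustration assuming the classical range (extreme-difference) normalization and entropy-based objective weighting; the function name, the example matrix, and the epsilon guard are ours:

```python
import numpy as np

def entropy_weight_scores(X, benefit):
    """Range-normalize a decision matrix, derive entropy weights,
    and return a weighted composite score for each alternative.
    X: m alternatives x n criteria; benefit: True per criterion if
    larger is better, False if it is a cost. Assumes every column varies."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    rng = X.max(axis=0) - X.min(axis=0)
    # Step 1: extreme-difference (range) normalization to [0, 1];
    # cost criteria are flipped so that 1 is always best.
    R = np.where(benefit, (X - X.min(axis=0)) / rng, (X.max(axis=0) - X) / rng)
    # Step 2: entropy of each criterion; a tiny epsilon avoids log(0).
    P = (R + 1e-12) / (R + 1e-12).sum(axis=0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)
    w = (1 - e) / (1 - e).sum()  # more dispersed criteria get more weight
    # Step 3: weighted sum gives the comprehensive appraisal.
    return R @ w

scores = entropy_weight_scores(
    [[3.0, 100, 0.2],
     [5.0,  80, 0.4],
     [4.0,  90, 0.3]],
    benefit=[True, True, False])  # third criterion is a cost
```

With these inputs the three alternatives score roughly 0.67, 0.33 and 0.50, so the first alternative ranks best; in a real application, the objective entropy weights could be blended with subjective expert weights, as the second step of the method allows.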
The technique has been successfully applied in a variety of fields, including environmental management, transportation planning, and energy systems analysis. In environmental management, the TSWEDEM has been used to evaluate the performance of different waste management strategies; in transportation planning, to select the best transportation mode for a given route; and in energy systems analysis, to evaluate the performance of different renewable energy technologies.

However, the TSWEDEM has several limitations, including the subjectivity of the weighting process, the potential for inconsistency in expert judgment, and the lack of a clear theoretical foundation. Additionally, the technique can be time-consuming and computationally intensive, particularly when dealing with large and complex decision matrices.

In conclusion, the TSWEDEM is a valuable multi-criteria decision-making method that can help decision-makers evaluate the performance of alternatives against multiple criteria. Its theoretical background, algorithm, and practical applications have been discussed in detail. The technique has several advantages, but also limitations that need to be considered when applying it in practice.
A cryogen-free dilution refrigerator based Josephson qubit measurement system (Institute of Physics, Chinese Academy of Sciences, 2012)
A cryogen-free dilution refrigerator based Josephson qubit measurement system
Rev. Sci. Instrum. 83, 033907 (2012); doi: 10.1063/1.3698001
Published by the American Institute of Physics.

Ye Tian,1 H. F. Yu,1 H. Deng,1 G. M. Xue,1 D. T. Liu,1 Y. F. Ren,1 G. H. Chen,1 D. N. Zheng,1 X. N. Jing,1 Li Lu,1 S. P. Zhao,1 and Siyuan Han2
1 Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
2 Department of Physics and Astronomy, University of Kansas, Lawrence, Kansas 66045, USA
(Received 14 January 2012; accepted 9 March 2012; published online 30 March 2012)

We develop a small-signal measurement system on a cryogen-free dilution refrigerator which is suitable for superconducting qubit studies. Cryogen-free refrigerators have several advantages, such as less manpower for system operation and a large sample space for experiments, but concern remains about whether the noise introduced by the coldhead can
be made sufficiently low. In this work, we demonstrate some effective approaches of acoustic isolation to reduce the noise impact. The electronic circuit that includes the current, voltage, and microwave lines for qubit coherent state measurement is described. For the current and voltage lines, designed to have a low pass band of dc-100 kHz, we show that measurements of a Josephson junction's switching current distribution with a width down to 1 nA, and of quantum coherent Rabi oscillation and Ramsey interference of the superconducting qubit, can be successfully performed. © 2012 American Institute of Physics. [doi: 10.1063/1.3698001]

I. INTRODUCTION

Experimental studies of superconducting qubits require a mK temperature environment with an extremely low electromagnetic (EM) noise level, so that coherent quantum states of qubits can be prepared, maintained, and controlled, and their delicate coherent evolution can be monitored and measured.1-4 So far, most of these experiments are performed using dilution refrigerators that use liquid 4He as cryogen for the first-stage cooling. However, such traditional wet systems have several obvious disadvantages. For instance, even "low-loss" Dewars require refilling every few days, which is not only more expensive but also requires more manpower, infrastructure, and time to conduct experiments. Also, the need for a narrow neck in the helium Dewar, to keep the boil-off to an acceptable level, means that the space available for experimental wiring and other experimental services is limited. Moreover, the cold vacuum seal also means hermetically sealed and cryogenically compatible wiring feedthroughs are required in addition to the room temperature fittings.

There has been much recent interest in dry dilution refrigerators, which use pulse-tube coolers to provide the first-stage cooling instead of the liquid cryogen. For these dry systems, the above-mentioned disadvantages are avoided.
Namely, they need much less manpower to operate and have a much larger sample space. The smaller overall size of the systems is also convenient for magnetic shielding and other lab-space arrangements. In addition, they do not rely on liquid 4He, which is becoming a more expensive natural resource. Presently, there are still fewer users of dry systems than of wet systems. A main concern about using pulse-tube based dilution refrigerators for sensitive small-signal measurements is whether the noise (vibrational, electrical, and acoustic) introduced by the coldhead is sufficiently low. In this paper, we describe the design, construction, and characterization of a system built on an Oxford DR200 cryogen-free dilution refrigerator suitable for experiments that are extremely sensitive to their electromagnetic environments. The paper is organized as follows. We first describe the mechanical constructions which reduce the acoustic vibrations to a very low level. We then present the electronic measurement setup, which includes various filtering, attenuation, and amplification stages for different parts of the qubit quantum state preparation and measurement. Finally, we show that the system is adequate for studying coherent quantum dynamics of superconducting qubits by demonstrating Rabi oscillation and Ramsey fringes in an Al superconducting phase qubit.

II. MECHANICAL CONSTRUCTION

Figure 1(a) shows the front view of our system, with the DR200 refrigerator installed on an aluminum-alloy frame near the center. The refrigerator can reach its base temperature of 8 mK, and temperatures below 20 mK, respectively, before and after all the electronic components and measurement lines described below are installed. It is an elongated version of Oxford's standard design and has a large sample space of 25 cm in diameter and 28 cm in height. To reduce vibrations coupled from the floor, the aluminum-alloy frame is placed on four 0.8 cm thick rubber stands. The refrigerator can be attached to the frame via an
air-spring system to provide further vibration isolation. The turbo pump, originally placed on top of the refrigerator, is moved to the other side of the wall in the next room and is connected to the fridge by a stainless steel pipe and a 1 m long bellows, as can be seen in Fig. 1(b). Both the pipe and the bellows have the same diameter as the inlet of the turbo pump. This setup significantly reduces the vibrations from the turbo pump. Furthermore, the rotary valve is separated from the insert and securely attached to the wall, as can also be seen in Fig. 1(b). Soft plastic hoses are used for the connections to the DR unit wherever needed. Other equipment, including the forepump, the compressor (dry pump), the pulse-tube refrigerator (PTR) compressor, and a water cooler, is located in the next room. The overall arrangement of the system is schematically shown and further explained in Fig. 2.

FIG. 1. Photos of the cryogen-free DR200 dilution refrigerator system suitable for qubit quantum-state measurements. (a) Front view. (b) Top view. The fridge is installed on an aluminum-alloy frame with double acoustic isolation from the ground using rubber stands and air springs. The turbo pump and rotary valve are mechanically decoupled from the fridge. The fridge is also electrically isolated from the pumps, the control instruments, and the gas lines. The thick red arrow in (b) indicates where an accelerometer sensor is placed for vibration measurement.

Vibration levels of the system are measured using an accelerometer in three cases: namely, with the whole system off, with only the turbo pump on, and with both the turbo pump and the PTR on. The results are shown in Fig. 3. The accelerometer is manufactured by Wilcoxon Research (Model 731) and its sensor is placed on the top of the refrigerator, as indicated by a thick red arrow in Fig. 1(b); it measures the vertical acceleration of the system. The sensor is
connected to a preamplifier with a 450-Hz low-pass filter. The output voltage signal of the preamplifier (1 kV corresponds to 9.8 m/s²) is then measured by a spectrum analyzer. In its power spectral density mode, we directly obtain the acceleration power spectral density, in units of (m/s²)²/Hz, or the corresponding data in units of (m/s²)/√Hz, which we call the acceleration spectral density. The velocity spectral density shown in Fig. 3, in units of (m/s)/√Hz, is the latter data divided by ω and averaged over 400 measurements in each case.

FIG. 2. Schematic overall arrangement of the cryogen-free DR200 dilution refrigerator measurement system. (1) Aluminum-alloy frame; (2) PTR coldhead; (3) and (4) pumping line; (5) bellows assembly; (6) turbo pump; (7) rotary valve; (8) compressor; (9) forepump; (10) LN2 cold trap; (11) PTR compressor; (12) and (13) electrically isolated gas line and connectors; (14) rubber stands; (15) air-spring system (optional); (16) sand bag; (17) trilayer μ-metal shielding. Thin blue lines represent the gas lines.

The baseline in Fig. 3 (bottom line) decreases monotonically from 10⁻⁶ (m/s)/√Hz at low frequency (<5 Hz) to 10⁻⁹ (m/s)/√Hz at high frequency (>200 Hz), which is comparable to the data recorded in several scanning tunneling microscopy (STM) labs around the world.⁵ As a result of the separation of the turbo pump from the main body of the refrigerator, we see that the velocity spectral density increases only slightly when the turbo pump is switched on (middle line). On the other hand, it increases significantly, especially above about 100 Hz, when the PTR is also running (top line). This is unavoidable, since the PTR is mechanically integrated into the body of the fridge. At f > 200 Hz, the data are approximately two orders of magnitude above the baseline. The results presented in Sec. IV below show that this level of vibration has a negligible effect on the control and measurement of the coherent quantum dynamics of superconducting phase qubits.

FIG. 3. Velocity spectral density of the dilution refrigerator system measured under three conditions: the whole system off (bottom line), only the turbo pump on (middle line), and both the turbo pump and the PTR on (top line). See text for measurement details. Peaks at 50 Hz and its harmonics and subharmonics are due to power-line interference (not mechanical vibrations).

FIG. 4. Diagram of the electronic measurement system. Three typical kinds of measurement lines are shown, starting from the left side of the diagram: the qubit flux bias and SQUID-detector current lines, the SQUID-detector voltage lines, and the microwave/fast-pulse lines.

III. ELECTRONIC SETUP

The electrical measurement circuit includes three types of lines: the current bias lines, the voltage sensing lines, and the microwave/fast-pulse lines, shown schematically in Fig. 4. The current and voltage lines from room temperature to low temperature, which are used for the qubit flux bias, the superconducting quantum interference device (SQUID)-detector current bias, and the SQUID-detector voltage measurement, are composed of flexible coaxial cables, electromagnetic interference (EMI) filters, resistance-capacitance (RC) filters, and copper powder filters⁶,⁷ down to the sample platform. Flexible coaxial cables manufactured by GVL Cryoengineering, with a bandwidth of dc to 300 MHz, are used. The cables have a φ0.65 mm CuNi outer conductor, Teflon dielectric, and central conductors made of ultra-low-temperature-coefficient φ0.1 mm brass Ms63 (used above the 4 K plate) and of φ0.1 mm superconducting NbTi (used below the 4 K plate). The EMI filters, placed outside the fridge at room temperature, are VLFX-470 (VLFX) low-pass filters manufactured by Mini-Circuits, used to filter out high-frequency noise from the room-temperature lines. The characteristic impedance of the VLFX coaxial filter is 50 Ω, and its passband is dc to 470 MHz, with attenuation greater than 40 dB between 2 and 20 GHz. RC filters are inserted into the coaxial cables and are
thermally anchored to the 4 K plate. All RC filters have a 3 dB cutoff frequency of 100 kHz. To reduce Joule heating, R (C) is chosen to be 1 kΩ (2040 pF) for the current leads and 10 kΩ (204 pF) for the voltage leads. The copper powder filters are made following the technique developed by Lukashenko and Ustinov.⁷ Their 3 dB cutoff frequency is around 80 MHz, and they provide more than 60 dB of attenuation at 1 GHz. The typical characteristics and final appearance of these filters are shown in Fig. 5 and its inset. These copper powder filters are mounted next to the sample platform, which is at the mixing chamber temperature. Low-noise preamplifiers are used in the voltage lines of the detector SQUIDs. These preamplifiers, with a gain of 1000, are made from two Analog Devices AD624 instrumentation amplifiers (with gains set to 10 and 100, respectively) in series. The bandwidth of these preamps is 100 kHz, and their noise characteristic is shown in Fig. 6. Finally, the microwave lines, shown schematically in Fig. 4, are composed of 2.2-mm-diameter semirigid coaxial cables manufactured by Keycom, with non-magnetic stainless steel inner/outer conductors, and Mini-Circuits attenuators (frequency range 0-18 GHz) to increase the signal-to-noise ratio. The total attenuation in each line is typically 40 dB.

FIG. 5. Typical attenuation versus frequency characteristic of the copper powder filters. The attenuation at 120 MHz is about 3 dB and is more than 60 dB above 1 GHz. The length of the filter is 7 cm, as shown in the inset.

FIG. 6. Noise characteristic of a voltage preamplifier with a gain of 1000 made from two AD624 instrumentation amplifiers in series. The noise spectral density is less than 10 nV rms/√Hz (referred to input) above ∼10 Hz up to 100 kHz (the flat part above 1.6 kHz is not shown). The inset shows the final assembly (the longer one) together with that of an isolation amplifier with unity gain (the shorter one).

All coaxial cables and attenuators are thermally anchored at various temperature stages (namely, the 70 K, 4 K, still, 100 mK, and mixing
chamber plates). The thermal anchors for the flexible coaxial cables are made of two copper plates with a 3.5-cm section of coaxial cable clamped between them. As can be seen in Figs. 1(a) and 1(b), a smaller EM shielding box (Faraday cage) made of cold-rolled steel sheet is placed near the top of the fridge, in which the EMI filters and preamplifiers are located. A trilayer μ-metal magnetic shield, also illustrated in Fig. 2, is used to reduce the ambient static field to about 20 nT. The sample holder has twelve 50 Ω coplanar waveguides (CPWs), allowing signals with frequencies up to 18 GHz to pass through with minimal reflections. The fridge is galvanically isolated from the vacuum pumps, the control instruments, and the gas lines, so that the system can be electrically connected to a dedicated ground post. We also use an optical coupler in the control line to achieve galvanic isolation between the refrigerator and the control rack (see Fig. 1). Our tests showed that this setup is adequate for measuring the coherent quantum dynamics of Josephson qubit circuits, as described in detail below.

IV. QUANTUM COHERENT-STATE MEASUREMENTS

In order to qualify our system for qubit experiments, we chose to measure the coherent dynamics of a superconducting phase qubit, which is extremely sensitive to its EM environment. As an initial characterization of the electronic system, we measured the switching current distribution P(I) of a Nb dc-SQUID with a loop inductance much smaller than the Josephson inductance of the junctions, so that the SQUID behaves as a single junction with critical current I_c tunable by an applied magnetic field.⁸ From the measured width σ of P(I) versus temperature, we found that σ continuously decreases to as low as 4 nA at 20 mK upon reducing I_c, and to as low as 1 nA near 1 K due to phase diffusion,⁹,¹⁰ indicating a current noise level below 1 nA in our measurement circuit. The flux-biased phase qubit had a layout and parameters similar to those described in Ref. 11 and was coupled to an asymmetric dc-SQUID
to read out its quantum state. A current bias line was used to apply an external magnetic flux to the qubit loop, which controls the shape of the potential energy landscape and the level separation E_10 between the ground and first excited states of the qubit. For a given E_10, a short resonant microwave pulse of variable length with frequency f_10 = E_10/h was applied, which coherently transfers population between the ground state and the first excited state. Consequently, we observed Rabi oscillations by keeping the amplitude of the microwave field constant while varying the duration of the modulation pulses. Data obtained with a microwave frequency of about 16 GHz are shown in Fig. 7(a) as symbols. A simple fit to an exponentially damped sinusoidal function yields a decay time of 70 ns. In Fig. 7(b), the Rabi frequency versus microwave amplitude is displayed, showing the expected linear dependence.

FIG. 7. (a) Rabi oscillation measured from an rf-SQUID type phase qubit made of Al Josephson junctions (symbols). The applied microwave frequency is ∼16 GHz, and a decay time of 70 ns is obtained from the exponentially damped sinusoidal oscillation fit (line). (b) Rabi frequency versus microwave amplitude (symbols). The line is a guide to the eye.

FIG. 8. (a) Ramsey fringe measured from an rf-SQUID type phase qubit made of Al Josephson junctions (symbols). The applied microwave frequency is ∼16 GHz, and a decay time of 38 ns is obtained from the exponentially damped sinusoidal oscillation fit (line). (b) Ramsey frequency versus detuning (symbols).
The line is a guide to the eye.

Ramsey interference was also observed in the phase qubit by applying two π/2 pulses separated by a time delay τ. Manipulation of the qubit state in the Ramsey pulse sequence can be explained in terms of the Bloch sphere and Bloch vectors.³ The first π/2 pulse rotates the qubit state vector to the x-y plane, where it is allowed to evolve freely around the z axis for a duration τ. Subsequently, the second π/2 pulse is applied, followed immediately by qubit state readout. When the microwave frequency is detuned slightly, by an amount δf away from f_10, the final state of the qubit at the end of the second π/2 pulse will be a superposition of the ground and first excited states with probabilities cos²(πδfτ) and sin²(πδfτ), respectively. The sinusoidal dependence of the state probabilities on time can then be obtained by varying τ. Figure 8(a) shows the measured Ramsey fringe with a period of 1/δf for δf = 100 MHz. Fitting the data to an exponentially damped sinusoidal oscillation yields a decay (dephasing) time of T_2* = 38 ns. The linear dependence of the Ramsey frequency on detuning can be seen in Fig. 8(b). These results are comparable to those obtained from the same phase qubit using a qubit-experiment-qualified wet dilution refrigerator, and thus clearly demonstrate that our cryogen-free dilution refrigerator provides a good low-temperature, low-noise platform for delicate experiments such as measuring the coherent quantum dynamics of the Josephson phase qubit described above. It is worth mentioning that the overall design and construction of our system, built in a laboratory that is not electromagnetically shielded, are relatively easy to achieve at minimal additional cost. The present system should also be useful for low-temperature quantum-state measurements of many other physical systems whose properties and dynamics are very sensitive to fluctuations in their environment.

ACKNOWLEDGMENTS

We are grateful to Junyun Li of the Oxford Instruments Shanghai Office for his
continuous support of this work, and to H. J. Gao, L. Shan, and C. Ren for their kind help during the vibration measurements. The work at the Institute of Physics was supported by the National Natural Science Foundation of China (Grant Nos. 10874231 and 11104340) and the Ministry of Science and Technology of China (Grant Nos. 2009CB929102 and 2011CBA00106).

1. Y. Makhlin, G. Schön, and A. Shnirman, Rev. Mod. Phys. 73, 357 (2001).
2. A. Wallraff, A. Lukashenko, C. Coqui, A. Kemp, T. Duty, and A. V. Ustinov, Rev. Sci. Instrum. 74, 3740 (2003).
3. J. Clarke and F. K. Wilhelm, Nature (London) 453, 1031 (2008).
4. J. Q. You and F. Nori, Nature (London) 474, 589 (2011).
5. J. E. Hoffman, Ph.D. dissertation, University of California, Berkeley, 2003.
6. J. M. Martinis, M. H. Devoret, and J. Clarke, Phys. Rev. B 35, 4682 (1987).
7. A. Lukashenko and A. V. Ustinov, Rev. Sci. Instrum. 79, 014701 (2008).
8. H. F. Yu, X. B. Zhu, Z. H. Peng, W. H. Cao, D. J. Cui, Y. Tian, G. H. Chen, D. N. Zheng, X. N. Jing, Li Lu, S. P. Zhao, and S. Y. Han, Phys. Rev. B 81, 144518 (2010).
9. S.-X. Li, W. Qiu, S. Han, Y. F. Wei, X. B. Zhu, C. Z. Gu, S. P. Zhao, and H. B. Wang, Phys. Rev. Lett. 99, 037002 (2007).
10. H. F. Yu, X. B. Zhu, Z. H. Peng, Y. Tian, D. J. Cui, G. H. Chen, D. N. Zheng, X. N. Jing, Li Lu, S. P. Zhao, and S. Han, Phys. Rev. Lett. 107, 067004 (2011).
11. M. Neeley, R. C. Bialczak, M. Lenander, E. Lucero, M. Mariantoni, A. D. O'Connell, D. Sank, H. Wang, M. Weides, J. Wenner, Y. Yin, T. Yamamoto, A. N. Cleland, and J. M. Martinis, Nature (London) 467, 570 (2010).
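The Rabi and Ramsey decay times quoted in Sec. IV above come from fits of the measured state probabilities to exponentially damped sinusoids. A minimal sketch of such a fit on synthetic data (the model form, parameter values, and noise level here are illustrative assumptions, not the authors' analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, a, f, phi, tau, c):
    """Exponentially damped sinusoid: a*exp(-t/tau)*cos(2*pi*f*t + phi) + c."""
    return a * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi) + c

# Synthetic "Rabi-like" data: 50 MHz oscillation decaying with tau = 70 ns.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 200e-9, 401)                   # pulse duration, s
data = damped_sine(t, 0.45, 50e6, 0.0, 70e-9, 0.5)  # ideal signal
data = data + rng.normal(0.0, 0.02, t.size)         # add measurement noise

p0 = [0.4, 50e6, 0.1, 50e-9, 0.5]                   # rough initial guess
popt, _ = curve_fit(damped_sine, t, data, p0=p0)
print(f"fitted decay time: {popt[3] * 1e9:.1f} ns")  # close to the true 70 ns
```

The same model, with the oscillation frequency set by the detuning δf instead of the drive amplitude, applies to the Ramsey-fringe fit that yields T_2* = 38 ns.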
Laser Ranging to the Moon, Mars and Beyond
arXiv:gr-qc/0411082v1 16 Nov 2004

Slava G. Turyshev, James G. Williams, Michael Shao, John D. Anderson
Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA

Kenneth L. Nordtvedt, Jr.
Northwest Analysis, 118 Sourdough Ridge Road, Bozeman, MT 59715, USA

Thomas W. Murphy, Jr.
Physics Department, University of California, San Diego, 9500 Gilman Dr., La Jolla, CA 92093, USA

Abstract

Current and future optical technologies will aid exploration of the Moon and Mars while advancing fundamental physics research in the solar system. Technologies and possible improvements in the laser-enabled tests of various physical phenomena are considered, along with a space architecture that could be the cornerstone for robotic and human exploration of the solar system. In particular, accurate ranging to the Moon and Mars would not only lead to the construction of a new space communication infrastructure enabling improved navigational accuracy, but would also provide a significant improvement in several tests of gravitational theory: the equivalence principle, geodetic precession, the PPN parameters β and γ, and possible variation of the gravitational constant G. Other tests would become possible with an optical architecture that would allow proceeding from meter to centimeter to millimeter range accuracies on interplanetary distances. This paper discusses the current state and the future improvements in the tests of relativistic gravity with Lunar Laser Ranging (LLR). We also consider precision gravitational tests with future laser ranging to Mars and discuss the optical design of the proposed Laser Astrometric Test of Relativity (LATOR) mission. We emphasize that already existing capabilities can offer significant improvements not only in tests of fundamental physics, but may also establish the infrastructure for space exploration in the near future. Looking to future exploration, what characteristics are desired for the next
generation of ranging devices, what is the optimal architecture that would benefit both space exploration and fundamental physics, and what fundamental questions can be investigated? We try to answer these questions.

1 Introduction

The recent progress in fundamental physics research was enabled by significant advancements in many technological areas, one example being the continuing development of the NASA Deep Space Network, critical infrastructure for precision navigation and communication in space. A demonstration of this progress is the recent Cassini solar conjunction experiment [8, 6], which was possible only because of the use of Ka-band (∼33.4 GHz) spacecraft radio-tracking capabilities. The experiment was part of the ancillary science program, a by-product of this new radio-tracking technology. Because of the much higher data rate and, thus, the larger data volume delivered from large distances, the higher communication frequency was a very important mission capability. The higher frequencies are also less affected by dispersion in the solar plasma, thus allowing more extensive coverage where deep space navigation is concerned. There is still a possibility of moving to even higher radio frequencies, say to ∼60 GHz; however, this would put us closer to the limit that the Earth's atmosphere imposes on signal transmission. Beyond these frequencies, radio communication with distant spacecraft will be inefficient. The next step is switching to optical communication. Lasers, with their spatial coherence, narrow spectral emission, high power, and well-defined spatial modes, are highly useful for many space applications. In free space, optical laser communication (lasercomm) would have an advantage over conventional radio communication: lasercomm would provide not only significantly higher data rates (on the order of a few Gbps), it would also allow more precise navigation and attitude control. The latter is of great importance for manned missions in accord
with the “Moon, Mars and Beyond” Space Exploration Initiative. In fact, precision navigation, attitude control, landing, resource location, 3-dimensional imaging, surface scanning, formation flying, and many other areas are thought of only in terms of laser-enabled technologies. Here we investigate how a near-future free-space optical communication architecture might benefit progress in gravitational and fundamental physics experiments performed in the solar system. This paper focuses on current and future optical technologies and methods that will advance fundamental physics research in the context of solar system exploration. There are many activities focused on the design of an optical transceiver system that would work at distances comparable to that between the Earth and Mars and be tested on the Moon. This paper summarizes the required capabilities for such a system. In particular, we discuss how accurate laser ranging to the neighboring celestial bodies, the Moon and Mars, would not only lead to the construction of a new space communication infrastructure with much improved navigational accuracy, it would also provide a significant improvement in several tests of gravitational theory.
Looking to future exploration, we address the characteristics that are desired for the next generation of ranging devices; we focus on the optimal architecture that would benefit both space exploration and fundamental physics, and discuss the questions of critical importance that can be investigated. This paper is organized as follows: Section 2 discusses the current state and the future performance expected of the LLR technology. Section 3 addresses the possibility of improving tests of gravitational theories with laser ranging to Mars. Section 4 addresses the next logical step, interplanetary laser ranging; we discuss the mission proposal for the Laser Astrometric Test of Relativity (LATOR) and present a design for its optical receiver system. Section 5 addresses a proposal for a new multi-purpose space architecture based on optical communication; we present a preliminary design and discuss the implications of this new proposal for tests of fundamental physics. We close with a summary and recommendations.

2 LLR Contribution to Fundamental Physics

During more than 35 years of its existence, lunar laser ranging has become a critical technique available for precision tests of gravitational theory. The 20th-century progress in three seemingly unrelated areas, quantum optics, astronomy, and human space exploration, led to the construction of this unique interplanetary instrument for conducting very precise tests of fundamental physics. In this section we discuss the current state of LLR tests of relativistic gravity and explore what could be possible in the near future.

2.1 Motivation for Precision Tests of Gravity

The nature of gravity is fundamental to our understanding of the structure and evolution of the universe. This importance motivates various precision tests of gravity, both in laboratories and in space. Most of the experimental underpinning for theoretical gravitation has come from experiments conducted in the solar system. Einstein's general theory of relativity (GR) began its
empirical success in 1915 by explaining the anomalous perihelion precession of Mercury's orbit, using no adjustable theoretical parameters. Eddington's observations of the gravitational deflection of light during a solar eclipse in 1919 confirmed the doubling of the deflection angles predicted by GR as compared to Newtonian and Equivalence Principle (EP) arguments. Following these beginnings, the general theory of relativity has been verified at ever-higher accuracy. Thus, microwave ranging to the Viking landers on Mars yielded an accuracy of ∼0.2% in the gravitational time-delay tests of GR [48, 44, 49, 50]. Recent spacecraft and planetary microwave radar observations reached an accuracy of ∼0.15% [4, 5]. The astrometric observations of the deflection of quasar positions with respect to the Sun, performed with Very-Long-Baseline Interferometry (VLBI), improved the accuracy of the tests of gravity to ∼0.045% [45, 51]. Lunar Laser Ranging (LLR), the continuing legacy of the Apollo program, has provided verification of GR, improving the accuracy to ∼0.011% via precision measurements of the lunar orbit [62, 63, 30, 31, 32, 35, 24, 36, 4, 68]. The recent time-delay experiments with the Cassini spacecraft at a solar conjunction have tested gravity to a remarkable accuracy of 0.0023% [8] in measuring the deflection of microwaves by solar gravity. Thus, almost ninety years after general relativity was born, Einstein's theory has survived every test. This rare longevity, and the absence of any adjustable parameters, does not mean that the theory is absolutely correct, but it serves to motivate more sensitive tests searching for its expected violation. The solar conjunction experiments with the Cassini spacecraft have dramatically improved the accuracy of solar system tests of GR [8]. The reported accuracy of 2.3×10⁻⁵ in measuring the Eddington parameter γ opens a new realm for gravitational tests, especially those motivated by the ongoing progress in scalar-tensor theories of gravity. In particular, scalar-tensor extensions of
gravity that are consistent with present cosmological models [15, 16, 17, 18, 19, 20, 39] predict deviations of this parameter from its GR value of unity at levels of 10⁻⁵ to 10⁻⁷. Furthermore, the continuing inability to unify gravity with the other forces indicates that GR should be violated at some level. The Cassini result, together with these theoretical predictions, motivates new searches for possible GR violations; they also provide a robust theoretical paradigm and constructive guidance for experiments that would push beyond the present experimental accuracy for parameterized post-Newtonian (PPN) parameters (for details on the PPN formalism see [60]). Thus, in addition to experiments that probe the GR prediction for the curvature of the gravity field (given by the parameter γ), any experiment pushing the accuracy in measuring the degree of non-linearity of gravity superposition (given by another Eddington parameter, β) will also be of great interest. This is a powerful motive for tests of gravitational physics phenomena at improved accuracies. Analyses of laser ranges to the Moon have provided increasingly stringent limits on any violation of the Equivalence Principle (EP); they have also enabled very accurate measurements of a number of relativistic gravity parameters.

2.2 LLR History and Scientific Background

LLR has a distinguished history [24, 9] dating back to the placement of a retroreflector array on the lunar surface by the Apollo 11 astronauts. Additional reflectors were left by the Apollo 14 and Apollo 15 astronauts, and two French-built reflector arrays were placed on the Moon by the Soviet Luna 17 and Luna 21 missions. Figure 1 shows the weighted RMS residual for each year. Early accuracies using the McDonald Observatory's 2.7 m telescope hovered around 25 cm.
Equipment improvements decreased the ranging uncertainty to ∼15 cm later in the 1970s. In 1985 the 2.7 m ranging system was replaced with the McDonald Laser Ranging System (MLRS). In the 1980s, ranges were also received from Haleakala Observatory on the island of Maui in the Hawaiian chain and from the Observatoire de la Côte d'Azur (OCA) in France. Haleakala ceased operations in 1990. A sequence of technical improvements decreased the range uncertainty to the current ∼2 cm. The 2.7 m telescope had a greater light-gathering capability than the newer, smaller-aperture systems, but the newer systems fired more frequently and had a much improved range accuracy. The new systems do not distinguish returning photons against the bright background near full Moon, which the 2.7 m telescope could do, though there are some modern eclipse observations. The lasers currently used in the ranging operate at 10 Hz, with a pulse width of about 200 psec; each pulse contains ∼10¹⁸ photons. Under favorable observing conditions, a single reflected photon is detected once every few seconds. For data processing, the ranges represented by the returned photons are statistically combined into normal points, each normal point comprising up to ∼100 photons. A total of 15,553 normal points were collected until March 2004. The measured round-trip travel times Δt are two-way, but in this paper equivalent ranges in length units are c Δt/2. The conversion between time and length (for distance, residuals, and data accuracy) uses 1 nsec = 15 cm. The ranges of the early 1970s had accuracies of approximately 25 cm. By 1976 the accuracies of the ranges had improved to about 15 cm. Accuracies improved further in the mid-1980s; by 1987 they were 4 cm, and the present accuracies are ∼2 cm. One immediate result of lunar ranging was the great improvement in the accuracy of the lunar ephemeris [62] and lunar science [67]. LLR measures the range from an observatory on the Earth to a retroreflector on the Moon.
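The time-to-length conventions above can be sketched in a few lines (a minimal illustration; the 2.56 s round-trip time used in the example is an approximate mean Earth-Moon value assumed here, not a figure from this paper):

```python
# Equivalent one-way range from a measured two-way travel time: range = c * dt / 2.
C = 299_792_458.0  # speed of light, m/s

def round_trip_to_range_m(dt_seconds: float) -> float:
    """Convert a two-way (round-trip) travel time to an equivalent one-way range in meters."""
    return C * dt_seconds / 2.0

# 1 ns of two-way time corresponds to ~15 cm of one-way range, as stated in the text:
print(round_trip_to_range_m(1e-9))        # ~0.15 m

# An assumed ~2.56 s round trip gives roughly the mean Earth-Moon distance:
print(round_trip_to_range_m(2.56) / 1e3)  # ~383,734 km
```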
For the Earth and Moon orbiting the Sun, the scale of relativistic effects is set by the ratio (GM/rc²) ≃ v²/c² ∼ 10⁻⁸. The center-to-center distance of the Moon from the Earth, with mean value 385,000 km, is variable due to such things as eccentricity, the attraction of the Sun, planets, and the Earth's bulge, and relativistic corrections. In addition to the lunar orbit, the range from an observatory on the Earth to a retroreflector on the Moon depends on the position in space of the ranging observatory and the targeted lunar retroreflector. Thus, the orientation of the rotation axes and the rotation angles of both bodies are important, with tidal distortions, plate motion, and relativistic transformations also coming into play. To extract the gravitational physics information of interest, it is necessary to accurately model a variety of effects [68]. For a general review of LLR see [24]. A comprehensive paper on tests of gravitational physics is [62]. A recent test of the EP is in [4], and other GR tests are in [64]. An overview of the LLR gravitational physics tests is given by Nordtvedt [37]. Reviews of various tests of relativity, including the contribution by LLR, are given in [58, 60]. Our recent paper describes the model improvements needed to achieve mm-level accuracy for LLR [66]. The most recent LLR results are given in [68].

Figure 1: Historical accuracy of LLR data from 1970 to 2004.

2.3 Tests of Relativistic Gravity with LLR

LLR offers very accurate laser ranging (weighted rms currently ∼2 cm, or ∼5×10⁻¹¹ in fractional accuracy) to retroreflectors on the Moon. Analysis of these very precise data contributes to many areas of fundamental and gravitational physics. Thus, these high-precision studies of the Earth-Moon-Sun system provide the most sensitive tests of several key properties of weak-field gravity, including Einstein's Strong Equivalence Principle (SEP), on which general relativity rests (in fact, LLR is the only current test of the SEP). LLR data have yielded the strongest limits to date on variability of the gravitational
constant (the way gravity is affected by the expansion of the universe) and the best measurement of the de Sitter precession rate. In this section we discuss these tests in more detail.

2.3.1 Tests of the Equivalence Principle

The Equivalence Principle, the exact correspondence of gravitational and inertial masses, is a central assumption of general relativity and a unique feature of gravitation. EP tests can therefore be viewed in two contexts: as tests of the foundations of general relativity, or as searches for new physics. As emphasized by Damour [12, 13], almost all extensions to the standard model of particle physics (with the best-known extension offered by string theory) generically predict new forces that would show up as apparent violations of the EP. The weak form of the EP (the WEP) states that the gravitational properties of strong and electro-weak interactions obey the EP. In this case the relevant test-body differences are their fractional nuclear-binding differences, their neutron-to-proton ratios, their atomic charges, etc. General relativity, as well as other metric theories of gravity, predicts that the WEP is exact. However, extensions of the Standard Model of particle physics that contain new macroscopic-range quantum fields predict quantum exchange forces that will generically violate the WEP, because they couple to generalized 'charges' rather than to mass/energy as does gravity [17, 18].
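The SEP bookkeeping discussed in this section can be made concrete in a few lines (an illustrative sketch: the Earth self-energy fraction is the value quoted later in this section, the Moon value of −0.19×10⁻¹⁰ is the standard literature figure and is an assumption here, and the η value is the LLR result quoted below):

```python
# Gravitational self-energy fractions E/(Mc^2) for the Earth and the Moon.
# Earth value as quoted in the text; Moon value assumed from the standard literature.
E_OVER_MC2_EARTH = -4.64e-10
E_OVER_MC2_MOON = -0.19e-10

def sep_mass_ratio_difference(eta: float) -> float:
    """Earth-Moon difference in M_G/M_I for SEP-violation parameter eta,
    using (M_G/M_I) = 1 + eta * E/(Mc^2) for each body."""
    return eta * (E_OVER_MC2_EARTH - E_OVER_MC2_MOON)

# With the LLR value eta ~ 4.4e-4 quoted later in this section:
print(sep_mass_ratio_difference(4.4e-4))  # ~ -2.0e-13, consistent with the quoted LLR result
```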
WEP tests can be conducted with laboratory or astronomical bodies, because the relevant differences are in the test-body compositions. Easily the most precise tests of the EP are made by simply comparing the free-fall accelerations, a1 and a2, of different test bodies. For the case when the self-gravity of the test bodies is negligible, and for a uniform external gravity field with the bodies at the same distance from the source of gravity, the expression for the Equivalence Principle takes its most elegant form:

Δa/a = 2(a1 − a2)/(a1 + a2) = (M_G/M_I)_1 − (M_G/M_I)_2,   (1)

where M_G and M_I represent the gravitational and inertial masses of each body. The sensitivity of the EP test is determined by the precision of the differential acceleration measurement divided by the degree to which the test bodies differ (e.g., in composition). The strong form of the EP (the SEP) extends the principle to cover the gravitational properties of gravitational energy itself. In other words, it is an assumption about the way that gravity begets gravity, i.e., about the non-linear property of gravitation. Although general relativity assumes that the SEP is exact, alternative metric theories of gravity, such as those involving scalar fields, and other extensions of gravity theory typically violate the SEP [30, 31, 32, 35]. For the SEP case, the relevant test-body differences are the fractional contributions to their masses by gravitational self-energy. Because of the extreme weakness of gravity, SEP test bodies that differ significantly must have astronomical sizes. Currently the Earth-Moon-Sun system provides the best arena for testing the SEP. The development of the parameterized post-Newtonian formalism [31, 56, 57] allows one to describe within a common framework the motion of celestial bodies in external gravitational fields within a wide class of metric theories of gravity. Over the last 35 years, the PPN formalism has become a useful framework for testing the SEP for extended bodies. In that formalism, the ratio of passive gravitational to inertial mass to the first
order is given by [30, 31]:

M_G/M_I = 1 + η E/(Mc²),   (2)

where η is the SEP violation parameter (discussed below), M is the mass of a body, and E is its gravitational binding or self-energy:

E/(Mc²) = −G/(2Mc²) ∫_V_B d³x d³y ρ_B(x) ρ_B(y)/|x − y|.   (3)

For the Earth, [E/(Mc²)]_E = −4.64×10⁻¹⁰, and for the Moon, [E/(Mc²)]_m = −0.19×10⁻¹⁰, where the subscripts E and m denote the Earth and the Moon, respectively. The relatively small bodies used in laboratory experiments possess a negligible amount of gravitational self-energy, and therefore such experiments indicate nothing about the equality of the gravitational self-energy contributions to the inertial and passive gravitational masses of the bodies [30]. To test the SEP one must utilize planet-sized extended bodies, for which the ratio of Eq. (3) is considerably higher. The dynamics of the three-body Sun-Earth-Moon system in the solar-system barycentric inertial frame was used to search for the effect of a possible violation of the Equivalence Principle. In this frame, the quasi-Newtonian acceleration of the Moon (m) with respect to the Earth (E), a = a_m − a_E, contains, beyond the Newtonian term −μ* r/r³, terms in which the ratios (M_G/M_I)_m and (M_G/M_I)_E couple the solar attraction, μ_S r_Sm/r³_Sm and μ_S r_SE/r³_SE, to the relative lunar motion; with M_G/M_I = 1 + η E/(Mc²), a nonzero η therefore produces a perturbation proportional to the difference [E/(Mc²)]_E − [E/(Mc²)]_m. Here n denotes the sidereal mean motion of the Moon around the Earth, n′ the sidereal mean motion of the Earth around the Sun, and a′ the radius of the orbit of the Earth around the Sun (assumed circular). The argument D = (n − n′)t + D₀, with near-synodic period, is the mean longitude of the Moon minus the mean longitude of the Sun, and is zero at new Moon. (For a more precise derivation of the lunar range perturbation due to the SEP-violating acceleration term, consult [62].) Any anomalous radial perturbation will be proportional to cos D; expressed in terms of η, the radial perturbation is δr ∼ 13η cos D meters [38, 21, 22]. This effect, generalized to all similar three-body situations, is called the "SEP-polarization effect." LLR investigates the SEP by looking for a displacement of the lunar orbit along the direction to the Sun. The equivalence principle can
be split into two parts: the weak equivalence principle tests the sensitivity to composition, and the strong equivalence principle checks the dependence on mass. There are laboratory investigations of the weak equivalence principle (at the University of Washington) which are about as accurate as LLR [7,1]. LLR is the dominant test of the strong equivalence principle. The most accurate test of the SEP violation effect is presently provided by LLR [61,48,23], and also in [24,62,63,4]. Recent analysis of LLR data gives a test of the EP of ∆(M_G/M_I)_EP = (−1.0±1.4)×10⁻¹³ [68]. This result corresponds to a test of the SEP of ∆(M_G/M_I)_SEP = (−2.0±2.0)×10⁻¹³, with the SEP violation parameter η = 4β−γ−3 found to be η = (4.4±4.5)×10⁻⁴. Using the recent Cassini result for the PPN parameter γ, the PPN parameter β is determined at the level of β−1 = (1.2±1.1)×10⁻⁴.

2.3.2 Other Tests of Gravity with LLR

LLR data yielded the strongest limits to date on variability of the gravitational constant (the way gravity is affected by the expansion of the universe), the best measurement of the de Sitter precession rate, and is relied upon to generate accurate astronomical ephemerides. The possibility of a time variation of the gravitational constant, G, was first considered by Dirac in 1938 on the basis of his large number hypothesis, and later developed by Brans and Dicke in their theory of gravitation (for more details consult [59,60]). Variation might be related to the expansion of the Universe, in which case Ġ/G = σH₀, where H₀ is the Hubble constant and σ is a dimensionless parameter whose value depends on both the gravitational constant and the cosmological model considered. Revival of interest in Brans-Dicke-like theories, with a variable G, was partially motivated by the appearance of superstring theories, where G is considered to be a dynamical quantity [26]. Two limits on a change of G come from LLR and planetary ranging. This is the second most important gravitational physics result that LLR provides. GR does not predict a changing G, but some other theories
do, thus testing for this effect is important. The current LLR result Ġ/G = (4±9)×10⁻¹³ yr⁻¹ is the most accurate limit published [68]. The Ġ/G uncertainty is 83 times smaller than the inverse age of the universe, t₀ = 13.4 Gyr, using the value for the Hubble constant H₀ = 72 km/sec/Mpc from the WMAP data [52]. The uncertainty in Ġ/G is improving rapidly because its sensitivity depends on the square of the data span. This fact puts LLR, with its more than 35 years of history, at a clear advantage over other experiments. LLR has also provided the only accurate determination of the geodetic precession. Ref. [68] reports a test of geodetic precession which, expressed as a relative deviation from GR, is K_gp = −0.0019±0.0064. The GP-B satellite should provide improved accuracy over this value, if that mission is successfully completed. LLR also has the capability of determining PPN β and γ directly from the point-mass orbit perturbations. A future possibility is detection of the solar J₂ from LLR data combined with the planetary ranging data. Also possible are dark matter tests, looking for any departure from the inverse square law of gravity, and checking for a variation of the speed of light. The accurate LLR data have been able to quickly eliminate several suggested alterations of physical laws. The precisely measured lunar motion is a reality that any proposed laws of attraction and motion must satisfy.

The above investigations are important to gravitational physics, and future LLR data will improve them. Future LLR data of current accuracy would continue to shrink the uncertainty of Ġ because of the quadratic dependence on data span. The equivalence principle results would improve more slowly. Making a big improvement in the equivalence principle uncertainty requires improved range accuracy, and that is the motivation for constructing the APOLLO ranging facility in New Mexico.

2.4 Future LLR Data and APOLLO Facility

It is essential that acquisition of new LLR data continue in the
future. Accuracies of ∼2 cm are now achieved, and further very useful improvement is expected. Inclusion of improved data into LLR analyses would allow a correspondingly more precise determination of the gravitational physics parameters under study. LLR has remained a viable experiment with fresh results over 35 years because the data accuracies have improved by an order of magnitude (see Figure 1). There are prospects for a future LLR station that would provide another order of magnitude improvement. The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) is a new LLR effort designed to achieve mm range precision and corresponding order-of-magnitude gains in measurements of fundamental physics parameters. For the first time in LLR history, using a 3.5 m telescope, the APOLLO facility will push LLR into a new regime of multiple photon returns with each pulse, enabling millimeter range precision to be achieved [29,66]. The anticipated mm-level range accuracy expected from APOLLO has the potential to test the EP with a sensitivity approaching 10⁻¹⁴. This accuracy would yield sensitivity for the parameter β at the level of ∼5×10⁻⁵, and measurements of the relative change in the gravitational constant, Ġ/G, would be ∼0.1% of the inverse age of the universe. The overwhelming advantage APOLLO has over current LLR operations is a 3.5 m astronomical quality telescope at a good site. The site in southern New Mexico offers high altitude (2780 m) and very good atmospheric "seeing" and image quality, with a median image resolution of 1.1 arcseconds. Both the image sharpness and the large aperture conspire to deliver more photons onto the lunar retroreflector and to receive more of the photons returning from the reflectors. Compared to current operations that receive, on average, fewer than 0.01 photons per pulse, APOLLO should be well into the multi-photon regime, with perhaps 5–10 return photons per pulse. With this signal rate, APOLLO will be efficient at finding and tracking the lunar return, yielding hundreds of times more photons
in an observation than current operations deliver, in addition to a significant reduction in statistical error. New retroreflectors on the Moon (and later on Mars) can offer significant navigational accuracy for many space vehicles on their approach to the lunar surface or during their flight around the Moon, but they also will contribute significantly to fundamental physics research. The future of lunar ranging might take two forms, namely passive retroreflectors and active transponders. The advantages of new installations of passive retroreflector arrays are their long life and simplicity. The disadvantages are the weak returned signal and the spread of the reflected pulse arising from lunar librations (apparent changes in orientation of up to 10 degrees). Insofar as the photon timing error budget is dominated by the libration-induced pulse spread, as is the case in modern lunar ranging, the laser and timing system parameters do not influence the net measurement uncertainty, which simply scales as 1/√N, with N the number of return photons.

3 Laser Ranging to Mars

There are three different experiments that can be done with accurate ranges to Mars: a test of the SEP (similar to LLR), a solar conjunction experiment measuring the deflection of light in the solar gravity, similar to the Cassini experiment, and a search for temporal variation in the gravitational constant G. The Earth-Mars-Sun-Jupiter system allows for a sensitive test of the SEP which is qualitatively different from that provided by LLR [3]. Furthermore, the outcome of these ranging experiments has the potential to improve the values of two relativistic parameters: a combination of PPN parameters, η (via the test of the SEP), and a direct observation of the PPN parameter γ (via Shapiro time delay or solar conjunction experiments). (This is quite different from LLR, as the small variation of the Shapiro time delay prohibits a very accurate independent determination of the parameter γ.) The Earth-Mars range would also provide for a very accurate test of Ġ/G. This section
qualitatively addresses the near-term possibility of laser ranging to Mars and the above three effects.

3.1 Planetary Test of the SEP with Ranging to Mars

Earth-Mars ranging data can provide a useful estimate of the SEP parameter η given by Eq. (7). It was demonstrated in [3] that if future Mars missions provide ranging measurements with an accuracy of σ centimeters, after ten years of ranging the expected accuracy for the SEP parameter η may be of order σ×10⁻⁶. These ranging measurements will also provide the most accurate determination of the mass of Jupiter, independent of the SEP effect test. It has been observed previously that a measurement of the Sun's gravitational to inertial mass ratio can be performed using the Sun-Jupiter-Mars or Sun-Jupiter-Earth system [33,47,3]. The question we would like to answer here is how accurately we can do the SEP test given accurate ranging to Mars. We emphasize that the Sun-Mars-Earth-Jupiter system, though governed basically by the same equations of motion as the Sun-Earth-Moon system, is significantly different physically. For a given value of the SEP parameter η, the polarization effects on the Earth and Mars orbits are almost two orders of magnitude larger than on the lunar orbit. Below we examine the SEP effect on the Earth-Mars range, which has been measured as part of the Mariner 9 and Viking missions with a ranging accuracy of ∼7 m [48,44,41,43]. The main motivation for our analysis is the near-future Mars missions that should yield ranging data accurate to ∼1 cm. This accuracy would bring additional capabilities for precision tests of fundamental and gravitational physics.

3.1.1 Analytical Background for a Planetary SEP Test

The dynamics of the four-body Sun-Mars-Earth-Jupiter system in the Solar system barycentric inertial frame were considered. The quasi-Newtonian acceleration of the Earth (E) with respect to the Sun (S), a_SE = a_E − a_S, is straightforwardly calculated to be:

a_SE = −μ*_SE r_SE/r³_SE + (M_G/M_I)_E Σ_{b=M,J} μ_b r_bE/r³_bE − Σ_{b=M,J} μ_b r_bS/r³_bS …
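The orders of magnitude quoted in the SEP discussion above can be checked with a short numerical sketch. This is a rough estimate only: it uses the simple forced-oscillation form of Eq. (8) (the ~13η m lunar amplitude quoted in the text comes from the fuller perturbation treatment of [62]), and the numerical Cassini value for γ−1 is taken from the Cassini literature, since the text cites the result without stating the number.

```python
import math

# Simple forced-oscillation estimate of the lunar SEP polarization, Eq. (8):
# delta_r ~ eta * [(E/Mc^2)_m - (E/Mc^2)_E] * n'^2/(n^2 - (n-n')^2) * a' * cos D
n   = 2 * math.pi / 27.32    # sidereal mean motion of the Moon, rad/day
n_p = 2 * math.pi / 365.25   # sidereal mean motion of the Earth, rad/day
a_prime = 1.496e11           # radius of the Earth's orbit, m
dE = (-0.19e-10) - (-4.64e-10)   # (E/Mc^2)_m - (E/Mc^2)_E

amp_per_eta = dE * n_p**2 / (n**2 - (n - n_p)**2) * a_prime
print(f"lunar polarization amplitude ~ {amp_per_eta:.1f} m per unit eta")
# The complete treatment amplifies this few-meter estimate to the
# ~13 m/eta amplitude quoted in the text.

# PPN bookkeeping: eta = 4*beta - gamma - 3 = 4*(beta-1) - (gamma-1),
# so beta - 1 = (eta + (gamma - 1)) / 4.  Using eta = (4.4 +/- 4.5)e-4
# from the LLR analysis quoted above and the Cassini measurement
# gamma - 1 = (2.1 +/- 2.3)e-5 (value assumed from the Cassini literature):
eta, s_eta = 4.4e-4, 4.5e-4
g1,  s_g1  = 2.1e-5, 2.3e-5
beta1   = (eta + g1) / 4
s_beta1 = math.sqrt(s_eta**2 + s_g1**2) / 4
print(f"beta - 1 = ({beta1*1e4:.1f} +/- {s_beta1*1e4:.1f}) x 10^-4")
```

The second computation reproduces the β−1 = (1.2±1.1)×10⁻⁴ level quoted in the text, showing that the β determination is dominated by the LLR uncertainty in η rather than by the Cassini uncertainty in γ.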
Good journals in remote sensing and geographic information systems
Good Chinese journals in remote sensing and GIS. (1) Journal of Remote Sensing (known before 1998 as Remote Sensing of Environment China): one of the better remote sensing journals in China. Its editor-in-chief is academician Xu Guanhua, former director of the Institute of Remote Sensing Applications and now Minister of Science and Technology. It carries many remote sensing papers; leading figures in remote sensing theory, such as Prof. Jin Yaqiu of Fudan University and the newly elected academician Prof. Li Xiaowen of Beijing Normal University, publish there regularly. Application papers on resources and the environment based on remote sensing and GIS are also quite good, mainly from the CAS Institute of Geography and the Institute of Remote Sensing Applications. It also carries research on image processing algorithms and new remote sensing methods such as radar interferometry and hyperspectral remote sensing, mainly from the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing at Wuhan University and the CAS Institute of Remote Sensing Applications.
(2) Acta Geodaetica et Cartographica Sinica: focuses on fundamental surveying and mapping theory, but often carries very good review articles, and the doctoral dissertation abstracts it publishes are excellent. Its editor-in-chief, academician Chen Junyong, is a rigorous scholar, so shoddy papers rarely find their way in; the journal would rather publish less than lower its standards. As of 2001 it was still a quarterly, with few but very fine papers.
However, the journal leans toward geodesy (GPS), and GIS papers are comparatively rare.
(3) Journal of the Wuhan Technical University of Surveying and Mapping (renamed Geomatics and Information Science of Wuhan University in 2001): the editor-in-chief is Prof. Li Deren, an academician of both the Chinese Academy of Sciences and the Chinese Academy of Engineering. Many innovative and theoretical surveying and mapping research results are published in this journal; it showcases the highest level of Chinese surveying and mapping research and guides the direction of theoretical research in the field.
In my view its doctoral dissertation abstracts are particularly good, truly reflecting the research trends and academic level of 3S technology in China.
The journal publishes reviews and prospects, academic papers and research reports, and major scientific news in the field, covering the main areas of surveying and mapping research, especially photogrammetry and remote sensing, geodesy and the Earth's gravity field, engineering surveying, cartography, geodynamics, the Global Positioning System (GPS), geographic information systems (GIS), and graphics and image processing.
The journal also has an English edition, GEO-SPATIAL INFORMATION SCIENCE, a digest of the Chinese edition; full texts can be downloaded from the Wanfang scientific periodicals database.
(4) Journal of Image and Graphics: founded in 1996 and co-sponsored by the China Society of Image and Graphics, the CAS Institute of Remote Sensing Applications, and the CAS Institute of Computing Technology; its editor-in-chief is academician Xu Guanhua, Minister of Science and Technology.
Since 2001, the Journal of Image and Graphics has been published in two series, A and B.
Applied Surface Science 258 (2012) 6116–6126

Corrosion mechanism and model of pulsed DC microarc oxidation treated AZ31 alloy in simulated body fluid

Yanhong Gu a,∗, Cheng-fu Chen a, Sukumar Bandopadhyay b, Chengyun Ning c, Yongjun Zhang b, Yuanjun Guo c

a Department of Mechanical Engineering, University of Alaska Fairbanks, Fairbanks, AK 99775, USA
b Department of Mining Engineering, University of Alaska Fairbanks, Fairbanks, AK 99775, USA
c College of Materials Science and Engineering, South China University of Technology, Guangzhou 510640, PR China

Article history: received 11 February 2012; received in revised form 29 February 2012; accepted 5 March 2012; available online 11 March 2012.

Keywords: microarc oxidation; AZ31 magnesium alloy; pulse frequency; simulated body fluid; hydroxyapatite; corrosion mechanism; model

Abstract. This paper addresses the effect of pulse frequency on the corrosion behavior of microarc oxidation (MAO) coatings on AZ31 Mg alloys in simulated body fluid (SBF). The MAO coatings were deposited by a pulsed DC mode at four different pulse frequencies of 300 Hz, 500 Hz, 1000 Hz and 3000 Hz with a constant pulse ratio. Potentiodynamic polarization and electrochemical impedance spectroscopy (EIS) tests were used for corrosion rate and electrochemical impedance evaluation. The corroded surfaces were examined by X-ray diffraction (XRD), X-ray fluorescence (XRF) and optical microscopy. All the results exhibited that the corrosion resistance of the MAO coating produced at 3000 Hz is superior among the four frequencies used. The XRD spectra showed that the corrosion products contain hydroxyapatite, brucite and quintinite. A model for the corrosion mechanism and corrosion process of the MAO coating on AZ31 Mg alloy in the SBF is proposed.

© 2012 Elsevier B.V.
All rights reserved.

1. Introduction

Magnesium and its alloys have recently been recognized as promising biodegradable orthopedic materials [1–5] because of their excellent biocompatibility, biodegradability, non-toxicity, and the non-necessity of a second surgery for implant removal. These preferable traits make Mg and Mg alloys superior to commonly used permanent metallic materials (stainless steel, titanium and cobalt–chromium-based alloys). However, as is well known, Mg and Mg alloys have such a high corrosion rate [6] in the human body that they will lose mechanical integrity before the fracture has sufficiently healed [7,8]. It is important to control the corrosion rates of Mg-based materials to match the bone healing rate. This is usually accomplished by surface modification of the substrate materials to achieve a desirable corrosion property.

Several coating techniques have been applied to decrease the degradation rate for biomedical application [9–13]. Among these, microarc oxidation (MAO) is an electro-deposition coating technique which can produce a uniform, dense, hard, well-adhered and corrosion resistant coating on the surface of Mg and Mg alloys [14–18].

∗ Corresponding author. Tel.: +1 907 799 0308. E-mail address: ygu2@ (Y. Gu).

In order to develop better magnesium-based biomaterials, it is essential to understand the corrosion process and mechanism of MAO coated magnesium alloy under physiological conditions. This paper reports a study on the corrosion mechanism of MAO coated AZ31 Mg alloys in a simulated body fluid (SBF), which mimics the body environment in order to reveal the corrosion behavior of magnesium and its alloys in biomedical applications [19,20]. This study could provide a useful reference for predicting the in vivo bone bioactivity of surface-modified Mg alloys [21]. Much research has been conducted to evaluate the corrosion behavior of MAO coatings on pure Mg and AZ91 alloys in SBF [22,23], but studies on AZ31 alloys in SBF are still in demand. In addition, the
effects of pulse frequency on the corrosion performance of MAO coated Mg alloy have been studied in NaCl solution [24,25], and the corrosion mechanism applicable for biodegradable Mg implants has been reviewed by Atrens [26]. The effects of pulse frequency on the corrosion resistance of MAO coated AZ31 alloy in SBF, however, are still missing, and the corrosion mechanism of MAO coatings on Mg alloys in SBF is also not well understood. Therefore, the aim of the present study is to investigate the influence of pulse frequency on the corrosion behavior of MAO coatings on AZ31 alloys in the SBF, to develop a better understanding of corrosion mechanisms in physiological conditions.

Table 1
Chemical composition of the SBF (pH 7.25, 1 L) [28].

Reagent            Purity (%)     Amount
NaCl               99.5           7.996 g
NaHCO3             99.5–100.3     0.350 g
KCl                99.5           0.224 g
K2HPO4·3H2O        99.0           0.228 g
MgCl2·6H2O         98.0           0.305 g
1 kmol/m3 HCl      –              40 cm3
CaCl2              95.0           0.278 g
Na2SO4             99.0           0.071 g
(CH2OH)3CNH2       99.9           6.057 g
1 kmol/m3 HCl      –              appropriate amount for adjusting pH

2. Experimental

2.1. Specimen preparation

The chemical composition of the AZ31 Mg alloy used in this study is the same as shown in our previous work [27]. Prior to MAO treatment, the samples were successively polished on various grade SiC papers up to a 1200 grit finish. After polishing, the samples were degreased ultrasonically in metal cleaning agent for 2 min, rinsed in deionized water for 1 min, dehydrated in ethyl alcohol for 2 min, and then immediately dried in warm flowing air. MAO experimental equipment details are shown in the authors' previous publication [27]. An electrolyte prepared from a 30 g/L Na3PO4 solution in distilled water was kept at room temperature during the entire coating procedure. The coatings were obtained under a constantly applied DC voltage of 325 V for 5 min at four different pulse frequencies (300 Hz, 500 Hz, 1000 Hz and 3000 Hz) at a constant pulse ratio of 0.3. These coatings are labeled
as 300 Hz, 500 Hz, 1000 Hz and 3000 Hz correspondingly.

2.2. Electrochemical test

Potentiodynamic polarization and electrochemical impedance spectroscopy (EIS) tests were employed to evaluate the corrosion behavior of MAO coatings on AZ31 alloys in a SBF. These experiments were performed using a computer controlled Corrosion Cell Kit (Gamry Instruments, Inc.). The composition of the SBF is given in Table 1 [28]. The solution was buffered with tris(hydroxymethyl)aminomethane ((CH2OH)3CNH2) and 1.0 mol/L HCl to a physiological pH of 7.25. The SBF was refreshed each day and was maintained at the most common body temperature of 36.5±0.5 °C.

The corrosion cell consists of a saturated calomel electrode (SCE) as the reference electrode, a graphite counter electrode, and the coated sample, with an exposed area of 4.18 cm², as the working electrode. The polarization and EIS curves were collected after 10 min of exposure at open circuit potential once a day within the 7-day immersion duration. The polarization curves were measured at a scanning rate of 3 mV/s, and the EIS tests were carried out over a range of 0.1 Hz to 100 kHz with an AC voltage amplitude of 10 mV. Equivalent circuits were used to carry out the fitting of the experimental EIS data. All potentiodynamic polarization and EIS data were analyzed using the Gamry Echem Analyst software.

2.3. Specimen characterization

The compositions of the corrosion products were determined by the X-ray diffraction (XRD) method using an X'Pert PRO diffractometer. The samples were scanned using Cu Kα radiation under 45 kV voltage and 40 mA current in normal θ–2θ geometry over a range of 20–80° 2θ with a step size of 0.025°, at a glancing angle of 2.5°. A PanAlytical Axios dispersive X-ray fluorescence (XRF) instrument was used for the quantitative element concentration analysis of the post-corrosion samples.

Fig. 1. An example of the dispersive XRF spectrum of the MAO coating produced at 3000 Hz.

The changes in the surface micromorphology of a sample were observed by an Olympus BX60 optical microscope after the samples were
removed from the SBF, rinsed with distilled water, and dried in warm flowing air once a day. The changes in the surface macromorphology were recorded by a digital camera at the end of the corrosion tests in order to evaluate the extent of the corrosion damage.

3. Results and discussion

3.1. Element concentration analysis

In order to determine the phosphorus content in the coating, dispersive XRF analysis was performed on the surface of the MAO coatings produced at various pulse frequencies. The spectrum is presented in Fig. 1, and the element concentrations are listed in Table 2. It was found that the phosphorus content decreased with increasing processing pulse frequency. The resultant coatings had a phosphorus content of 18.9, 18.8, 18.5 and 18.1 wt.% in the 300 Hz, 500 Hz, 1000 Hz and 3000 Hz conditions, respectively. The concentration of sodium in the coatings was also found to decrease with increasing frequency. This confirms the effective participation of Na+ and PO4^3− ions in the MAO process at lower frequency conditions, which is attributed to the relatively longer pulse-on time providing a higher energy per pulse in the 300 Hz condition.

3.2. Potentiodynamic polarization test

Potentiodynamic polarization curves of the MAO coated samples produced at different pulse frequencies and of the uncoated AZ31 alloy are shown in Fig. 2. The important parameters, the corrosion potential (E_corr) and the corrosion current density (I_corr), were generated directly from the potentiodynamic polarization curves by the Tafel fit method. The corrosion potential E_corr and corrosion

Table 2
XRF element concentration of MAO coatings produced at various pulse frequencies (wt.%).

Sample    O     Mg    P     Al     Zn     Na    Mn
300 Hz    43.8  28.8  18.9  0.826  0.921  6.49  0.259
500 Hz    45.4  28.7  18.8  0.546  0.980  5.25  0.322
1000 Hz   46.8  28.3  18.5  0.406  0.909  4.82  0.241
3000 Hz   47.3  28.1  18.1  0.691  0.970  4.51  0.318

Fig. 2. Tafel curves of MAO coatings and uncoated AZ31 alloy after immersion in the SBF for different
days.

current density I_corr for the MAO coatings produced at various pulse frequencies and the uncoated AZ31 alloy are listed in Tables 3 and 4, respectively.

Table 3
Corrosion potential E_corr (V) for MAO coatings and uncoated AZ31 alloy.

Immersion time (day)   300 Hz   500 Hz   1000 Hz   3000 Hz   AZ31 alloy
1                      −1.07    −1.10    −1.14     −1.15     −1.30
5                      −1.08    −1.04    −1.01     −1.11     −1.31
7                      −1.27    −1.26    −1.27     −0.93     −1.33

Table 4
Corrosion current density I_corr (µA/cm²) for MAO coatings and uncoated AZ31 alloy.

Immersion time (day)   300 Hz   500 Hz   1000 Hz   3000 Hz   AZ31 alloy
1                      3.70     3.63     3.24      3.09      101
5                      6.59     6.45     6.11      5.77      183
7                      25.60    16.70    14.70     10.90     215

It can be observed from Table 3 that all the coated samples show a higher E_corr than the uncoated AZ31 alloy irrespective of immersion time. This suggests that the MAO coated samples are less reactive in SBF than the uncoated AZ31 alloy. At the end of the 7-day corrosion tests, the E_corr of the 3000 Hz sample shows the highest value of −0.93 V, whereas the MAO coatings produced at 300 Hz, 500 Hz and 1000 Hz show lower corrosion potentials, from −1.27 V to −1.26 V.

It can be seen from the data presented in Table 4 that all the coated samples show a much lower I_corr than the uncoated AZ31 alloy irrespective of immersion time. The corrosion current density increases with increasing immersion time. The I_corr values correspond to the corrosion rate through Eq. (1). The corrosion rate (CR, mm/year) of all the samples was calculated by the following equation [29]:

CR (mm/year) = 0.00327 · EW · I_corr / d   (1)

where EW is the equivalent weight, I_corr is the corrosion current density (µA/cm²), and d is the sample density (g/cm³). Calculation of the corrosion rate of a sample by Eq. (1) requires the determination of the equivalent weight (EW), which is a weighted average of a/n for the major alloying elements in the sample. The recommended procedure for calculating EW is suggested in [29] as:

EW = Σ_i f_i a_i/n_i   (2)

where f_i, n_i and a_i are the mass fraction, electrons exchanged, and atomic weight (in g), respectively, of the i-th major alloying element in the
sample. These parameters for the MAO coated samples and the uncoated AZ31 alloy are listed in Table 5.

Table 5
Mass fraction, electrons exchanged, and atomic weight for MAO coatings and uncoated AZ31 alloy.

Major alloying element        Mg      Al      Zn      Na
f_i (%) for MAO coatings      (see element concentration results in Table 2)
f_i (%) for AZ31 alloy        95.44   3.00    1.00    0
a_i (g)                       24.31   26.98   65.38   22.99
n_i                           2       3       2       1

Table 6
Calculated EW of MAO coated samples and uncoated AZ31 alloy.

Sample   300 Hz   500 Hz   1000 Hz   3000 Hz   AZ31 alloy
EW       5.37     5.06     4.88      4.83      12.20

Based on Tables 2 and 5, the calculated EW of all the coated samples and the uncoated AZ31 alloy is shown in Table 6. The calculated corrosion rates as a function of immersion time are plotted in Fig. 3.

Fig. 3. Corrosion rates of MAO coatings and uncoated AZ31 alloy after immersion in the SBF.

It can be seen that the highest CR of the uncoated AZ31 alloy is 4.789 mm/year, indicating very poor corrosion resistance. All coated samples, however, have a much lower CR than the uncoated AZ31 alloy. This indicates that the corrosion resistance of the AZ31 alloy is greatly enhanced by the MAO coating. For each sample, the CR value increases with increasing immersion time. The CR increase is relatively much larger after immersion of the samples for more than 5 days. The larger corrosion rate is due to the enlarged pores and the cracks that may have formed after longer immersion; this is validated by the optical micrographs presented in Section 3.4.3. The corrosion rate shows a decreasing trend with increasing frequency. After immersing the samples for a period of 7 days, the 3000 Hz treated samples exhibited the lowest corrosion rate (0.096 mm/year), around 50 times lower than that of the uncoated AZ31 alloy (4.789 mm/year).

3.3. Electrochemical impedance spectroscopy

Electrochemical impedance spectroscopy (EIS) was used to investigate the corrosion behavior of the MAO coated specimens and the uncoated AZ31 alloy. For the coated samples, an appropriate equivalent circuit is proposed, shown in
Fig. 4a. This equivalent electrical circuit is based on the EIS studies of Ahn et al. [30], Liang et al. [31] and Ghasemi et al. [32] on MAO coatings. For the uncoated AZ31 alloy, the EIS data can be represented by the equivalent circuit shown in Fig. 4b. In the equivalent circuits, R_soln is the solution resistance between the working electrode and the reference electrode, R_po is the resistance of the porous regions of the MAO coating, in parallel with the constant phase element CPE-C_coat, and R_ct is the resistance of the MAO coating-substrate interface, in parallel with CPE-C_dl.

Fig. 4. Equivalent electrical circuits used to fit the impedance data of (a) coated and (b) uncoated AZ31 alloy.

Fig. 5. The Bode plots of electrochemical impedance behavior of MAO coatings and uncoated AZ31 alloy after immersion in the SBF for different days.

The Bode plots of the MAO coated samples produced at various pulse frequencies and of the uncoated AZ31 alloy after immersion in the SBF for a period of 7 days are presented in Fig. 5. The corresponding EIS data are presented in Table 7. It can be seen that the value of the solution resistance R_soln is very low for all the samples because the SBF is very conductive. The value of CPE-C_coat is lower than CPE-C_dl because the coating is relatively thick compared to the interface layer between the dense layer and the AZ31 substrate. The global impedance of all the coated samples is much higher than that of the uncoated AZ31 alloy. This indicates that MAO is an effective coating technique which enhances the corrosion resistance of AZ31 alloys. It can be seen that the charge transfer resistance R_ct of the MAO coatings is approximately 5–15 times higher than the corresponding value of R_po. This suggests that the inner dense layer plays a key role in protecting the magnesium alloy from corrosion [31]. In addition, the resistance of the porous region R_po decreases with increasing immersion time. This is due to the fact that the
corrosive SBF enters the samples and enlarges the micropores in the outer porous layer. Therefore, the charge transfer resistance R_ct is the dominating parameter for the electrochemical performance of the coated samples.

Table 7
EIS data (R_soln in Ω cm², CPE-C_coat, R_po in Ω cm², CPE-C_dl and R_ct in Ω cm² at immersion times of 1, 5 and 7 days) for the MAO coated samples produced at 300 Hz, 500 Hz, 1000 Hz and 3000 Hz and for the uncoated AZ31 alloy (for which CPE-C_coat and R_po do not apply).

Fig. 6. Charge transfer resistance R_ct of MAO coatings and uncoated AZ31 alloy during exposure to the SBF.

Fig. 6 shows the R_ct value for all the specimens as a function of immersion time. A sample with a higher R_ct value is more corrosion resistant. It can be observed that all coated samples show much higher R_ct values than the uncoated AZ31 alloy. It is noticed that the higher the pulse frequency used in the MAO coating, the larger the R_ct. Therefore, with a larger R_ct, the coated samples are more corrosion resistant than the AZ31 alloy. Among the four coated samples, the sample treated at 3000 Hz has the highest R_ct value, while the sample treated at 300 Hz has the lowest.

It can also be observed that the R_ct value decreases with increasing immersion time. Specifically, for the samples coated at 300 Hz, 500 Hz and 1000 Hz, the R_ct value showed a steep drop after 5-day immersion. The results shown in Figs. 6 and 3 taken together would suggest that, for a constant immersion time, a steep reduction in the R_ct value would indicate a much higher corrosion rate (CR). The steep drop of the R_ct value is due to the penetration of SBF into the outer porous layer and the dense layer through the pores, reaching the substrate and thus causing the MAO coating to degrade rapidly. Thus the 3000 Hz treated
sample demonstrates the slowest degradation among all the coated samples, since it has the highest R_ct value.

3.4. Characterization of corroded surfaces

3.4.1. XRF analysis after corrosion

An example of the dispersive XRF spectrum of the MAO coating produced at 3000 Hz after 7 days of immersion in the SBF is shown in Fig. 7. The element concentrations of the MAO coatings produced at various pulse frequencies are presented in Table 8. The principal components of the corrosion products for all the MAO coatings are similar, including the elements O, Mg, P, Al, Zn, Na, Mn, Cl, K, S and Ca. The presence of Cl, K, S, P and Ca in the coatings validates the effective participation of Cl−, K+, SO4^2−, HPO4^2− and Ca^2+ ions from the SBF solution. The content of phosphorus on the corroded surface increased with increasing pulse frequency: the respective concentrations of phosphorus are 14.51, 15.08, 16.11 and 16.85 wt.% for the MAO coatings produced at 300 Hz, 500 Hz, 1000 Hz and 3000 Hz. The concentration of calcium in the coatings after 7 days of immersion in the SBF also increased with increasing pulse frequency. The Mg(OH)2 layer formed on the surface creates a barrier that keeps the magnesium from absorbing calcium and phosphorus from the SBF [19]. The amount of Mg(OH)2 deposited on the corroded surface decreases as a function of pulse frequency, as can be seen from the XRD spectra (Section 3.4.2), in which the intensity of the Mg(OH)2 phase decreased with increasing frequency. Due to the limitations of the XRF technique, the presence of carbon and hydrogen from the HCO3− in the SBF could not be detected. The formation of quintinite, Mg4Al2(CO3)(OH)12·3H2O, in the XRD results, however, validates the participation of HCO3−.

Fig. 7. An example of the dispersive XRF spectrum of the MAO coating produced at 3000 Hz after 7-day immersion in the SBF.

Fig. 8. XRD spectra of MAO coatings and uncoated AZ31 alloy after immersion in the SBF. (1) Mg; (2) MgO; (3) Mg(OH)2; (4) Mg3(PO4)2; (5) quintinite; (6) HA.

3.4.2. XRD analysis after corrosion

The
XRD spectra of the MAO coatings produced at various MAO operating parameters are shown in Fig. 8. It can be seen that the MAO operating parameters have a negligible effect on the phase composition of the corroded surface. The corroded surface of the uncoated AZ31 alloy after the electrochemical corrosion test is composed of Mg, brucite (Mg(OH)2), quintinite (Mg4Al2(CO3)(OH)12·3H2O) and hydroxyapatite (HA, Ca10(PO4)6(OH)2). The magnesium comes from the substrate. Brucite, quintinite and hydroxyapatite are the corrosion products. The formation of hydroxyapatite is due to the HPO4^2− and Ca^2+ ions in the SBF solution. For the MAO coatings, the corroded surface additionally comprises MgO and Mg3(PO4)2, besides Mg, brucite, quintinite and hydroxyapatite. The magnesium oxide and the Mg3(PO4)2 are MAO coating constituents [27]. The magnesium oxide peak is very weak and is only found at approximately 62.3°. This is attributed to the conversion of MgO to Mg(OH)2.

The corrosion products Mg(OH)2, quintinite and hydroxyapatite are generated by the following reactions in the SBF:

Mg + 2H2O → Mg(OH)2 + H2↑

MgO + H2O → Mg(OH)2

4Mg^2+ + 2Al^3+ + HCO3^− + 13OH^− + 2H2O → Mg4Al2(CO3)(OH)12·3H2O

10Ca^2+ + 6HPO4^2− + 8OH^− → Ca10(PO4)6(OH)2 + 6H2O

Table 8
XRF element concentration of MAO coated AZ31 alloys after immersion in the SBF (wt.%).

Sample    O      Mg     P      Al    Zn    Na    Mn    Cl    K     S     Ca
300 Hz    43.57  19.63  14.51  9.32  2.21  1.01  0.60  1.19  0.17  0.12  7.67
500 Hz    43.12  16.52  15.08  9.57  2.12  1.63  0.75  2.31  0.12  0.13  8.64
1000 Hz   43.29  17.27  16.11  6.48  2.09  1.59  0.56  0.89  0.16  0.03  11.52
3000 Hz   43.28  10.93  16.85  8.33  2.48  1.34  0.80  0.27  0.18  0.04  15.50

Fig. 9. Surface micrographs of MAO coated and uncoated AZ31 alloy after immersion in the SBF for different days.

3.4.3. Surface micrographs

In order to investigate the corroded surfaces after the electrochemical corrosion tests, the optical surface micrographs shown in Fig. 9 were produced for the MAO coated specimens
and uncoated AZ31 alloy after immersion in the SBF for different durations. The micrographs show the differences in the extent of corrosion damage between the uncoated AZ31 alloy and the MAO coatings produced at 300 Hz, 500 Hz, 1000 Hz and 3000 Hz. The MAO coatings (Fig. 9a-d) enhanced the corrosion resistance of the AZ31 alloy (Fig. 9e). For each specimen, increasing immersion time results in a larger extent of corrosion, with more fractures, cracks, holes and pits. The initiation and growth of pits could be caused by the prior large cathodic polarization used in this study [33]. As can be observed from Fig. 9a, at day 1 the surfaces of all the coated samples are covered with numerous small pinholes. As the corrosion proceeds, cracks, fractures and pits gradually appear and enlarge, with different sizes and depths, in the MAO coatings. After 5 days of immersion, more cracks are formed in the 300 Hz specimen (Fig. 9a2) than in the other coated specimens. At the end of the 7-day immersion, severe fractures and deep, large pits have formed in the 300 Hz and 500 Hz samples (Fig. 9a3 and b3). Overall, the microstructure of the 3000 Hz samples (Fig. 9d1-d3) shows relatively uniform corrosion, with fewer fractures and shallower corrosion pits than the other samples after a 7-day immersion in the SBF. This is because the surface morphology of the MAO coatings produced at 3000 Hz is relatively uniform, with fine pores and a smooth surface finish compared with the other coating samples [27]. These results are consistent with their highest charge transfer resistance, as shown in Fig. 6. In the case of the 300 Hz samples, the corrosion damage is evident in the micrographs (Fig. 9a1-a3). Among all the samples produced at the various pulse frequencies, after immersion for 7 days the 3000 Hz samples exhibit the best corrosion resistance, in agreement with the results of the Tafel and EIS tests.

3.4.4. Macroscopic appearance

The surface appearances of the MAO coatings produced at various pulse frequencies and
uncoated AZ31 alloy after immersion in the SBF for 7 days are shown in Fig. 10. It can be clearly seen that the extent of corrosion damage is significantly reduced for the microarc oxidation coatings (Fig. 10a-d) compared with the uncoated AZ31 Mg alloy (Fig. 10e). This result is consistent with the electrochemical results. The corroded surface of the MAO coating produced at 3000 Hz is much smoother, with few local corrosion pits except at the top right of the sample. The 300 Hz sample has the most local pits and shows delamination among the four coated samples. Therefore, it can be concluded that the MAO coating produced at 3000 Hz is more corrosion resistant than those produced at 300 Hz, 500 Hz and 1000 Hz.

Fig. 10. Sample appearance of MAO coated and uncoated AZ31 alloy after 7-day immersion in the SBF.

Fig. 11. Schematic diagram of the corrosion process and mechanism of MAO coated AZ31 Mg alloy upon immersion in the SBF.

4. Corrosion mechanism model

Based on the electrochemical corrosion principle, the Tafel and EIS results, and the corroded surface characterization, a schematic diagram of the corrosion process of the MAO coated AZ31 alloy in the SBF is presented in Fig. 11. The coating produced on the AZ31 alloy is composed of three layers [34]: a 5-10 μm outer porous layer, a dense inner layer, and a thin interlayer between the dense layer and the AZ31 alloy substrate. As may be seen from Fig. 11a, the outer porous layer has several large, deep pores or cavities. Some pores start from the outermost surface while others exist within this layer. The layer underneath is denser, with fewer large cavities and mostly smaller micropores. When the MAO coated sample is immersed in the SBF, the MgO constituent on the outermost surface starts to react with the corrosive solution and converts to Mg(OH)2 (Fig. 11b). The amount of this corrosion product is not significantly large at the initial
immersion stage, owing to the smaller amount of MgO compared with the magnesium in the substrate. Besides the Mg(OH)2 deposit, hydroxyapatite is found on the surface of the samples (Fig. 11b); these precipitates are also found in the SBF. Understanding the precipitation of hydroxyapatite is extremely important for MAO coated Mg alloys in biomedical applications, because hydroxyapatite is an implant material with excellent biocompatibility and bioactivity [35]. The formation of hydroxyapatite in the SBF is self-generated after a certain length of immersion [21]. It has been reported that if a material forms 'apatite' on its surface in SBF, then, when implanted, 'apatite' will also be produced on its surface in the living body, and the material will bond to living bone through this apatite layer [21]. Hydroxyapatite has been observed to form on the MAO coatings after immersion in the SBF. Thus, if the MAO coated Mg alloy is implanted in the human body, hydroxyapatite would be produced in the living body and would bond to the living bone. The SBF solution therefore has a significant effect on the corrosion behavior of MAO coated Mg alloys, and the SBF is very useful for predicting the in vivo bioactivity of the MAO coatings on Mg alloys for biomedical applications. With the ongoing reactions, the pores in the outer layer slowly enlarge. The SBF solution therefore passes through the outer layer and reaches the inner dense layer, as well as the interface between the dense layer and the AZ31 substrate (Fig. 11c). Penetration from the outermost surface layer to the interface takes longer because of the small micropores in the dense layer.
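The corrosion and precipitation reactions listed in Section 3.4.2 (brucite, quintinite, and hydroxyapatite formation) can be sanity-checked for atom and charge balance. The following Python sketch is purely illustrative; the species dictionaries and helper names are ours, not from the paper:

```python
# Check atom and charge balance for the corrosion reactions discussed above.
# Each species is a dict of element counts plus an ionic charge.
def species(atoms, charge=0):
    return {"atoms": atoms, "charge": charge}

def totals(side):
    """Sum element counts and charge over one side of a reaction."""
    atoms, charge = {}, 0
    for sp, n in side:  # (species, stoichiometric coefficient) pairs
        for el, k in sp["atoms"].items():
            atoms[el] = atoms.get(el, 0) + n * k
        charge += n * sp["charge"]
    return atoms, charge

def balanced(lhs, rhs):
    return totals(lhs) == totals(rhs)

Mg2 = species({"Mg": 1}, +2)
Al3 = species({"Al": 1}, +3)
HCO3 = species({"H": 1, "C": 1, "O": 3}, -1)
OH = species({"O": 1, "H": 1}, -1)
H2O = species({"H": 2, "O": 1})
Ca2 = species({"Ca": 1}, +2)
HPO4 = species({"H": 1, "P": 1, "O": 4}, -2)
quintinite = species({"Mg": 4, "Al": 2, "C": 1, "O": 18, "H": 18})  # Mg4Al2(CO3)(OH)12.3H2O
hydroxyapatite = species({"Ca": 10, "P": 6, "O": 26, "H": 2})       # Ca10(PO4)6(OH)2

quintinite_ok = balanced([(Mg2, 4), (Al3, 2), (HCO3, 1), (OH, 13), (H2O, 2)],
                         [(quintinite, 1)])
ha_ok = balanced([(Ca2, 10), (HPO4, 6), (OH, 8)],
                 [(hydroxyapatite, 1), (H2O, 6)])
```

Both reactions come out balanced in atoms and charge, consistent with the stoichiometric coefficients quoted in the text.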
This indicates that the protective function of the MAO coating is primarily due to the dense layer. At the same time, more and more hydroxyapatite is deposited on the outermost surface of the MAO coating (Fig. 11c and d). Once the corrosive SBF solution reaches the interface layer, the substrate is easily corroded. Magnesium reacts with the SBF solution, more Mg(OH)2 is produced with gas bubbles rising, and a new corrosion product, quintinite, forms as a precipitate (Fig. 11e). If the SBF penetrates all the pores in the coating and reaches the substrate, fast degradation occurs. More precipitates accumulate on the sample surface, so that a thin layer composed of hydroxyapatite and brucite is formed (Fig. 11e). As the corrosion continues, this thin layer becomes thicker (Fig. 11f). The relatively flat increase of the corrosion rate and the gentle decrease of the Rct value of all the MAO coatings from 1 day to 5 days (Figs. 3 and 6) indicate the protective role of the MAO coating. The increase of CR and the relatively steep decrease of Rct from 5 to 7 days of immersion indicate that the SBF has reached the substrate. Simultaneously, the precipitation layer (brucite and hydroxyapatite) forms gradually. The precipitation layer creates a barrier that does not allow the corrosive SBF to penetrate further. It is predicted that degradation would be reduced significantly by both the MAO coating and the precipitation layer if the MAO coated samples continued to be immersed in the SBF. Nevertheless, if this MAO coated Mg alloy is implanted in a human body, the hydroxyapatite layer will play an important role in bonding to the living bone due to its outstanding biocompatibility. Therefore, in vitro experiments cannot, to some extent, simulate the real in vivo activity. After the Mg alloy is completely dissolved, the insoluble coating constituents (MgAl2O4 and Mg3(PO4)2) and the corrosion precipitates (brucite, quintinite and hydroxyapatite) are of significant concern. Once the AZ31 Mg alloy corrodes
away, the MAO coating layer and the precipitation layer formed after immersion will lose the support of the substrate and will detach. The detached fragments will remain in the human body if the MAO coated Mg alloy is used in bone replacement. The hydroxyapatite, however, would behave differently, since hydroxyapatite bonds to the living bone. The detached MAO coating layer and the precipitate layer may lead to several unwanted side effects. Further research is warranted in this area, including the development of a suitable MAO coating that dissolves in the human body without any side effects. Generally, the extent of corrosion damage of the MAO coated Mg alloy in the SBF depends on the size, depth and number of micropores in the outermost surface layer of the MAO coatings, the uniformity of the coatings, and the thickness of the dense layer. Additionally, the corrosive simulated body fluid has a significant effect on the corrosion performance of the MAO coated Mg alloy.

5. Conclusions

Four MAO pulse frequencies of 300 Hz, 500 Hz, 1000 Hz and 3000 Hz were used to prepare protective coatings on AZ31 alloys for biomedical applications. In summary, the following conclusions have been drawn.

(1) Potentiodynamic polarization and EIS results indicated that, with increasing pulse frequency and irrespective of immersion time, the corrosion rate of the MAO coated samples decreased and the charge transfer resistance increased accordingly. The 3000 Hz sample showed the best corrosion resistance, with the least corrosion damage to the surface. This was ascertained from the micrographs and macrographs after immersion of the coated samples in the SBF.

(2) The XRD spectra indicate that hydroxyapatite, brucite and quintinite precipitates were formed on the surface of the MAO coated samples after immersion in the SBF for 7 days. The formation of hydroxyapatite suggests that the SBF is very useful for predicting the in vivo bone bioactivity of the MAO coated Mg alloys for orthopedic applications.

(3) A model for
the corrosion mechanism of the MAO coated AZ31 magnesium alloy in the SBF was developed. The corrosion damage depends on the thickness of the dense layer and on the porosity and uniformity of the MAO coating. The precipitation layer formed on the surface of the MAO coating during corrosion would also effectively inhibit the Mg alloy from degrading.

Acknowledgements

Y. Gu thanks Dr. J. Zhang for initiating this research. C. Ning acknowledges the financial support of the National Basic Research Program of China (2012CB619100) and the National Natural Science Foundation of China (Grant 51072057). This work is supported by the UAF Graduate School Fellowship.
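The Tafel and EIS quantities referred to throughout (corrosion rate, polarization/charge transfer resistance) are linked by the standard Stern-Geary relation, i_corr = B / R_p with B = (beta_a * beta_c) / (2.303 (beta_a + beta_c)). The sketch below uses illustrative Tafel slopes and resistances, not values measured in this study:

```python
def stern_geary_coefficient(beta_a, beta_c):
    """Stern-Geary constant B (V), from Tafel slopes in V/decade."""
    return (beta_a * beta_c) / (2.303 * (beta_a + beta_c))

def corrosion_current_density(B, R_p):
    """i_corr (A/cm^2) from B (V) and polarization resistance R_p (ohm cm^2)."""
    return B / R_p

# Illustrative values only -- not measurements from the paper.
B = stern_geary_coefficient(0.12, 0.12)        # symmetric 120 mV/decade slopes
i_protected = corrosion_current_density(B, 1.0e4)  # high resistance: intact coating
i_degraded = corrosion_current_density(B, 1.0e2)   # low resistance: breached coating
```

A hundredfold drop in resistance raises i_corr, and hence the corrosion rate, a hundredfold, mirroring the qualitative link in the text between the falling Rct and the rising CR as the SBF reaches the substrate.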
Entanglement, Information and Multiparticle Quantum Operations
I. INTRODUCTION

Many information-theoretic properties of quantum systems are attributable to the existence of entanglement. Entanglement is responsible for the nonlocal correlations that can exist between spatially separated quantum systems, as revealed by the violation of Bell's inequality [1]. It also lies at the heart of several intriguing applications of quantum information, such as quantum teleportation [2], quantum computational speed-ups [3,4] and certain quantum cryptographic protocols [5]. The central position of entanglement in quantum information theory, and its usefulness in applications, has led to considerable effort being devoted to finding a suitable measure of how much entanglement a quantum system contains. This problem has been solved completely for bipartite pure states [6], and the accepted measure
Eigenvalues of a real supersymmetric tensor
Abstract. In this paper, we define the symmetric hyperdeterminant, eigenvalues and E-eigenvalues of a real supersymmetric tensor. We show that eigenvalues are roots of a one-dimensional polynomial, and when the order of the tensor is even, E-eigenvalues are roots of another one-dimensional polynomial. These two one-dimensional polynomials are associated with the symmetric hyperdeterminant. We call them the characteristic polynomial and the E-characteristic polynomial of that supersymmetric tensor. Real eigenvalues (E-eigenvalues) with real eigenvectors (E-eigenvectors) are called H-eigenvalues (Z-eigenvalues). When the order of the supersymmetric tensor is even, H-eigenvalues (Z-eigenvalues) exist, and the supersymmetric tensor is positive definite if and only if all of its H-eigenvalues (Z-eigenvalues) are positive. An mth-order n-dimensional supersymmetric tensor, where m is even, has exactly n(m - 1)^(n-1) eigenvalues, and the number of its E-eigenvalues is strictly less than n(m - 1)^(n-1) when m >= 4. We show that the product of all the eigenvalues is equal to the value of the symmetric hyperdeterminant, while the sum of all the eigenvalues is equal to the sum of the diagonal elements of that supersymmetric tensor, multiplied by (m - 1)^(n-1). The n(m - 1)^(n-1) eigenvalues are distributed in n disks in C. The centers and radii of these n disks are the diagonal elements, and the sums of the absolute values of the corresponding off-diagonal elements, of that supersymmetric tensor. On the other hand, E-eigenvalues are invariant under orthogonal transformations. © 2005 Elsevier Ltd. All rights reserved.
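A Z-eigenpair of a 4th-order supersymmetric tensor A satisfies A x^3 = lambda x with x^T x = 1. As a numerical illustration (not a method from the abstract; this is a shifted symmetric higher-order power iteration in the style of later work by Kolda and Mayo), the following Python sketch finds a Z-eigenpair of a small diagonal tensor:

```python
import numpy as np

def tensor_times_x3(A, x):
    # For a 4th-order tensor: (A x^3)_i = sum_{j,k,l} A[i,j,k,l] x_j x_k x_l
    return np.einsum('ijkl,j,k,l->i', A, x, x, x)

def z_eigenpair(A, x0, shift=2.0, iters=1000):
    # Shifted power iteration on the unit sphere; with a sufficiently large
    # positive shift it converges to a local maximizer of A x^4, i.e. a Z-eigenpair.
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = tensor_times_x3(A, x) + shift * x
        x = y / np.linalg.norm(y)
    lam = float(x @ tensor_times_x3(A, x))  # Rayleigh-type quotient
    return lam, x

# Diagonal example: A x^4 = 3 x1^4 + x2^4, whose Z-eigenvalues include 3 and 1.
A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = 3.0
A[1, 1, 1, 1] = 1.0
lam, x = z_eigenpair(A, np.array([1.0, 0.2]))
```

Starting near the first coordinate axis, the iteration converges to the Z-eigenpair lambda = 3, x = e1. Note also that for m = 2 the eigenvalue count n(m - 1)^(n-1) reduces to n, the familiar matrix case.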
Study of snow climate in the Japanese Alps: Comparison to snow climate in North America

Shinji Ikeda a,⁎, Ryuzo Wakabayashi b, Kaoru Izumi c, Katsuhisa Kawashima c
a Graduate School of Science & Technology, Niigata University, Niigata, Japan
b Alpine Research Institute of Avalanche, Hakubamura, Japan
c Research Center for Natural Hazards and Disaster Recovery, Niigata University, Niigata, Japan

Article history: Received 2 November 2008; Accepted 9 September 2009

Keywords: Avalanche safety; Snow climate; Snowpack structure; Weak layers

Abstract

In the case of the Japanese Alps, it is experientially known that there is a notable snow climate difference between the Japan Sea side mountains and the Pacific Ocean side mountains. For the purpose of improving avalanche safety, we studied the snow climate characteristics using meteorological and snow pit data collected from two study plots in the mountain regions. Ten years of meteorological data and 4-10 years of snow pit data were employed in the study. A snow climate classification scheme proposed in North America was used to determine the snow climate of these study plots. The general snowpack characteristics for each snow climate presented in previous studies were used to determine the snowpack characteristics of the study plots. Both the meteorological and snow pit data suggested that the Japan Sea side mountains have the same characteristics as the maritime snow climate of North America. On the other hand, the Pacific Ocean side mountains have unique characteristics caused by a combination of continental and maritime climate influences. The Pacific Ocean side mountains have characteristics similar to the continental snow climate of North America; however, their climate differs in that it is characterized by a large amount of rainfall and a high predominance of faceted crystals and wet grains. We identified a new snow climate for the Pacific Ocean side mountains of the Japanese Alps, a "rainy continental
snow climate."

© 2009 Elsevier B.V. All rights reserved.

1. Introduction

The Japanese Alps, situated on the main island of Japan, consist of three separate ranges: the Northern, Central, and Southern Japanese Alps, located between 35° and 37°N and 136° and 140°E (Fig. 1). The Japanese Alps contain some of the most popular regions in Japan for snow-related activities such as skiing and mountaineering, because they have many mountains with heights of around 3000 m and receive an abundance of snow during the winter months. However, these activities are not without risk, and every winter several recreationists are killed in avalanches. During the winter of 2007-08, seven recreationists were killed in avalanches in the Japanese Alps. Snowfall in the Japanese Alps occurs under two different meteorological conditions. In one system, snowfall is related to synoptic cyclones; in the other, snowfall occurs due to the effect of the NW winter monsoon, which arrives from the cold Asian continent and passes over the warm surface of the Japan Sea (Maejima, 1980). Snowfall occurring for the latter reason is hereafter referred to as the Japan Sea effect snowfall. The Japan Sea effect snowfall is a phenomenon similar to the lake effect snowfall that occurs around the Great Lakes of North America. However, the Japan Sea effect brings heavier snowfall than the lake effect, because the temperature gradient between the water body and the air mass over the Japan Sea is greater than that over the Great Lakes (Ninomiya, 1968). The snowfall distribution in the case of the Japan Sea effect snowfall is primarily controlled by orographic lifting (Estoque and Ninomiya, 1976). Therefore, the northern part of the Northern Japanese Alps receives considerably heavier snowfall than the southern part of the Northern, Central, and Southern Japanese Alps. In addition, the frequency of occurrence and duration of the Japan Sea effect snowfall are greater than those of snowfall related to synoptic cyclones. It is
believed that approximately 70% of the major snowstorms in Japan can be classified as Japan Sea effect snowfalls (Takahashi and Nakamura, 1986). Consequently, the amount of snowfall throughout the entire winter in the northern part of the Northern Japanese Alps is significantly greater than that in the southern part of the Northern, Central, and Southern Japanese Alps. For climatic studies, the Japanese Alps are therefore divided into the Japan Sea side mountains, which include the northern part of the Northern Japanese Alps, and the Pacific Ocean side mountains, which consist of the southern part of the Northern Japanese Alps and the Central and Southern Japanese Alps (Iida, 1970).

Cold Regions Science and Technology 59 (2009) 119-125

⁎ Corresponding author. Current address: 291-1 Nashinoki, Myoukou City, Niigata, Japan. Tel.: +81 255 738 504. E-mail address: ikeda.shinz@ (S. Ikeda).

doi:10.1016/j.coldregions.2009.09.004

Information about the local snow climate can provide useful background information about the types of avalanches to be expected in an area. Atkins (2005) pointed out that information about the characteristics of an expected avalanche in an area provides insight into how that instability responds over time as the meteorological history of the winter unfolds, and also provides insight into the systematic components of the spatial variability of snow instability. Further, he showed that an assessment of the character of expected avalanche activity is a key link in the chain between stability evaluation and terrain management. This information is not only indispensable for avalanche forecasting but is also invaluable for determining the range and variability of an avalanche climate, for both professional forecasters and backcountry recreationists. Numerous
studies have been conducted on snow and avalanche climates, mainly in North America (e.g., LaChapelle, 1966; Armstrong and Armstrong, 1987; Mock and Birkeland, 2000; Haegeli and McClung, 2003). Recently, Haegeli and McClung (2007) also included snowpack observations to complement the meteorologically dominated approach to snow and avalanche climate studies. The existing research on mountain snow climates in Japan is aimed at the development of water resources, with the focus on determining the amount of snow water equivalent in the mountains, and does not consider avalanche safety (e.g., Ogasawara, 1964; Nakagawa et al., 1976). The goal of this study was to fill this void by examining the characteristics of snow climates associated with avalanches, using meteorological and snow pit data collected from regions with typical snow climates, for the purpose of improving avalanche safety.

2. Study sites and dataset

2.1. Study sites

The data for this study were collected from two study plots in the Japanese Alps: one representing the Japan Sea side mountains and the other representing the Pacific Ocean side mountains. The Tsugaike study plot (36°46′N, 137°49′E, 1560 m a.s.l.), which represents the Japan Sea side mountains, is located at the top of the gondola lift of the Tsugaike highland ski resort, at the foot of Mt. Hakuba-Norikura (2456 m a.s.l.) in the northern part of the Northern Japanese Alps (Fig. 1). The Nishikoma study plot (35°49′N, 137°50′E, 1900 m a.s.l.), which represents the Pacific Ocean side mountains, is located on the road that leads to the top of Mt. Shogigashira (2730 m a.s.l.) in the northern part of the Central Alps (Fig. 1). The tree line of Mt. Hakuba-Norikura is at 2200 m a.s.l. and that of Mt. Shogigashira is at 2600 m a.s.l. Both study plots are located in subalpine forests, consisting mainly of fir trees and mountain birches, and on horizontal surfaces that have sufficient space for continuous snow pit observations and are not influenced by snow falling off
tree crowns or by snowdrifts.

2.2. Weather and snow pit dataset

2.2.1. Weather dataset

A summary of the collected datasets is shown in Table 1. Since there have been no permanent avalanche safety operations for ski resorts or backcountry areas in Japan, the availability of continuous weather and snowpack observations for the mountainous regions of Japan is very limited. While the ten-year dataset of observations used in this study is short for a discussion of the local snow climate, it is the only data currently available. Since the air temperature was not continuously measured at the two snow study sites, it had to be extrapolated from neighboring weather stations of the Automated Meteorological Data Acquisition System (AMeDAS). The air temperature data for the Tsugaike study plot are derived from the values recorded by the AMeDAS station at Hakuba, which is located approximately 5 km to the southeast of the study plot at 703 m a.s.l. (an elevation difference of 857 m). The air temperature data for the Nishikoma study plot are derived from the values recorded by the AMeDAS station at Ina, which is located approximately 10 km to the east of the study plot at 674 m a.s.l. (an elevation difference of 1226 m) (Fig. 1). The AMeDAS stations at both Ina and Hakuba are located on horizontal surfaces.

2.2.2. Snow pit dataset

The grain type, grain size, hand hardness, and snow temperature (every 10 cm) values were determined from snow pit observations, following the observation guidelines of the Canadian Avalanche Association (CAA, 1995). In Tsugaike, shovel compression tests (Canadian Avalanche Association, 1995) were carried out to determine the weak layers and weak interfaces. Since in most cases clear results cannot be obtained from a shovel compression test when fragile depth hoar has developed thickly from the ground, we did not perform this test at Nishikoma.

Fig. 1. Study sites. Map images used to produce the detailed maps are from the Geographical Survey Institute of Japan: Digital Map 200000 (Map
Image).

3. Method

3.1. Snow climate classification

We use the snow climate classification scheme proposed by Mock and Birkeland (2000). For this classification, meteorological data obtained from December to March (mean air temperature, total rainfall, total snow water equivalent, total snowfall, and the December temperature gradient) are used to classify the local snow climate into one of three classes: maritime, transitional, or continental (Fig. 2). The decision thresholds for the classification scheme were derived using a box plot analysis of historic weather and snow records from 23 stations across the western United States (Mock and Birkeland, 2000). Since not all of the necessary parameters for the classification scheme were available at the two study plots, their values had to be derived from other observations. The lapse rate for Nishikoma, 6 °C km-1, is estimated from the datasets for two winter seasons (2000-01 and 2001-02), while that for Tsugaike, 5 °C km-1, is estimated from the dataset for a single winter season (2007-08). The amount of rainfall is estimated by summing the rainfall values recorded at the AMeDAS station when the estimated air temperature at the study plot decreases to 0 °C or lower. Estimated values for the maximum snow water equivalent (Max. SWE) were used instead of the snow water equivalent (SWE). The Max. SWE (in cm) is given by: maximum snow depth (cm) × 0.3 (g cm-3) / 1 (g cm-3); for example, a 384 cm maximum depth gives a Max. SWE of about 115 cm. The mean snow density was assumed to be 0.3 g cm-3, the value used in the study of Shimizu and Abe (2001). The values obtained by dividing the Max. SWE (cm) by 0.1 (g cm-3) were used instead of the amount of total snowfall (the seasonal new snow density was assumed to be 0.1 g cm-3). These methods have the limitation that the values used for the SWE, snowfall, and rainfall tend to be underestimated.

3.2. Snowpack characteristics

The general snowpack characteristics of the main snow climates have been described in
detail by LaChapelle (1966), Tremper (2000), and McClung and Schaerer (2006). A maritime snow climate is characterized by a relatively thick snowpack (over 300 cm) and relatively mild air temperatures. While rainfall may occur at any time during the winter, avalanche formation in maritime snow climates usually occurs immediately after a storm, with failures occurring in the new layer of snow near the surface. Snowpacks have relatively stable structures due to a progressive increase in snow hardness from the surface to the bottom. This increase in hardness is caused by the deep snow cover, warm snowpack temperature, and weak temperature gradient in the snowpack. Common weak layers such as low-cohesion new snow, and weak interfaces such as rain crusts, are not persistent, owing to the deep snow cover and warm snowpack temperatures. A continental snow climate is characterized by a relatively thin snowpack (less than 1.5 m) and relatively low air temperatures. The snowpack has persistent structural weaknesses caused by the presence of weak depth hoar and facets, which are formed by a strong temperature gradient in the snowpack. Common weak layers such as faceted crystals and depth hoar are persistent, owing to the shallow snow cover and low snowpack temperature. Avalanches are often linked to persistent structural weaknesses in the snowpack; often, in continental ranges, it is possible to find weak faceted crystals and depth hoar almost throughout the snowpack. A transitional snow climate can generally be described as having characteristics intermediate between those of the maritime and continental snow climates. However, Haegeli and McClung (2007) pointed out that the weak layer patterns observed in transitional snow climate areas are clearly more complex than a simple combination of maritime and continental influences. To examine and quantify the snowpack characteristics at the two study sites, we focused on three variables: dominance of snow grain type, hardness profile, and weak
layer types.

Table 1. Contents of the collected weather and snow pit observation datasets.

Nishikoma:
- Air temperature and rainfall: AMeDAS at Ina (674 m a.s.l.), Japan Meteorological Agency, 1995-96 to 2004-05, 1/h.
- Air temperature: Nishikoma (1900 m a.s.l.), authors, 2000-01 to 2001-02, 1/h.
- Snow depth and snow pit: authors, 1995-96 to 2005-06, 1/month.

Tsugaike:
- Air temperature: AMeDAS at Hakuba (703 m a.s.l.), Japan Meteorological Agency, 1996-97 to 2005-06 and 2007-08, 1/h.
- Precipitation: AMeDAS at Hakuba, Japan Meteorological Agency, 1996-97 to 2005-06, 1/h.
- Air temperature: Tsugaike (1560 m a.s.l.), authors, 2007-08, 1/h.
- Snow depth: Tsugaike ski patrol, 1996-97 to 2005-06, 2/day.
- Snow pit: authors, 1999-00 to 2005-06, 2 to 4/month.

Fig. 2. Flowchart illustrating the procedure for the classification of snow climates (Mock and Birkeland, 2000).

3.2.1. Dominance of snow grain type (%)

To determine the dominant type of snow grain at the two study sites, an estimate was made of the ratio of the total thickness of each grain type layer (rounded grains, faceted crystals including depth hoar, and wet grains) to the total thickness of all grain type layers (except new snow and decomposing snow). The dominant type of snow grain expresses the dominant type of metamorphism in the snowpack of the study site. To focus on the dominant type of metamorphism, we combined the faceted crystals and depth hoar in the estimation. Further, to exclude the influence of observation timing related to snowstorms, we eliminated new snow and decomposing snow from the estimation.

3.2.2. Hardness profiles

The typical type of structural weakness caused by the hardness profile characteristics related to the snow climate of a given study site was examined by analyzing the frequency of observation of each hardness profile type. Schweizer and Wiesinger (2001) describe a classification of ram hardness profiles that divides them into 10 types and relates them to snow stability to
interpret a snow profile, with the goal of stability evaluation (Fig. 3). This classification method is based on a huge dataset of snow profiles collected in Switzerland, which came mainly from continental and transitional snow climates. Therefore, the classification method is very detailed and primarily focuses on the snowpack characteristics of those snow climates. To focus on the climatic characteristics of the snow profile, we consolidated the classification into 4 types in this study and applied it to the hand hardness profiles (Fig. 3). Types A and B represent types 6 and 8, respectively. Types 1, 2, 3, 4, 5, 7, and 9 were combined into type C. Type 10, and profiles that could not be classified into any other type, were combined into type D. Type A: hardness increases progressively from the surface to the bottom. Type B: large gradients of hardness exist between hard layers below and soft layers above (melt-freeze crusts and new snow are common). Type C: structural weakness caused by the presence of weak depth hoar and facets in the snowpack. Type D: other irregular profile types. Types A and B are typical of a maritime climate, while type C is typical of a continental climate. Types A and D are present in generally stable conditions, whereas types B and C indicate potential instability.

3.2.3. Weak layers and weak interfaces

When failure planes were determined by shovel compression tests, the snow grain type of the weak layers and the combination type of the weak interfaces were identified. The typical types of weak layers and interfaces in the Tsugaike study plot were characterized by analyzing the determination frequency of each type.

4. Results

4.1. Meteorological conditions and snow climate classification

Fig. 4 shows the mean rainfall, snow depth, and air temperature values for each month over a period of 10 years. The snow depth at Tsugaike reaches a maximum in February and decreases in March, while the snow depth at Nishikoma is at a maximum in March. The later timing of the maximum snow depth at
Nishikoma is related to the significant snowfall in March, which in turn is related to the higher frequency of synoptic low-pressure systems along the Pacific coast in the spring months (Iida, 1970). The difference in the snow depths between the study plots is significant (the 10-year mean snow depth values at Nishikoma and Tsugaike are 113 cm and 384 cm, respectively). On the other hand, the difference in the air temperature values is small when considering the altitude difference of 340 m between the sites. The 10-year mean air temperature values at Nishikoma and Tsugaike are −6.6 °C and −5.6 °C, respectively (when adjusting the value from Tsugaike to the same altitude as Nishikoma, 1900 m a.s.l., it would be −7.3 °C). Since the temperature values are extrapolated from nearby weather stations, the temperature values for the two stations are essentially the same when considering all of the possible errors included in our calculation.

Fig. 3. Classification of hardness profiles described by Schweizer and Wiesinger (2001) (upper) and integrated into 4 types.

Fig. 4. Ten-year mean values of rainfall, snow depth, and air temperature for each month.

Table 2. Snow climate classification results for data obtained from Tsugaike. The gray-shaded numbers are the values of the classification criteria.

S. Ikeda et al. / Cold Regions Science and Technology 59 (2009) 119–125

The results of the snow climate classifications for Tsugaike and Nishikoma are shown in Tables 2 and 3, respectively. Given the geographic situation of the Japanese Alps, the term "continental" might not be appropriate. However, regardless of the proximity of the Japanese Alps to the ocean, the local snowpack can still have continental characteristics, as observed in the Rocky Mountains of the U.S. So, while the term might not be correct geographically, it might still apply to the observed snowpack characteristics. In Tsugaike, in all 10 winters, the climate was classified as a maritime climate. Among the 10 winters, 9 were classified on the basis of the
SWE threshold and 1 was classified by considering the rainfall threshold. For Nishikoma, the classification scheme produced six continental and four maritime classifications. The different snow climate assignments were primarily related to the observed amounts of rainfall. Even in some of the years that were classified as continental, the observed value was close to the suggested threshold (8 cm). Hence, a winter was classified as a continental or a maritime climate on the basis of a slight difference in the amount of rainfall. However, as mentioned in the method section, the SWE and rainfall tend to be underestimated. When considering the possible errors, it is possible that several winters classified as continental climates on the basis of the rainfall should have been classified as maritime climates. However, it is difficult to consider the possibility that they should have been classified as transitional climates on the basis of the SWE.

4.2. Snowpack structure

In Tsugaike, the snowpack shows the typical characteristics of a maritime snow climate: the dominant type of snow grain is rounded grains, and the snow hardness increases progressively from the surface to the bottom (Fig. 5). The dominance of rounded grains is 83% (Fig. 6), and hardness profile types A and B represent 66% (Fig. 7) of all the profiles observed at Tsugaike. Layers of faceted crystals are observed infrequently at Tsugaike. The thickness of these layers is generally in the range of a few mm to 1 cm, and they are all observed near the surface, indicating that they had developed through the process of near-surface faceting (Birkeland, 1998). On the other hand, in Nishikoma, the dominant type of snow grain is faceted crystals. The snowpack at this study site is characterized by basal layers of depth hoar, faceted crystals formed near the snow surface, and wet grains. Persistent structural weaknesses resulting from the presence of weak depth hoar and faceted crystals exist in the snowpack (Fig. 5). The dominance values for the faceted crystals and wet grains
were 48% and 36% (Fig. 6), respectively, and the C-type hardness profile represented 56% (Fig. 7) of all the profiles observed at Nishikoma. The snowpack characteristics at Nishikoma are therefore similar to those of a continental snow climate, except for the difference in the dominance of wet grains.

Table 3. Snow climate classification results for data obtained from Nishikoma. The gray-shaded numbers are the values of the classification criteria.

Fig. 5. Typical snow profiles during winter. The left panel is for Nishikoma (99.02.29) and the right panel is for Tsugaike (01.02.15).

4.3. Weak layers or interfaces observed at Tsugaike

Shovel compression tests were performed at the Tsugaike study plot, and a total of 75 failure planes were observed. Weak layers were observed in 68 of these, and 7 cases were identified as weak interfaces. The numbers for the grain types observed in the weak layers are shown in Fig. 8. Fifty weak layers associated with low-cohesion new snow were observed (precipitation particles: 31; decomposing: 19), which represent 74% of the total number of weak layers observed. Only seven cases of faceted crystals were observed. Surface hoar, a weak layer that has been linked to skier-triggered avalanches in the Columbia Mountains of Canada and the Swiss Alps (Schweizer and Jamieson, 2001), was observed only once in 4 winters. All of the weak interfaces were determined to be a combination type consisting of new snow and a melt-freeze layer. Our observation results agreed with the general characteristics of the weak layers and interfaces observed in maritime snow climates and are consistent with the findings of Haegeli and McClung (2007).

5. Discussion

When comparing the observed meteorological parameters individually with those for North America reported in Mock and Birkeland (2000), the temperature at Tsugaike is comparable to the values generally observed at stations in a transitional climate, and the snow depth is
found to be comparable to that in a maritime climate (Fig. 4). Because of the large value for the SWE, the snow climate of Tsugaike is definitively classified as a maritime snow climate on the basis of the classification procedure (Table 2). Furthermore, the snowpack observations show the typical characteristics of a maritime snow climate. These results clearly show that Tsugaike has characteristics identical to those of the maritime snow climate in North America.

The temperature at Nishikoma is comparable to that in a transitional climate, and the snow depth is comparable to that in a continental snow climate (Fig. 4). Further, the combination of the snow depth and temperature indicates that the climate in Nishikoma is similar to the continental climate of the southwestern United States, where a relatively warm continental climate is observed (i.e., parts of southern Utah, Arizona, and southern California, as described by LaChapelle, 1966; Dexter, 1981). However, the amount of rainfall observed in Nishikoma is significantly larger than that at these American locations. Because of differences in the amounts of rainfall, the snow climates of the different winters at Nishikoma are classified into one or the other of the extreme classifications, continental or maritime, without the transitional classification (Table 3). Among the 48 stations in the United States that were studied by Mock and Birkeland (2000), no station had climate classification characteristics similar to those at Nishikoma. The snowpack observed in Nishikoma has characteristics similar to those of the continental snow climate of North America. However, it differs from a continental snow climate because of the high predominance of both faceted crystals and wet grains. The dominance of faceted crystals and the total rainfall are negatively correlated, while the correlation between the dominance of wet grains and the total rainfall is high and positive (Fig. 9).

Fig. 6. Dominance (%) of each type of snow grain (●: Rounded grains, □: Faceted
crystals, ∧: Depth hoar, and ○: Wet grains) in the snow pits considered (Nishikoma: 10 winters, 36 pits; Tsugaike: 4 winters, 47 pits).

Fig. 7. Percentages of observed hardness profiles (Nishikoma: 36 pits; Tsugaike: 47 pits).

Fig. 8. Observed numbers of grain types in weak layers at Tsugaike.

Fig. 9. Dominance of different types of snow grains versus total rainfall at Nishikoma. The circles and squares indicate wet grains and faceted crystals, respectively.

The results indicate that the high dominance of wet grains at Nishikoma is primarily a result of heavy rainfall. Furthermore, they show that the dominant type of grain is determined to be faceted crystals or wet grains, depending on whether the value for the total rainfall exceeds 90 to 100 mm. We believe that the snow climate of Nishikoma is characterized by a combination of continental and maritime influences. Haegeli and McClung (2007) pointed out that the transitional snow climate is characterized by a combination of continental and maritime influences. However, in both the meteorological and snow pit data observed, Nishikoma shows distinctly different characteristics from a transitional snow climate. The snow climate of Nishikoma is characterized by the significant effects of heavy rainfall on a thin snowpack, which has continental snow climate characteristics.
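The rainfall dependence described here amounts to a simple threshold rule. The sketch below is a hypothetical illustration, not the authors' procedure; the function name and the exact cutoff are assumptions, with only the 90–100 mm band taken from the text:

```python
def dominant_grain_type(total_rainfall_mm, threshold_mm=100.0):
    """Illustrative threshold rule for Nishikoma (assumed form): the dominant
    grain type flips from faceted crystals to wet grains once the total winter
    rainfall exceeds roughly 90-100 mm (cf. Fig. 9)."""
    return "wet grains" if total_rainfall_mm > threshold_mm else "faceted crystals"

# A dry winter stays facet-dominated; a wet winter becomes wet-grain-dominated.
print(dominant_grain_type(60.0))   # faceted crystals
print(dominant_grain_type(150.0))  # wet grains
```

Under this reading, the negative facet–rainfall correlation and the positive wet-grain–rainfall correlation in Fig. 9 are two sides of the same threshold behavior.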
Therefore, we identify a new snow climate, the "rainy continental snow climate," for Nishikoma.

6. Conclusion

We examined the characteristics of snow climates in relation to possible avalanche safety concerns by using meteorological and snow pit data collected from two study plots belonging to the typical climates of the Japan Sea side mountains and the Pacific Ocean side mountains of the Japanese Alps.

Our study suggested that the Japan Sea side mountains have characteristics comparable to the maritime snow climate of North America. This similarity was observed on the basis of both meteorological and snow pit data. On the other hand, the study also revealed that the Pacific Ocean side mountains have unique characteristics caused by a combination of continental and maritime climate influences, and we identified a new snow climate, the "rainy continental snow climate," for these mountains. The characteristics of the "rainy continental snow climate" are summarized as follows.

- A relatively thin snowpack and cold air temperatures, in the same range as continental snow climate regions.
- Heavy rainfall in the same range as, or a greater range than, maritime snow climate regions.
- Persistent structural weaknesses caused by faceted crystals and depth hoar, similar to continental snow climate regions.
- The dominance of both faceted crystals and wet grains.

The results of our study suggested that the application of the avalanche safety management techniques that have been developed in the maritime climate regions of North America is valid for the Japan Sea side mountains in the Japanese Alps. Further, a similarity to the continental climate of North America in relation to the persistent weaknesses caused by faceted crystals and depth hoar was found at the Pacific Ocean side mountains in the Japanese Alps. However, because it is unclear what effect heavy rainfall on such a snowpack would have on the avalanche activity, there is doubt about the validity of applying the avalanche safety
management technique used in a continental climate to these mountains. The instability related to the effect of water on such a snowpack with persistent weaknesses is a well-known phenomenon in continental climates in the snow-melting season (Brown, 2008; Hartman and Borgeson, 2008). However, there are significant differences between these mountains in relation to the timing, frequency, and amount of water. Heavy rainfall on a thin snowpack brings systematic variability in relation to the altitude range, while also bringing irregular variability on a smaller scale through the process of the percolation of water into the snowpack. To find a safety management technique for these mountains, it will be necessary to study the effect of heavy rainfall on the avalanche activity there.

The extreme climatic difference found within only about 100 km of horizontal range brings many opportunities to encounter the situations that backcountry recreationists must cope with in the Japanese Alps. Therefore, the results of this research, and those expected in this field in the future in the Japanese Alps, will contribute to avalanche safety education. Furthermore, they provide useful information as a foundation for establishing modern avalanche forecasting and avalanche safety operations in the Japanese Alps. This study may be considered the first step in the study of the snow avalanche climatology of Japan. Additional studies involving the analysis of data collected from other study sites, over larger areas and longer durations, are necessary. Furthermore, the relation to avalanche activities must undoubtedly be reviewed. However, our results are in agreement with the conclusions of Haegeli and McClung (2007) on the importance of process-oriented climate classification with respect to local avalanche activity, and further stress the importance of snowpack observations for snow and avalanche climate analyses and the establishment of more detailed snow climate
definitions.

References

Armstrong, R.L., Armstrong, B.R., 1987. Snow and avalanche climate of the western United States: a comparison of maritime, transitional and continental conditions. Avalanche Formation, Movement and Effects. Proc. Davos Symposium, IAHS Publ., vol. 162, pp. 281–294.

Atkins, R., 2005. An avalanche characterization checklist for backcountry travel decisions. Proceedings ISSW 2004, International Snow Science Workshop, Jackson Hole, Wyoming, USA. USDA Forest Service, Fort Collins, CO, pp. 462–468.

Birkeland, K.W., 1998. Terminology and predominant processes associated with the formation of weak layers of near-surface faceted crystals in the mountain snowpack. Arct. Alp. Res. 30, 193–199.

Brown, A., 2008. On wet slab mechanics and yellow snow: a practitioner's observations. Proceedings ISSW 2008, International Snow Science Workshop, Whistler, BC, Canada, 21–27 September 2008, pp. 299–305.

Canadian Avalanche Association, 1995. Observation guidelines and recording standards for weather, snowpack and avalanches. Canadian Avalanche Association, Canada, p. 97.

Dexter, L.R., 1981. Snow avalanches on the San Francisco Peaks: Coconino County, Arizona. MS Thesis, Dept. of Geography and Public Planning, Northern Arizona University.

Estoque, M.A., Ninomiya, K., 1976. Numerical simulation of Japan Sea effect snowfall. Tellus 28, 243–253.

Haegeli, P., McClung, D.M., 2003. Avalanche characteristics of a transitional snow climate—Columbia Mountains, British Columbia, Canada. Cold Reg. Sci. Technol. 37 (3), 255–276.

Haegeli, P., McClung, D.M., 2007. Expanding the snow climate classification with avalanche-relevant information—initial description of avalanche winter regimes for south-western Canada. J. Glaciol. 53 (181), 266–276.

Hartman, H., Borgeson, L., 2008. Wet slab instability at the Arapahoe Basin ski area. Proceedings ISSW 2008, International Snow Science Workshop, Whistler, BC, Canada, 21–27 September 2008, pp. 163–169.

Iida, M., 1970. Nihon no Sangaku Kisyou. Yama-to-keikoku-shya, Tokyo, Japan, p. 258 (in Japanese).

LaChapelle, E.R., 1966. Avalanche forecasting—a modern
synthesis. International Association of Hydrological Sciences Publ., vol. 69, pp. 350–356. Available from International Association of Hydrological Sciences Press, Centre for Ecology and Hydrology, Wallingford, Oxfordshire OX10 8BB, United Kingdom.

Maejima, I., 1980. Seasonal and regional aspects of Japan's weather and climate. Geography of Japan. Teikoku-Shoin, Tokyo, Japan, pp. 54–72.

McClung, D., Schaerer, P., 2006. The Avalanche Handbook. The Mountaineers Books, Seattle, WA, p. 271.

Mock, C.J., Birkeland, K.W., 2000. Snow avalanche climatology of the western United States mountain ranges. Bull. Am. Meteorol. Soc. 81 (10), 2367–2392.

Nakagawa, M., Kawada, K., Okabe, T., Shimizu, H., Akitaya, E., 1976. Physical property of the snow cover on Mt. Tateyama in Central Honshu, Japan. Seppyo 38 (4), 1–8 (in Japanese).

Ninomiya, K., 1968. Heat and water budget over the Japan Sea and the Japan Islands in winter season—with special emphasis on the relation among the supply from the sea surface, the convective transfer and the heavy snowfall. J. Meteorol. Soc. Japan 46, 343–372.

Ogasawara, K., 1964. Kita-Alps Tateyama, Turugi no sekisetu chousa. Alps no shizen, Toyama daigaku chyousa dan, pp. 123–152 (in Japanese).

Schweizer, J., Jamieson, J.B., 2001. Snow cover properties for skier triggering of avalanches. Cold Reg. Sci. Technol. 33 (2–3), 207–221.

Schweizer, J., Wiesinger, T., 2001. Snow profile interpretation for stability evaluation. Cold Reg. Sci. Technol. 33 (2–3), 179–188.

Shimizu, M., Abe, O., 2001. Recent fluctuation of snow cover on mountainous areas in Japan. Annals of Glaciology, vol. 32. International Glaciological Society, pp. 93–101.

Takahashi, H., Nakamura, T., 1986. Seppyo Bosai. Hakua-syobo, Tokyo, Japan, p. 478 (in Japanese).

Tremper, B., 2000. Staying Alive in Avalanche Terrain. The Mountaineers Books, Seattle, WA, p. 281.
Key works in the history of mechanics (before …)
Poinsot (Louis Poinsot, 1777–1859)
36. Éléments de statique (Elements of Statics), published in French in 1803. Introduced the concept of the force couple, systematically discussed the reduction of force systems, and finally stated the condition for the equilibrium of a rigid body: the resultant force and the resultant moment of the force system must both be zero.
Thomas Young (1773–1829)
Huygens (Christiaan Huygens, 1629–1695)
21. Horologium oscillatorium (The Pendulum Clock), 1673, in Latin. Discussed the laws of motion of a point mass constrained to move on a circle, demonstrated the isochronism of the pendulum, and proposed the concept of the isochronous pendulum.
Hooke (Robert Hooke, 1635–1703)
22. Lectures de Potentia Restitutiva (Lectures of Springs), published in English in 1678. Studied the elasticity of bodies.
Galileo (Galileo Galilei, 1564–1642)

15. Discourses and Mathematical Demonstrations Concerning Two New Sciences, published in Italian in 1638; the first English translation appeared in 1665. Summarized the strength of materials and the laws of free fall and projectile motion.
Torricelli (Evangelista Torricelli, 1608–1647)
Boyle (Robert Boyle, 1627–1691)

New Experiments Physico-Mechanicall, Touching the Spring of the Air and its Effects, published in 1660, in English. Demonstrated the elasticity of air through systematic experiments.
Pascal (Blaise Pascal, 1623–1662)
18. Traités de l'équilibre des liqueurs et de la pesanteur de la masse de l'air (Treatises on the Equilibrium of Liquids and on the Weight of the Mass of Air), published in 1663, in French. Summarized and put forward Pascal's principle.
Investment, Idiosyncratic Risk, and Ownership
THE JOURNAL OF FINANCE • VOL. LXVII, NO. 3 • JUNE 2012

VASIA PANOUSI and DIMITRIS PAPANIKOLAOU*

ABSTRACT

High-powered incentives may induce higher managerial effort, but they also expose managers to idiosyncratic risk. If managers are risk averse, they might underinvest when firm-specific uncertainty increases, leading to suboptimal investment decisions from the perspective of well-diversified shareholders. We empirically document that, when idiosyncratic risk rises, firm investment falls, and more so when managers own a larger fraction of the firm. This negative effect of managerial risk aversion on investment is mitigated if executives are compensated with options rather than with shares or if institutional investors form a large part of the shareholder base.

IN FRICTIONLESS CAPITAL MARKETS, only the systematic component of risk is relevant for investment decisions. By contrast, idiosyncratic risk should not affect the valuation of investment projects, as long as firm owners are diversified and managers maximize shareholder value. However, the data indicate that there is a significant negative relation between idiosyncratic risk and investment for publicly traded firms in the United States. In addition, executives in publicly traded firms across the world hold a substantial stake in their firms, consistent with the predictions of agency models. Combining these two observations suggests that, since investment decisions are undertaken by managers on behalf of shareholders, poorly diversified managers may cut back on investment when uncertainty about the firm's future prospects increases, even if this uncertainty is specific to the firm.

In this paper, we argue that managerial risk aversion induces a negative relation between idiosyncratic volatility and investment. We find that the negative relation between investment and idiosyncratic risk is stronger when managers own a larger fraction of the firm. This difference in investment–risk sensitivities across firms is economically
large. For instance, idiosyncratic uncertainty increased during the 2008 to 2009 financial crisis. During this period, firms with higher fractions of insider ownership reduced investment by 8% of their existing capital stock, compared to 2% for firms with a more diversified shareholder base.

*Panousi is with the Federal Reserve Board, and Papanikolaou is with the Kellogg School of Management. We thank the editors and two anonymous referees for insightful comments and suggestions. We are grateful to George-Marios Angeletos; Janice Eberly; Jiro Kondo; Harald Uhlig; Toni Whited; and seminar participants at the Federal Reserve Board, the Chicago Fed, the Kellogg School of Management, the London School of Economics, the University of Southern California, Michigan State University, and MIT for useful comments and discussions. We are grateful to Valerie Ramey for sharing her data. Dimitris Papanikolaou thanks the Zell Center for financial support. The views presented in this paper are solely those of the authors and do not necessarily represent those of the Board of Governors of the Federal Reserve System or its staff members.

Our results suggest that forcing managers to bear firm-specific risk induces a wedge between manager and shareholder valuations of investment opportunities and may lead to underinvestment from the perspective of well-diversified shareholders. Shareholders can mitigate this effect through effective monitoring or by providing a convex compensation scheme using stock option grants.
We find that the effect of insider ownership on the investment–risk relation is weaker for firms with higher levels of institutional ownership. This is consistent with the notion that institutional investors are more effective monitors than individual shareholders. In addition, we find that, controlling for the level of insider ownership, firms with more convex compensation contracts, which therefore increase in value with uncertainty, have lower investment–risk sensitivity. In fact, proponents of option-based compensation have used this argument to justify providing executives with downside protection as a compromise between supplying incentives and mitigating risk-averse behavior.

We estimate firm-specific risk using stock return data, where we decompose stock return volatility into a systematic component and an idiosyncratic component. Our first concern is that idiosyncratic volatility is endogenous and could be correlated with the firm's investment opportunities. If Tobin's Q is an imperfect measure of investment opportunities, this would lead to omitted variable bias. We address this issue by considering alternative measures of growth opportunities, as well as estimation methods that directly allow for measurement error in Tobin's Q. In addition, we instrument for idiosyncratic risk with a measure of the firm's customer base concentration. Our intuition is that firms selling to only a few customers are less able to diversify demand shocks for their product across customers and thus will be riskier. We find that idiosyncratic volatility remains a statistically significant predictor of investment, even after addressing these endogeneity concerns. We consider this to be evidence supportive of a causal relation between idiosyncratic risk and investment.

Our second main concern is that insider ownership is endogenous and could be correlated with firm characteristics that might be affecting the investment–uncertainty relation. For instance, insider ownership may be correlated with costs of
external finance: if a firm is unable to attract outside investors, then insiders will be forced to hold a substantial stake in the firm. In this case, convex costs of external finance will lead to a negative relation between firm-specific uncertainty and investment through a precautionary saving motive. Alternatively, insider ownership may be correlated with the degree of industry competition, since a competitive product market could serve as a substitute for high-powered incentives. Imperfect competition may then affect the investment–uncertainty relation through the convexity of the marginal product of capital. We address these concerns by comparing the investment of firms with different levels of insider ownership but with similar size, financial constraints, market power, industry competition, and degree of investment irreversibility. Controlling for these firm characteristics, we find that firms with higher insider ownership display higher sensitivity of investment to idiosyncratic risk.

The rest of the paper is organized as follows. Section I reviews the related research. Section II provides a simple model illustrating how idiosyncratic risk can affect capital investment. Section III documents empirically the negative relation between idiosyncratic risk and investment. Section IV shows how this relation varies with levels of insider ownership, convexity of executive compensation schemes, and institutional ownership. Section V addresses concerns about endogeneity of idiosyncratic volatility and insider ownership. Section VI explores whether our results hold out of sample, particularly during the financial crisis of 2008 to 2009. Section VII concludes. Details on data construction are delegated to the Appendix.

I. Related Research

In traditional economic theory, there is no role for managerial characteristics in firm decisions. However, several recent papers provide empirical evidence to the contrary. Bertrand and Schoar (2003) find a role for
managerial fixed effects in corporate decisions. Malmendier and Tate (2005) construct a measure of overconfidence based on the propensity of CEOs to exercise options early and find greater investment–cash flow sensitivity in firms with overconfident CEOs. Using psychometric tests administered to corporate executives, Graham, Harvey, and Puri (2010) show that traits such as risk aversion, impatience, and optimism are related to corporate policies.

Various studies indicate that the identity of a firm's shareholders may be important. Himmelberg, Hubbard, and Love (2002) document that, even in publicly traded firms, insiders hold a substantial share of the firm. In a cross-country analysis, they find that countries with higher levels of investor protection are characterized by lower levels of insider ownership. Admati, Pfleiderer, and Zechner (1994) illustrate theoretically that large investors exert monitoring effort in equilibrium, even when monitoring is costly. Gillan and Starks (2000) and Hartzell and Starks (2003) provide empirical evidence supporting the view that institutional ownership can lead to more effective corporate governance.

The theoretical research in real options has extensively examined the sign of the relation between investment and total uncertainty. The theoretical conclusions are rather ambiguous, as the sign depends, among other things, on assumptions about the production function, the market structure, the shape of adjustment costs, the importance of investment lags, and the degree of investment irreversibility. An incomplete list includes Hartman (1972), Abel (1983), and Caballero (1991). More recently, Chen, Miao, and Wang (2010) and DeMarzo et al. (2010) explore the effect of managerial risk aversion and idiosyncratic risk on investment decisions in dynamic models. The previous papers focus on the firm's partial equilibrium problem, while Angeletos (2007) and Bloom (2009) investigate the general equilibrium effects of an increase in uncertainty on investment. Most of these
theoretical papers make no distinction between idiosyncratic and systematic uncertainty. By contrast, we differentiate between idiosyncratic and systematic uncertainty, because managers can hedge exposure to systematic but not to idiosyncratic risk. For instance, Knopf, Nam, and Thornton (2002) find that managers are more likely to use derivatives to hedge systematic risk when the sensitivity of their stock and stock option portfolios to stock price is higher and the sensitivity of their option portfolios to stock return volatility is lower.

Several empirical studies explore the predictions of real option models. An incomplete list includes Leahy and Whited (1996), Guiso and Parigi (1999), Bond and Cummins (2004), Bulan (2005), and Bloom, Bond, and Van Reenen (2007). With the exception of Bulan (2005), these papers focus on the relation between investment and total or systematic uncertainty facing the firm. This branch of research mostly finds a negative relation between uncertainty and investment, though results appear to be somewhat sensitive to the estimation method. We contribute to this research by showing that managerial risk aversion may be an important channel behind the investment–uncertainty relation.

II. Model

Here, we propose a simple two-period model that demonstrates how idiosyncratic risk can affect capital investment in the absence of adjustment costs or other investment frictions. We abstract from such frictions because we are interested in a different channel: investment decisions are taken by risk-averse managers who hold undiversified stakes in their firm. We focus on the idiosyncratic rather than the total uncertainty facing the firm because, as long as managers have access to the same hedging opportunities as shareholders, the presence of systematic risk need not lead to distorted investment decisions from the shareholders' perspective. By contrast, since top executives are not permitted to buy put options or short their own company's stock, they cannot hedge away their
exposure to firm-specific risk. Thus, idiosyncratic risk introduces a wedge between managers' and shareholders' optimal decisions.

A firm starts with cash C at t = 0 and produces output at t = 1 according to

y = X√K + e,  (1)

where e is managerial effort, K is installed capital, and X ~ N(μ, σ²) is a shock specific to the firm. For simplicity, we assume that there is no aggregate uncertainty. The manager owns a fraction λ of the firm, while the remaining shares are held by shareholders who are risk averse but hold the market portfolio. We assume that the manager cannot diversify his stake in the firm. The manager derives utility from consumption (c₀, c₁) and disutility from effort (e):

U₀ = u(c₀) − v(e) + βE₀u(c₁).  (2)

Utility over consumption takes the form u(c) = −e^(−Ac), where A is the coefficient of absolute risk aversion, and the disutility of labor is an increasing convex function, with v′ > 0, v″ > 0, and v(0) = v′(0) = 0. The manager's contract consists of a choice of ownership, λ, and an initial transfer, T. Given the contract, the manager will then choose how much to invest in capital, K, how much effort to provide, e, and how much to save in the riskless asset, B, to maximize (2), subject to (1) and the two following budget constraints:

c₀ = λ(C − K) − B + T,  (3)

c₁ = λ(X√K + e) + RB.  (4)

We assume that the principal cannot write contracts on K, e, or B. The assumption that K is not contractible may seem odd at first, since in practice capital expenditures are observable and reported by the firm. Note, however, that even though the level of investment may be observable, the amount of idiosyncratic risk undertaken is not, and what K captures here is the amount of risky investment. We could extend the model to allow for two types of capital, one risky and one riskless. We could then have the principal write contracts on the total investment undertaken by the firm, but not on the capital stock chosen. Nonetheless, doing so would not change our results, given that the manager
can also save in the private market.

PROPOSITION 1: The manager's optimal choice of capital, K*, bonds, B*, and effort, e*, are such that

K* = ( μ / (2R + λAσ²) )²,  (5)

R v′(e*) = λ u′(c₀),  (6)

u′(c₀*) = βR E₀ u′(c₁*).  (7)

The elasticity of investment to idiosyncratic risk is

∂log K / ∂log σ² = −λAσ² / (R + ½λAσ²),  (8)

and is decreasing in λ, A, and σ².

The first thing to note is that, as long as λ > 0, the manager will underinvest from the perspective of the shareholders, who are diversified and thus behave as if risk neutral with respect to X. Their optimal capital choice equals K^fb = μ²/(2R)². By contrast, the manager holds an undiversified stake in the firm, and therefore his choice of capital stock will depend on the level of the idiosyncratic risk of the firm, σ².

If λ were optimally chosen, it would depend, among other things, on the level of idiosyncratic risk and on the manager's risk aversion and cost of effort.¹ When the principal chooses λ, she faces a tradeoff: increasing λ induces higher effort on the part of the manager, but also leads to underinvestment, since the manager is risk averse. This is similar to the classic incentives versus insurance tradeoff. The difference here lies in the cost of providing incentives. For instance, in Holmström (1979) the cost of providing incentives is simply the utility cost to the agent, whereas here there is an additional cost, namely, underinvestment in capital.

In the following sections, we investigate two testable implications of the model:

• PREDICTION 1: Firm-level investment displays a negative relation with idiosyncratic risk.

• PREDICTION 2: The negative relation between investment and idiosyncratic risk is stronger for firms with higher levels of insider ownership.

In our empirical results, we take the variation in insider ownership as given.
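As a quick numerical illustration of Proposition 1 (a sketch with arbitrary parameter values, not figures from the paper), the closed forms in (5) and (8) can be evaluated directly; capital falls below the first-best level and the elasticity becomes more negative as the managerial stake λ rises:

```python
def k_star(mu, R, lam, A, sigma2):
    # Equation (5): K* = (mu / (2R + lam*A*sigma2))**2
    return (mu / (2.0 * R + lam * A * sigma2)) ** 2

def elasticity(R, lam, A, sigma2):
    # Equation (8): dlogK/dlog(sigma2) = -lam*A*sigma2 / (R + 0.5*lam*A*sigma2)
    return -lam * A * sigma2 / (R + 0.5 * lam * A * sigma2)

# Arbitrary illustrative parameters (assumptions, not calibrated values).
mu, R, A, sigma2 = 1.0, 1.05, 2.0, 0.3
for lam in (0.0, 0.1, 0.5):
    print(f"lam={lam}: K*={k_star(mu, R, lam, A, sigma2):.4f}, "
          f"elasticity={elasticity(R, lam, A, sigma2):.4f}")
# lam = 0 recovers the first-best K_fb = mu**2 / (2R)**2; a larger managerial
# stake lowers K* and makes investment more sensitive to idiosyncratic risk.
```

This mirrors the comparative statics behind the two predictions: the investment–risk elasticity is zero for a fully diversified manager and grows in magnitude with λ, A, and σ².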
In reality, there are many reasons why shares of insider ownership may vary across firms. The concern would then be that λ varies endogenously with some unobservable firm characteristics that are actually responsible for the negative investment–risk relation. A first candidate is risk aversion. We can obtain some intuition from the model regarding the effect of risk aversion on the endogeneity of λ. Given that we do not observe the manager's risk aversion, our results will be biased toward rejecting the second prediction, even if the model is true. For instance, suppose that λ is exactly inversely proportional to A, so that less risk-averse managers are given higher stakes in the firm. Then the investment–risk relation will be flat along levels of insider ownership.

There are, however, some additional candidates that are outside the model. Insider ownership may be correlated with the degree of financial constraints, or it could be endogenously related to the level of competition in the product market. In Sections V.B and V.C, we explore these possibilities in more detail.

III. Investment and Idiosyncratic Risk

In this section, we examine the first prediction of our model, namely, the response of investment to the volatility of idiosyncratic risk, controlling for several factors that might affect this relation.

¹ In our numerical solution, provided in the Appendix, we find that the optimal λ is decreasing with the level of idiosyncratic risk and the manager's risk aversion because in that case the cost of underinvestment is lower, and is increasing with the marginal cost of effort. This is consistent with the empirical findings of Graham, Harvey, and Puri (2010). Nonetheless, this decrease in λ does not completely undo the effect of managerial risk aversion. In equilibrium, investment is more sensitive to risk in firms where managers are more risk averse.

A. Data and Implementation

We construct our baseline measure of idiosyncratic volatility using weekly data on stock
returns from CRSP.² To estimate a firm's idiosyncratic risk, we need to remove systematic risk factors that the manager can insure against. Therefore, for every firm i and every year t, we regress the firm's return on the value-weighted market portfolio, R_MKT, and on the corresponding value-weighted industry portfolio, R_IND, based on the Fama and French (1997) 30-industry classification. Our measure of yearly idiosyncratic volatility for firm i is the volatility of the residuals across the 52 weekly observations. Thus, we decompose the total weekly return of a firm i into a market-, industry-, and firm-specific or idiosyncratic component as follows

R_{i,τ} = a_{1,i} + a_{2,i} F_{i,τ} + ε_{i,τ},    (9)

where τ indexes weeks and F_{i,τ} = [R_MKT, R_IND]. Our measure of idiosyncratic risk is the log volatility of the regression residuals

log(σ_{i,t}) = log √( Σ_{τ∈t} ε²_{i,τ} ).    (10)

Our measure of idiosyncratic risk is highly persistent, even though it is constructed using a nonoverlapping window: the pooled autocorrelation of log(σ_{i,t}) is 78% in the 1970 to 2005 sample. We also examine the robustness of our results to alternative definitions of the volatility measure. As a first alternative, we use the volatility of the firm's raw returns, σ^total_t, which does not isolate any idiosyncratic risk. As a second alternative, we use the volatility of the residuals from a market model regression of firm returns on the market portfolio alone, σ^mkt_t, where F_{i,τ} = [R_MKT]. As a third alternative, we use the volatility of the residuals from a regression of firm returns on the Fama and French (1993) three factors, σ^ff3_t, where F_{i,τ} = [R_MKT, R_HML, R_SMB]. All three volatility measures are highly correlated (in excess of 95%) and lead to qualitatively and quantitatively similar results.

We estimate the response of investment to idiosyncratic risk using the following reduced-form equation:

I_{i,t} / K_{i,t−1} = γ₀ + β log(σ_{i,t−1}) + γ₁′ Z_{i,t−1} + η_i + g_t + v_{i,t},    (11)

² In the absence of any microstructure effects, using higher-frequency data yields more precise estimates of
volatility. Thus, when estimating volatility, in principle one should use the highest-frequency data available (daily or even intraday). However, since not all stocks trade every day, using daily data would bias down our estimates of covariance with the market and other factors, thus yielding upward-biased estimates of idiosyncratic volatility. In addition, this bias would vary with the liquidity of the firm's traded shares. Given that most stocks trade at least once a week, we view weekly data as a compromise between getting more precise estimates and being free of microstructure effects.

where the dependent variable is the firm's investment rate (I_t / K_{t−1}) and Z_{i,t} is a vector of controls: (i) log Tobin's Q, defined as the ratio of a firm's market value to the replacement cost of capital (log(V_t / K_t)) and measured as in Fazzari, Hubbard, and Petersen (1988); (ii) the ratio of cash flows to capital (CF_t / K_{t−1}), computed as in Salinger and Summers (1983); (iii) log firm size, measured as the firm's capital stock, scaled by the total capital stock to ensure stationarity (log K̂_t = log(K_{i,t} / ((1/N_f) Σ_{i=1}^{N_f} K_{i,t}))); (iv) the firm's own stock return (R_t); and (v) log firm leverage, measured as the ratio of equity to assets (log(E_t / A_t)).

We control for variables that could jointly affect volatility and investment in order to address biases due to omitted variables. In papers focusing on investment, it is standard to control for Tobin's Q and cash flows. We control for firm size because smaller firms tend to be more volatile and to grow faster; we control for stock returns because volatility and stock returns are negatively correlated, and we want to ensure that we are picking up the effect of volatility rather than a mean effect due to news about future profitability; and we control for firm leverage because equity volatility increases with leverage, while highly levered firms might invest less due to debt overhang (Myers (1977)). More details about the data
construction are provided in the Appendix.

We use a semi-log specification to capture the possibility that the investment-Q (or investment-σ) relation is not linear, as, for example, in Eberly, Rebelo, and Vincent (2008). Our results are robust to a linear specification for either Q or idiosyncratic volatility. Depending on the specification, we include firm dummies (η_i) or time dummies (g_t). Finally, the errors (v_{i,t}) are clustered at the firm level.

Our sample includes all publicly traded firms in Compustat over the period 1970 to 2005, excluding firms in the financial (SIC code 6000–6999), utilities (SIC code 4900–4949), and government-regulated industries (SIC code > 9000). We also drop firm-year observations with missing SIC codes, with missing values for investment, Tobin's Q, cash flows, size, leverage, stock returns, and with negative book values of capital. We also drop firms with fewer than 40 weekly observations in that year. Our sample includes a total of 104,646 firm-year observations. Finally, to eliminate the effect of outliers, we winsorize our data by year at the 0.5% and 99.5% levels in all specifications.

B. Effect of Idiosyncratic Risk on Investment

Our estimates of equation (11) are reported in Table I. The first column shows that, when we include only idiosyncratic volatility and firm fixed effects, the coefficient on idiosyncratic volatility is −3.5% and statistically significant. The second column presents the results of the benchmark estimation for equation (11), in which case the coefficient on idiosyncratic volatility is −2% and statistically significant. In the third column, we allow the time effects to vary by industry so as to capture any unobservable component varying at the industry level. In this case, identification comes from differences between a firm and its industry peers. To keep the number of fixed effects manageable,

Table I
Idiosyncratic Risk and Investment

Table I reports estimation results of equation (11), where the
dependent variable is the investment rate (I_t / K_{t−1}). Our baseline measure of risk, σ_{i,t−1}, is constructed from a regression of weekly firm-level returns on the CRSP VW index and the corresponding industry portfolio. Additional regressors include lagged values of: Tobin's Q (Q_{t−1}) defined as in Fazzari, Hubbard, and Petersen (1988); operating cash flows (CF_{t−1} / K_{t−2}) defined as the ratio of operating income (item 18) to the replacement cost of capital, computed as in Salinger and Summers (1983); the firm's size (log(K̂_{t−1})) defined as the log value of its replacement cost of capital, scaled by average capital across all firms; the firm's stock return (R_{t−1}); leverage (E_{t−1} / A_{t−1}) defined as the ratio of book equity (item 216) to book assets (item 6); and systematic volatility (log(σ^syst_{t−1})) defined as the (log of the) square root of the difference between the firm's total variance and its idiosyncratic variance. Item refers to Compustat items. The sample period is 1970 to 2005. Here, F denotes firm fixed effects, T denotes time fixed effects, and I×T denotes industry-time fixed effects. The standard errors are clustered at the firm level, and t-statistics are reported in parentheses.

I_t / K_{t−1}           No Controls   BENCH       IND×T       Syst
log(σ_{i,t−1})          −0.0346       −0.0196     −0.0197     −0.0244
                        (−13.78)      (−8.44)     (−8.32)     (−9.94)
log(σ^syst_{t−1})                                             0.0063
                                                              (5.51)
log(Q_{t−1})                          0.0699      0.0692      0.0690
                                      (45.05)     (43.41)     (44.31)
CF_{t−1} / K_{t−2}                    0.0222      0.0218      0.0220
                                      (9.91)      (9.88)      (9.84)
log(K̂_{t−1})                         −0.1179     −0.1243     −0.1188
                                      (−31.80)    (−31.67)    (−32.09)
R_{t−1}                               0.0146      0.0131      0.0148
                                      (8.66)      (7.47)      (8.75)
log(E_{t−1} / A_{t−1})                0.0352      0.0346      0.0346
                                      (14.13)     (13.82)     (13.96)
Observations            104,646       104,646     104,646     104,619
R²                      0.399         0.563       0.569       0.563
Fixed effects           F             F, T        F, I×T      F, T
Estimation method       OLS           OLS         OLS         OLS

we use the two-digit SIC classification. In this specification, the coefficient on log(σ_{i,t−1}) remains mostly unaffected at −2%.

Our estimates imply that the sensitivity of investment to idiosyncratic risk is economically significant. The standard deviation of log idiosyncratic
volatility in our sample is 49%, so a one-standard-deviation increase in log σ is associated with a 1% to 1.75% decrease in the investment–capital ratio. This is a substantial drop, as the mean investment–capital ratio in our sample is approximately 10%.

One concern is that idiosyncratic volatility may be positively correlated with systematic volatility, and thus the negative coefficient on idiosyncratic volatility may simply capture the effect of time variation in systematic risk premia on investment. To address this issue, we include lagged systematic volatility as an additional regressor in the fourth column of Table I.³ The coefficient on idiosyncratic volatility is still negative and significant (−2.4%), whereas the coefficient on systematic volatility is positive and significant (0.6%). The coefficient on systematic volatility is economically small. Given that the standard deviation of systematic volatility is 73% in the sample, these estimates imply that a one-standard-deviation increase in systematic volatility is associated with a 0.45% increase in the investment–capital ratio.

The positive sensitivity of investment to systematic volatility might seem puzzling. All else equal, an increase in systematic volatility increases the firm's cost of capital and therefore should lead to a decrease in investment. Here, note that our measure of systematic volatility depends on the firm's systematic risk exposures (beta) as well as the amount of market risk (market and industry volatility). Hence, our results are consistent with Bloom (2009), who finds a negative relation between investment and the volatility of the market portfolio. Furthermore, a firm's exposure to systematic risk depends on its asset mix between investment opportunities and assets in place. In general, growth opportunities have greater exposure to systematic risk than assets in place, and hence firms with better investment opportunities will have higher systematic risk and also invest more on
average (e.g., Kogan and Papanikolaou (2010a)). If Tobin's Q is not a perfect measure of investment opportunities, the resulting omitted variable problem could bias our results toward a positive coefficient on systematic volatility. We explore this possibility in Section V.A.

IV. Managerial Ownership and Risk Aversion

In this section, we explore the second prediction of our model, namely, that the effect of idiosyncratic risk on investment is stronger for firms where managers hold a larger share in the firm. We also examine two related predictions that are outside the model.

First, over the last 20 years, several firms have switched to option-based compensation. Compensating executives with options, rather than shares, provides managers with a convex payoff whose value increases in the volatility of the firm. Thus, all else equal, increasing the convexity of the compensation package should mitigate the effect of risk aversion on investment (Ross (2004)). We test this prediction by examining the investment–risk sensitivity for firms with different levels of convexity in their compensation schemes. We expect that the negative effect of idiosyncratic risk on investment will be smaller for firms with more convex compensation schemes.

Second, if the investment–risk relation is due to poor managerial diversification, then managers are possibly destroying shareholder value by turning down high idiosyncratic risk but positive net present value projects. To mitigate this loss in value, shareholders may start monitoring managerial investment

³ We compute systematic volatility as total volatility minus idiosyncratic volatility, that is, log σ^syst_{i,t−1} ≡ log √( (σ^total_{i,t−1})² − σ²_{i,t−1} ). Note that systematic volatility varies in the cross-section due to cross-sectional dispersion in betas with the market and industry portfolios.
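The volatility construction in equations (9) and (10) earlier in this section can be illustrated with one synthetic firm-year of weekly data. The factor loadings and volatilities below are invented for the example; no CRSP data is involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# One synthetic firm-year of 52 weekly returns; the loadings (1.1 on the
# market, 0.4 on the industry) are illustrative assumptions.
n = 52
r_mkt = rng.normal(0.002, 0.02, n)     # value-weighted market return
r_ind = rng.normal(0.001, 0.015, n)    # value-weighted industry return
eps = rng.normal(0.0, 0.05, n)         # firm-specific shock
r_firm = 0.001 + 1.1 * r_mkt + 0.4 * r_ind + eps

# Equation (9): regress the firm's weekly returns on F = [R_MKT, R_IND]
# (plus an intercept) and keep the residuals.
F = np.column_stack([np.ones(n), r_mkt, r_ind])
coef, *_ = np.linalg.lstsq(F, r_firm, rcond=None)
resid = r_firm - F @ coef

# Equation (10): idiosyncratic risk is the log volatility of the residuals.
log_sigma = np.log(np.sqrt(np.sum(resid ** 2)))
```

By construction the residuals are orthogonal to the factors, so the measure isolates the firm-specific component; in the paper this computation is repeated for every firm-year before estimating equation (11).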
A Brief Discussion of the Knowledge Economy

Title: A Brief Discussion of the Knowledge Economy
Author: Lin Yan-jun
Taipei Municipal Yucheng Senior High School, Class 2-7

I. Preface

With the advance of globalization and informatization, the "knowledge economy" has become an international trend. When I first saw this new term I was puzzled: can knowledge really drive economic development? Only after searching for material online and consulting many related books did I come to understand that all sorts of economic phenomena in everyday life are in fact closely tied to the knowledge economy. Traditional industries have declined year by year as times have changed, and the economic activity being developed today is mostly of the knowledge-economy type. In the past, whoever owned the most land or employed the most workers owned the wealth; today, whoever owns knowledge can hold many times the wealth of those who own land and labor. Bill Gates of Microsoft, for example, is a creator of knowledge who built an enormous fortune. Through this short essay I hope to understand more deeply the impact the knowledge economy brings.

II. Main Text

1. What is the knowledge economy?

"The term 'knowledge economy' was coined by the Organisation for Economic Co-operation and Development (OECD), and its definition is generally the one adopted: 'an economic pattern in which the possession, allocation, production, and use of knowledge are the most important factors of production.'" (Note 1) Explained in more detail, it is an economy built directly on the stimulation, diffusion, and application of knowledge and information; the capacity and efficiency to create and apply knowledge outweigh traditional factors of production such as land and capital, and become the driving force that sustains continued economic development. "The knowledge economy that emerged in the United States at the end of the 20th century is also called the new economy, and it created a new economic miracle for the whole world. The 'new economy' transcends traditional thinking and practice, taking innovation, technology, information, globalization, competitiveness, and so on as its engines of growth, and the operation of these factors depends on the accumulation, application, and transformation of 'knowledge.'" (Note 2) Unlike the traditional economy, which created wealth from natural resources and labor, in modern society it is the wealth created by applying "knowledge" that is truly astonishing.
Stirner, Nietzsche, and the Critique of Truth
JEFFREY BERGNER
To gain freedom from intellectual restraint has long been a theme of German thought. In his essay "What is Enlightenment?" Immanuel Kant defines enlightenment as "man's release from his self-incurred tutelage."¹ As he states later in the same essay, Kant was primarily concerned with self-incurred tutelage with respect to religious matters, as the arts and sciences seemed to him relatively more free.² Yet less than fifteen years later the romantic movement, as exemplified particularly in the work of Friedrich Schlegel, revolted against what was felt to be a limiting standard in the arts. Classical canons of objectivity were no longer thought able to subsume the wealth of human experience and creativity; a Kunst des Unendlichen was to replace a Kunst der Begrenzung.³ Romantic thought sought generally to complement the passive or analytic conception of being with the free play of the imagination as a creative factor. The motif of overcoming intellectual restriction continued throughout the early decades of the nineteenth century, manifesting itself in travels to broaden intellectual and artistic sensibilities, in frequent condemnation of narrow-minded and dogmatic German "philistinism," and perhaps most clearly in the apotheosis of Hegelian philosophy in Berlin.

This paper takes up the theme with respect to a figure of the 1840s, the so-called philosopher of the ego, Max Stirner. Stirner, born in Bayreuth in 1806, was one of a group of "young Hegelians" known as "Die Freien" which frequented Hippel's restaurant in Berlin in the early 1840s. In this atmosphere germinated the ideas which formed the basis of his principal work, Der Einzige und sein Eigentum (1844). In Der Einzige Stirner offers a critique of intellectual confinement and fixity which reaches almost claustrophobic proportions. Stirner's critique is no doubt important in considering nihilism and some elements of existentialist thought, as R. W. K. Paterson has recently demonstrated;⁴ yet the importance of Der Einzige lies no less in the fact that it characterizes a major shift in the grounds of the consideration of intellectual confinement, a shift which later became fully

* This paper is partially indebted to several suggestions of Walter Kaufmann.
¹ Immanuel Kant, "What is Enlightenment?" in Foundations of the Metaphysics of Morals, Lewis White Beck, trans. (Indianapolis: Bobbs-Merrill Company, Inc., 1959), p. 85.
² Ibid., p. 91.
³ Arthur O. Lovejoy, "Schiller and the Genesis of German Romanticism" in Essays in the History of Ideas (New York: G. P. Putnam's Sons, 1948), calls this thesis the "generating and generic element in the Romantic doctrine" (p. 220).
⁴ R. W. K. Paterson, The Nihilistic Egoist: Max Stirner (Oxford, 1971).
[523]
Lithospheric structure of the Yinchuan basin: results from a long-offset wide-angle reflection and refraction profile
Lin Jiyan, Liu Baojin, Zhang Xiankang, Duan Yonghong
(Geophysical Exploration Center, China Earthquake Administration, Zhengzhou 450002, China)
Acta Seismologica Sinica (地震学报), 2017, 39(5): 669–681. CLC classification: P313.2. Original in Chinese.

Abstract: By using P-wave travel-time data from three shots of the deep seismic sounding profile passing through the Yinchuan basin in 2014, and the travel-time inversion method Rayinvr, we obtain the crustal and upper-mantle structure of the study area. The results show that crustal thickness varies from 42 km to 48 km; the Moho is shallower at the eastern and western ends of the profile and deeper in the middle segment, with the deepest Moho beneath the Helan Mountains. P-wave velocity increases with depth in a positive gradient, but two distinct interfaces can be identified in the lithospheric mantle at depths of 90–103 km. Between these two interfaces the velocity does not increase appreciably with depth, a structure that does not conform to the global average model in which velocity increases with depth; it is therefore inferred that a velocity transition zone may exist between the lithosphere and the asthenosphere beneath the Yinchuan basin.

Keywords: Yinchuan basin; lithosphere; wide-angle reflection and refraction; northeastern margin of the Tibetan Plateau

The lithosphere is a concept of Earth layering proposed in 1914 by the American geologist Joseph Barrell on the basis of the pronounced gravity anomalies observed over the continental crust (Barrell, 1914). It is the rigid rocky outer shell floating on the asthenosphere, about 60–200 km thick, comprising the crust and the uppermost mantle. Determining the thickness and nature of the lithosphere remains a considerable challenge. Seismological receiver-function results indicate a lithospheric thickness of about 90–100 km beneath Precambrian shields and platforms, about 80 km in tectonically active regions, about 60–70 km beneath the oceans, and more than 200 km beneath stable cratonic regions (Chen, 2009; Chen et al, 2009; Rychert, Shearer, 2009). The North China Craton has become a research focus of the international geoscience community because of its tectonic diversity and the complexity of its destruction mechanisms; the differences in lithospheric structure among its eastern, western, and Taihang Mountain regions reveal the inhomogeneity of that destruction (Chen, 2009; Chen et al, 2009; Zhu et al, 2011, 2012).

The Yinchuan basin lies in the northern segment of the North-South Seismic Belt of China, between the western margin of the Ordos block in the western North China Craton and the Alxa block. Existing results show that the crust of the Yinchuan basin is about 43 km thick, the sedimentary cover about 5–7 km thick, and the average crystalline-crust P-wave velocity about 6.35 km/s; the lower crust is about 22 km thick with P-wave velocities of about 6.5–6.8 km/s (Yang et al, 2009; Jia et al, 2014; Tian et al, 2014). Teleseismic receiver-function results show large variations in lithospheric thickness across the eastern, central, and western North China Craton: the lithosphere thickens westward from 80–90 km beneath the Bohai Bay basin in the east to 200 km beneath the central Ordos block, while beneath the Cenozoic Yinchuan-Hetao and Fenwei rift zones surrounding the Ordos block it is thinned to about 80 km (Chen, 2009; Chen et al, 2009; Zhu et al, 2011, 2012). Because active-source wide-angle reflection and refraction surveying is limited by its explosive sources, phases from the uppermost mantle can be recorded only on long-offset profiles with a few large-charge shots. Wang et al (2014) processed the Wendeng-Alxa Left Banner wide-angle profile with ray tracing and found two groups of reflectors, at depths of 80 km and 160 km, in the lithospheric mantle at the western margin of the Ordos block. Liu Z et al (2015) applied travel-time inversion to the data of the same profile east of the Taihang Mountains and found a lithospheric thickness of about 75–80 km beneath the North China rift basin in the eastern craton, deepening to about 90 km beneath the Taihang uplift. These results provide a wealth of valuable material for analyzing the lithospheric structure of the different tectonic units of the North China Craton, but information on the lithosphere beneath the Yinchuan basin from active-source wide-angle profiling is still lacking. We therefore use the long-offset records of three shots of the wide-angle reflection and refraction profile across the Yinchuan basin, completed in 2014 by the Geophysical Exploration Center, China Earthquake Administration, to analyze the crustal and lithospheric structure of the Yinchuan basin at the western margin of the Ordos block and its surroundings. Using the travel-time inversion code Rayinvr we build a crustal and lithospheric velocity model across the Yinchuan basin and the Helan Mountains, and, by comparing velocity structure and interface properties in the crust and lithospheric mantle with other survey results, we investigate the structural characteristics beneath the Yinchuan basin and the adjacent blocks, the nature of interfaces within the lithospheric mantle, and the velocity distribution.

1. Tectonic setting and data

From west to east the profile crosses four tectonic units: the Alxa block, the Helan Mountains, the Yinchuan basin, and the Ordos block (Fig. 1). The Alxa block lies in the western part of the Sino-Korean paleoplate, with the rising Tibetan Plateau to its southwest and the stable Ordos block to its east (Zhang et al, 2007). The Yinchuan basin and the Helan Mountains are wedged between the Alxa block, the Ordos block, and the Tibetan Plateau; their formation is controlled by the interaction of these blocks, especially the sustained northeastward compression exerted by the Tibetan Plateau. The Yinchuan graben formed in the Oligocene and has undergone multiple episodes of extension and deformation; continued faulted subsidence has accumulated a Cenozoic fill 5000–8500 m thick (Deng et al, 1999; Huang et al, 2013). The Helan Mountains to the northwest stand in sharp contrast to the basin topography, with a vertical relief of more than 2000 m; they began to fold and rise in the late Mesozoic, experienced marked Cenozoic uplift and denudation, and remain tectonically active today (Darby, Ritts, 2002; Liu J H et al, 2010).
The M8.0 Pingluo-Yinchuan earthquake of January 3, 1739 occurred within the survey area; its epicentral intensity exceeded X, and its seismogenic structure was the Helan Mountains eastern piedmont fault between the Helan Mountains and the Yinchuan basin. As shown in Fig. 1, the major fault zones around the study area are, from west to east, the eastern Bayanwula Shan fault (F1), the northern Wula Shan fault (F2), the Helan Mountains eastern piedmont fault (F3), and the Yellow River fault (F4); the latter two control the western and eastern boundaries of the Yinchuan basin, and both are NNE-striking normal faults.

The wide-angle reflection and refraction profile starts at Wushen Banner, Ordos, Inner Mongolia, in the southeast, runs northwest across the western margin of the Ordos block, the Yinchuan basin, and the Helan Mountains into the Alxa block, and ends at Alatengaobao Town, Alxa Right Banner, Inner Mongolia, for a total length of about 500 km. The 300–350 km marks of the line lie in the Yinchuan basin and the 350–390 km marks in the central Helan Mountains. Twelve borehole explosions were fired along the line, with single-shot charges of 0.5–3.0 t at depths of about 60–80 m, and recorded by more than 300 PDS seismographs at a spacing of 2–3 km; across the Yinchuan basin and Helan Mountains segments the spacing was densified to an average of about 0.8 km. Shots SP1, SP2, and SP11 have the largest offsets and allow phases reflected from the lithospheric mantle to be identified, so the data of these three shots are used here to study the lithospheric structure beneath the Yinchuan basin.

The wide-angle record sections display phases from different depths in the crust and upper mantle. Fig. 2 shows the record sections of shots SP1, SP2, and SP11; the main phases are the upper-crustal refraction Pg, the reflection PmP from the Moho at the base of the crystalline crust, the uppermost-mantle refraction Pn, the diving wave PL1 in the lithospheric mantle, and the reflections PL1P and PL2P from interfaces within the lithospheric mantle.

Pg, generally regarded as the refraction from below the crystalline basement, is the first arrival on the record sections and is very clear and reliable. It is received at offsets from tens of kilometers to somewhat over one hundred kilometers, with an apparent velocity stabilizing at 6.0–6.3 km/s as offset increases (Jia, Zhang, 2008). In the study area it can be traced clearly to offsets of 110–120 km with apparent velocities of about 6.1–6.4 km/s, indicating a relatively stable upper-crustal structure. At SP11 the surface consists of Meso-Cenozoic sediments with near-surface velocities of about 3.7–4.5 km/s, higher than those at the western margin of the Ordos block to the east (2.9–3.3 km/s); near 100 km offset from SP11 the Pg travel times are locally advanced by the topographic uplift and high near-surface velocities of the Helan Mountains (Fig. 2c).

PmP, the reflection from the crust-mantle boundary and the diving wave within a strong velocity-gradient zone of the lowermost crust, is usually the strongest arrival on the whole section and reflects the first-order discontinuity character of the Moho. On the individual shots PmP is traced over offsets of 80–300 km with apparent velocities of about 6.3–7.0 km/s; at 150 km offset its reduced travel time is about 9.3–9.9 s and at 250 km about 10.9–11.4 s. The travel-time delay of PmP at SP11 indicates gradual westward crustal thickening along the line. The complexity of the PmP wave group reflects differences in the nature of the Moho between blocks: in some regions it behaves as a first-order velocity discontinuity, while in others it may be a transition zone of finite thickness, in which case PmP is no longer a simple wavelet but a complex wave train. Comparison of SP11 with SP1 and SP3 shows that the PmP wave train is more complex within the Alxa block than at the western margin of the Ordos block, indicating a difference in Moho character between the two blocks.

Pn, the head wave propagating along the uppermost mantle, or the diving wave within a weak velocity-gradient zone of the uppermost mantle, has apparent velocities of 7.9–8.1 km/s and tracing distances of 200–300 km. When the source excitation is effective and the spread long enough, a refracted phase with an apparent velocity slightly higher than that of Pn can be traced beyond a certain offset; it is a diving wave within the lithospheric mantle, penetrating deeper than Pn and reflecting the velocity-gradient character inside the lithospheric mantle, and is here labeled PL1. Its apparent velocity differs from that of Pn, reflecting differences in the velocity gradient at different depths of the lithospheric mantle. In the study area PL1 is traced over offsets of 280–450 km with apparent velocities of 8.2–8.3 km/s. Pn and PL1 are both refractions sampling the velocity gradient at different depths below the Moho, but given the difference in their apparent velocities the two groups are treated separately, which also simplifies the later inversion and model construction.

About 1 s behind the refraction PL1 a reflected phase PL1P can be traced, with amplitude stronger than PL1; it is the first reflected phase within the lithospheric mantle, with an apparent velocity of about 8.3–8.4 km/s. PL1P can be traced on all three record sections over 300–450 km and shows reciprocal travel times at the reciprocal points. It is clearest on SP1; an x²-t² calculation gives a corresponding reflector depth of about 93 km and an average overburden velocity of about 7.3 km/s. On the SP1 section, at offsets beyond 400 km, another phase can be traced, lagging PL1P by 0.5–1.0 s with a slightly smaller travel-time slope; it is here labeled PL2P.
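The x²-t² step used above to locate the PL1P reflector can be sketched as follows. Under the method's flat-layer assumption, the reflection travel time obeys t² = x²/v² + (2h/v)², so a straight-line fit of t² against x² yields the average overburden velocity v and reflector depth h. The picks below are synthetic, generated from the quoted values (v ≈ 7.3 km/s, h ≈ 93 km) rather than read from the SP1 record.

```python
import numpy as np

def x2_t2(offsets, times):
    """Fit t^2 = x^2 / v^2 + (2h / v)^2 and return (v, h).

    Assumes a horizontal reflector beneath a homogeneous overburden,
    which is exactly the flat-layer assumption of the x^2-t^2 method.
    """
    A = np.column_stack([offsets ** 2, np.ones_like(offsets)])
    (slope, intercept), *_ = np.linalg.lstsq(A, times ** 2, rcond=None)
    v = 1.0 / np.sqrt(slope)          # slope = 1 / v^2
    h = v * np.sqrt(intercept) / 2.0  # intercept = (2h / v)^2
    return v, h

# Synthetic wide-angle picks at the offsets where PL1P is traced (300-450 km).
v_true, h_true = 7.3, 93.0
x = np.linspace(300.0, 450.0, 16)               # offsets in km
t = np.sqrt(x ** 2 + 4 * h_true ** 2) / v_true  # exact reflection times in s

v_est, h_est = x2_t2(x, t)
```

With exact synthetic times the fit returns the input values; with real picks the scatter of the residuals around the fitted line gives a quick quality check on the flat-layer assumption, which is why the paper uses the result only as an initial model for the 2-D inversion.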
PL2P is the second reflected phase within the lithospheric mantle; an x²-t² calculation gives a corresponding reflector depth of about 112 km and an average overburden velocity of about 7.6 km/s. Because the x²-t² method assumes horizontally layered homogeneous media while the real subsurface is laterally heterogeneous, its results serve only as an initial model for building the two-dimensional model; the reflector depths and average overburden velocities quoted below are those of the two-dimensional travel-time inversion.

2. Method and inversion

The processing of active-source deep seismic sounding data is based mainly on ray theory, with model parameterizations using layered or block structures (Xu et al, 2004; Xu et al, 2006, 2010, 2014; Li et al, 2013; Yu et al, 2017). Here we use the travel-time inversion code Rayinvr to obtain the lithospheric structure model. The method inverts simultaneously for velocities and interface depths (Zelt, Ellis, 1988; Zelt, Smith, 1992), describes the model with as few parameters as possible as a set of trapezoidal blocks, allowing velocity jumps between vertically adjacent blocks and continuous velocity variation between laterally adjacent ones, and implements fast, efficient ray tracing and a damped least-squares inversion. It yields model parameters fairly objectively, reduces subjective bias, and reports the travel-time misfit residuals.

The initial model is based on previous results (Yang et al, 2009; Jia et al, 2014; Tian et al, 2014) and on one-dimensional travel-time fits for each shot (Fig. 3). Within the Ordos block the crystalline basement lies at about 3–4 km depth and the sedimentary cover has an average velocity of 4.0–4.5 km/s; within the Alxa block to the west the basement lies at about 2–4 km depth and the cover velocity is slightly higher, about 4.7–5.0 km/s. Because the Yinchuan basin and Helan Mountains segments have no Pg ray coverage, no sedimentary-cover structure could be derived there and earlier results are adopted (Yang et al, 2009). Crustal thickness increases from 40 km beneath the eastern Ordos block to 50 km beneath the Helan Mountains, and the Moho lies at about 45 km depth beneath the Alxa block. Fitting the travel times of the two mantle reflections PL1P and PL2P places reflectors at depths of 90 km and 110 km; the travel-time curve of PL2P (Fig. 2a) shows a slightly smaller apparent velocity than PL1P, hinting that a low-velocity or constant-velocity layer may exist in the 90–110 km depth range, where the velocity does not increase with the normal gradient.

Starting from this initial model, velocities and interfaces were jointly inverted from shallow to deep until the observed and calculated travel times fit best. The root-mean-square travel-time residuals of the fitted phase groups for the three shots are 0.126–0.253, with χ² values of 2.988–6.434; Fig. 4 shows the ray tracing for the final inversion results of each phase.

3. Results

The velocity structure along the profile obtained from the inversion (Fig. 5a) shows that the crystalline basement of the Ordos block lies at about 4.0–4.3 km depth, with an average sedimentary-cover velocity of 4.1–4.4 km/s, while the cover of the Alxa block varies strongly laterally, being about 1.8–4.5 km thick with an average velocity of 4.8–4.9 km/s. Along the line the Moho deepens from both ends toward the middle: it lies at about 42.5–45.0 km beneath the Ordos block and about 43.4–46.5 km beneath the Alxa block, reaching its greatest depth of 48.8 km beneath the Helan Mountains; the average velocity of the crystalline crust is about 6.35–6.39 km/s (Fig. 6). The crust beneath the Helan Mountains and the Yinchuan basin, between the Ordos and Alxa blocks, is markedly thickened and its average crystalline-crust velocity reduced, revealing that the interiors of the two rigid blocks deform little while their contact zone absorbs a large amount of the deformation, raising the topography, thickening the crust, and lowering the average crustal velocity.

The ray tracing of the PL1P and PL2P travel-time inversion (Fig. 4c) shows that the PL1P rays cover the 230–400 km marks of the line, whereas PL2P covers only the 300–350 km marks, so only the lithospheric mantle within this ray coverage is discussed. The inversion results (Fig. 5a) place the first mantle reflector, L1, at about 90.1–91.8 km depth and the second, L2, at about 103.0 km. Between the Moho and L1 the vertical velocity gradient decreases with depth: taking 60–65 km depth as the boundary, the upper gradient is about 0.01 (km/s)/km and the lower slightly smaller, about 0.007 (km/s)/km, with the velocity reaching 8.40–8.45 km/s at L1.
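A quick consistency check on these gradient values: in a linear gradient v(z) = v0 + k z, a diving ray with horizontal slowness p bottoms where v(z) = 1/p (Snell's law). Using a single illustrative gradient of 0.01 (km/s)/km below a Moho velocity of 8.0 km/s (a simplification of the two-gradient structure just described), a phase with apparent velocity 8.4 km/s turns roughly 40 km below the Moho, near the level of the L1 reflector for a crust close to 48 km thick.

```python
def turning_depth(v0, k, p):
    """Depth below the reference level at which a diving ray bottoms.

    In a linear gradient v(z) = v0 + k*z the ray with horizontal
    slowness p turns where v(z) = 1/p, i.e. z = (1/p - v0) / k.
    """
    return (1.0 / p - v0) / k

# Sub-Moho velocity 8.0 km/s, illustrative gradient 0.01 (km/s)/km:
# an arrival with apparent velocity 8.4 km/s bottoms ~40 km below the Moho.
z_turn = turning_depth(8.0, 0.01, 1.0 / 8.4)
```

The same relation explains why Pn (apparent velocity 7.9–8.1 km/s) samples only the uppermost mantle while PL1 (8.2–8.3 km/s) dives deeper, which is the basis on which the two refraction branches are separated in the inversion.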
Between L1 and L2 the model shows a constant-velocity zone about 13–14 km thick with a velocity of about 8.50 km/s and zero internal vertical gradient. Four velocity models for the gradient zone between the Moho and L1 were tested (Fig. 7a), with the results shown in Fig. 7b-e. When a constant-velocity layer overlies a gradient layer (model I), no refractions from below the Moho can be traced (Fig. 7b). With a single velocity-gradient zone (model II), Pn can be traced to about 200 km (Fig. 7c), but the apparent velocities of Pn differ visibly. When a gradient layer overlies a constant-velocity layer (model IV), Pn can again be traced to about 200 km (Fig. 7e), but neither model II nor model IV fits the travel times in the 200–300 km range. Model III, with two gradient zones of different strength, gives a good travel-time fit (Fig. 7d); on the basis of the difference in apparent velocity, the refractions observed beyond 200 km offset are labeled PL1. The model comparison indicates two velocity-gradient layers of different strength between the Moho and L1, the upper gradient slightly larger than the lower. Because there is no refraction constraint between L1 and L2, a range of velocities there can fit the reflection PL2P; however, the x²-t² results for PL1P and PL2P and the difference in their apparent velocities suggest that the velocity between L1 and L2 does not increase significantly, so fitting the travel times with a constant-velocity zone is reasonable.

4. Discussion and conclusions

The travel-time inversion of the crust and lithosphere beneath the Yinchuan basin shows that the crust thickens from both ends of the line toward the middle, with the Moho deepening to 48.8 km beneath the Helan Mountains, about 4–5 km deeper than beneath the Ordos block to the east; the average crustal velocity likewise shows a pattern of low in the middle and high at the eastern and western ends, with an average crystalline-crust velocity of about 6.35–6.39 km/s. Two fairly distinct reflectors exist in the lithospheric mantle between 90 km and 103 km depth; the velocity increases with a positive gradient from 8.0 km/s below the Moho to 8.40–8.45 km/s at L1, and between L1 and L2 it is inferred to remain near 8.50 km/s. Against the background of velocity generally increasing with depth within the lithospheric mantle, there may thus be a layer about 13 km thick in which the velocity does not increase with depth, implying a possible transition zone between the lithosphere and the asthenosphere beneath the Yinchuan basin.

The velocities between the Moho and L1 rest mainly on the uppermost-mantle refraction Pn and on the refraction PL1 above L1; because both are refracted waves, fairly accurate velocities and gradients can be obtained, and it is precisely the difference in the apparent velocities of the two phases that reveals the structure between the Moho and L1, with a slightly larger vertical gradient above and a slightly smaller one below. The smaller slope of PL2P relative to PL1P on the raw record sections (Fig. 2a) likewise reflects a layer between L1 and L2 in which the velocity does not increase with the normal gradient. The available data, however, cannot demonstrate that the velocity between L1 and L2 is lower than that above; a constant-velocity layer (8.50 km/s) is used here to construct this layer, and calculation shows that it fits the PL2P phase well. The ray coverage of PL1P and PL2P is limited, so the character of L1 and L2 reflects only the velocity structure of the lithospheric mantle beneath the Yinchuan basin and cannot be taken to imply similar structures beneath the Ordos and Alxa blocks.

In recent years, a series of studies of lithospheric thickness in the eastern and western North China Craton, motivated by the scientific problem of craton destruction (Chen, 2009; Chen et al, 2009; Wang et al, 2014; Duan et al, 2015; Feng et al, 2015; Liu Z et al, 2015), has shown strong lateral heterogeneity of the cratonic lithosphere: the eastern lithosphere is markedly thinned, to about 60–100 km, while beneath the Ordos basin in the west an old lithosphere about 200 km thick is preserved. The latest results suggest that the base of the lithosphere may be a velocity-gradient transition zone of finite thickness rather than the sharp velocity discontinuity previously assumed; comparisons of lithospheric structure beneath continents and oceans, stable cratonic regions, and regions where the craton has been destroyed reveal the regional variability and complexity of the lithosphere-asthenosphere boundary (O'Reilly, Griffin, 2010; Yuan, Romanowicz, 2010; Hamza, Vieira, 2012). Yuan and Romanowicz (2010), using anisotropy analysis, found a two-layer lithosphere beneath the North American continent and argued that the sharp interface near 150 km depth obtained earlier from receiver functions is most likely a velocity discontinuity within the lithosphere, not its base. Fuchs et al (2002), by finite-difference modeling of the high-frequency Pn and Sn waves observed on ultra-long-offset Russian nuclear-explosion record sections, derived the fine velocity structure from the Moho to 100 km depth in the upper mantle and noted several complexly stacked interfaces in the lithospheric mantle below the Moho. Thybo and Perchuć (1997), modeling many high-resolution seismic profiles across Eurasia and North America with synthetic seismograms, found a low-velocity zone near 100 km depth and attributed its formation to partial melting in the upper mantle.
Receiver-function results of Chen et al (2014) show that in the central and western North China Craton, where the lithosphere is thick (160–200 km), a discontinuity internal to the lithosphere exists at 80–100 km depth, appearing as an old, relatively low-velocity mechanically weak zone; this weak zone has been destroyed in the eastern craton but preserved in the west. A 135-km-long deep seismic reflection profile across the Yinchuan basin and the Helan Mountains (Liu B J et al, 2017; Fig. 5b) shows a group of strong upper-mantle reflections at a depth of about 90 km. The L1 interface observed here, at 90–92 km depth, is essentially consistent with the depth of that reflection group and of the internal discontinuity, indicating a fairly clear velocity discontinuity at 90 km depth beneath the Yinchuan basin. The roughly 13-km-thick layer with a velocity of about 8.5 km/s between L1 and L2 bears some similarity to the results of Thybo and Perchuć (1997) and Fuchs et al (2002), suggesting that the existence of this distinctive layer beneath the Yinchuan basin is quite plausible. Receiver-function studies of the North China Craton indicate a seismic velocity reversal at the base of the lithosphere (Chen et al, 2014); given the contrast in material properties between lithosphere and asthenosphere, the existence of such a reversal matters for determining lithospheric thickness. The L2 interface here lies at 103 km depth; whether the seismic velocity decreases below it, and whether L2 is the base of the lithosphere beneath the Yinchuan basin, still require further geophysical evidence.

References (Chinese-language items are cited with the English renderings given by the original journals)

Deng Q D, Cheng S P, Min W, Yang G Z, Ren D W. 1999. Discussion on Cenozoic tectonics and dynamics of Ordos block. Journal of Geomechanics, 5(3): 13–21 (in Chinese).
Duan Y H, Liu B J, Zhao J R, Liu B F, Zhang C K, Pan S Z, Lin J Y, Guo W B. 2015. 2-D P-wave velocity structure of lithosphere in the North China tectonic zone: Constraints from the Yancheng-Baotou deep seismic profile. Science China Earth Sciences, 58(9): 1577–1591.
Feng S Y, Liu B J, Ji J F, He Y J, Tan Y L, Li Y Q. 2015. The survey on fine lithospheric structure beneath Hohhot-Baotou basin by deep seismic reflection profile. Chinese Journal of Geophysics, 58(4): 1158–1168 (in Chinese).
Huang X F, Shi W, Li H Q, Chen L, Cen M. 2013. Cenozoic tectonic evolution of the Yinchuan basin: Constraints from the deformation of its boundary faults. Earth Science Frontiers, 20(4): 199–210 (in Chinese).
Jia S X, Zhang X K. 2008. Study on the crust phases of deep seismic sounding experiments and fine crust structures in the northeast margin of Tibetan Plateau. Chinese Journal of Geophysics, 51(5): 1431–1443 (in Chinese).
Li F, Xu T, Wu Z B, Zhang Z J, Teng J W. 2013. Segmentally iterative ray tracing in 3-D heterogeneous geological models. Chinese Journal of Geophysics, 56(10): 3514–3522 (in Chinese).
Liu B J, Feng S Y, Ji J F, Wang S J, Zhang J S, Yuan H K, Yang G J. 2017. Lithospheric structure and faulting characteristics of the Helan Mountains and Yinchuan basin: Results of deep seismic reflection profiling. Science China Earth Sciences, 60(3): 589–601.
Liu J H, Zhang P Z, Zheng D W, Wan J L, Wang W T, Du P, Lei Q Y. 2010. Pattern and timing of Late Cenozoic rapid exhumation and uplift of the Helan Mountain, China. Science China Earth Sciences, 53(3): 345–355.
Liu Z, Wang F Y, Zhang X K, Duan Y H, Yang Z X, Lin J Y. 2015. Seismic structure of the lithosphere beneath eastern North China Craton: Results from long distance deep seismic sounding. Chinese Journal of Geophysics, 58(4): 1145–1157 (in Chinese).
Wang S J, Wang F Y, Zhang J S, Jia S X, Zhang C K, Zhao J R, Liu B F. 2014. The P-wave velocity structure of the lithosphere of the North China Craton: Results from the Wendeng-Alxa Left Banner deep seismic sounding profile. Science China Earth Sciences, 57(9): 2053–2063.
Xu T, Xu G M, Gao E G, Zhu L B, Jiang X Y. 2004. Block modeling and shooting ray tracing in complex 3-D media. Chinese Journal of Geophysics, 47(6): 1118–1126 (in Chinese).
Yang Z X, Duan Y H, Wang F Y, Zhao J R, Pan S Z, Li L. 2009. Tomographic determination of the deep earthquake faults in Yinchuan basin by using three-dimensional seismic transmission technology. Chinese Journal of Geophysics, 52(8): 2026–2034 (in Chinese).
Yu G P, Xu T, Zhang M H, Bai Z M, Liu Y S, Wu C L, Teng J W. 2017. Nonlinear travel-time inversion for 3-D complex crustal velocity structure. Chinese Journal of Geophysics, 60(4): 1398–1410 (in Chinese).
Zhang J, Li J Y, Li Y F, Ma Z J. 2007. The Cenozoic deformation of the Alxa block in central Asia: Question on the northeastern extension of the Altyn Tagh fault in Cenozoic time. Acta Geologica Sinica, 81(11): 1481–1497 (in Chinese).
Zhu R X, Chen L, Wu F Y, Liu J L. 2011. Timing, scale and mechanism of the destruction of the North China Craton. Science China Earth Sciences, 54(6): 789–797.
Zhu R X, Xu Y G, Zhu G, Zhang H F, Xia Q K, Zheng T Y. 2012. Destruction of the North China Craton. Science China: Earth Sciences, 42(8): 1135–1159 (in Chinese).
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL
Int. J. Robust. Nonlinear Control (2012). Published online in Wiley Online Library. DOI: 10.1002/rnc.2810

Equivalence of sum of squares convex relaxations for quadratic distance problems

Andrea Garulli*,†, Alfio Masi and Antonio Vicino
Dipartimento di Ingegneria dell'Informazione, Università di Siena, Via Roma 56, 53100 Siena, Italy

SUMMARY
This paper deals with convex relaxations for quadratic distance problems, a class of optimization problems relevant to several important topics in the analysis and synthesis of robust control systems. Some classes of convex relaxations are investigated using the sum of squares paradigm for the representation of positive polynomials. The main contribution is to show that two different relaxations, based respectively on the Positivstellensatz and on properties of homogeneous polynomial forms, are equivalent. Relationships among the considered relaxations are discussed and numerical comparisons are presented, highlighting their degree of conservatism. Copyright © 2012 John Wiley & Sons, Ltd.

Received 26 November 2010; Revised 15 January 2012; Accepted 12 February 2012

KEY WORDS: optimization, convex relaxations, SOS polynomials, robust control

1. INTRODUCTION

The relevance of quadratic distance problems to the analysis and synthesis of control systems is well recognized in the literature on robust control [1], nonlinear systems stability analysis [2], and singularly perturbed systems [3]. Actually, a number of problems can be formulated as the computation of the minimum distance, in the ℓ₂ norm, from a point to an algebraic surface in a finite dimensional space. The stability margin for systems with parametric uncertainty [4], the estimation of the domain of attraction for nonlinear systems [5–7], D-stability of real matrices [8], the computation of the region of validity of optimal H∞ controllers for nonlinear systems [9], and the characterization of the frequency plots of an ellipsoidal family of rational functions [10], are
distinguished problems falling in this class. A quadratic distance problem in the variables θ ∈ Rⁿ is defined as

  min_θ θ'Qθ  s.t.  w(θ) = 0,    (1)

where Q ∈ R^{n×n} is a positive definite symmetric matrix and w(θ) is an n-variate polynomial of degree m. Because of the generality of the polynomial constraint, (1) is in general a nonconvex optimization problem. Different approaches and techniques have been devised in the literature to generate convex relaxations allowing for approximation of the solution of problem (1). In [11–13], relaxations exploiting appealing properties of homogeneous forms, that is, polynomials whose monomial terms all have the same degree, have been proposed. An alternative family of relaxations for distance problems, not necessarily quadratic, has been proposed on the basis of results from algebraic geometry, specifically the so-called Positivstellensatz [14–16]. With the use of these results, it is possible to construct several relaxations whose degree of conservatism depends on the specific choice of the structure of the polynomial multipliers involved in the relaxation and on the degree of such polynomials.

[*Correspondence to: Andrea Garulli, Dipartimento di Ingegneria dell'Informazione, Università di Siena, Via Roma 56, 53100 Siena, Italy. †E-mail: garulli@ing.unisi.it]

Ongoing work in this research area is motivated by two facts. First, considerable advances have been made in the last two decades in the solution of convex problems [17, 18]. Second, many relaxation classes, like those recalled earlier, reduce the original problem to testing positivity of a homogeneous form or a polynomial. In turn, positivity of polynomials can be tackled effectively through semidefinite programming problems (SDPs), a special class of convex problems. In fact, a sufficient condition for a polynomial to be positive semidefinite is that it can be expressed as a sum of squares (SOS) of polynomials [19, 20].
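As a concrete illustration of problem (1), the following sketch (our own toy example, not part of the paper) approximates the quadratic distance from the origin to an algebraic curve by sweeping points on the constraint surface; it is a brute-force substitute for the convex relaxations developed below, usable only in very low dimension.

```python
import math

# Toy instance of problem (1): minimize theta' Q theta subject to w(theta) = 0,
# with n = 2, Q = diag(1, 2) and w(theta) = theta_1^2 + theta_2^2 - 1 (the unit
# circle).  The exact minimum is 1, attained at theta = (+/-1, 0).
Q = [[1.0, 0.0], [0.0, 2.0]]

def quad(t1, t2):                      # theta' Q theta
    return Q[0][0]*t1*t1 + 2.0*Q[0][1]*t1*t2 + Q[1][1]*t2*t2

def sampled_min(steps=2000):
    """Crude approximation of (1): sweep points on the constraint surface."""
    best = float("inf")
    for i in range(-steps//2, steps//2 + 1):
        t1 = 2.0*i/steps               # t1 in [-1, 1]
        r = 1.0 - t1*t1                # t2^2 on the circle w(theta) = 0
        if r < 0.0:
            continue
        t2 = math.sqrt(r)
        best = min(best, quad(t1, t2), quad(t1, -t2))
    return best

print(round(sampled_min(), 6))   # 1.0
```

Such enumeration scales exponentially with n, which is precisely why SOS-based convex relaxations are of interest.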
Because it is known that testing whether a polynomial is an SOS is equivalent to solving a system of linear matrix inequalities (LMIs) [14, 21], it is possible to generate a number of convex relaxations for problems involving positivity of polynomials (see [22, 23] and references therein). Convex relaxations based on the theory of moments, which can be viewed as a dual approach to the SOS paradigm, have been widely investigated [24, 25]. An alternative approach based on the use of slack variables has recently been proposed in [26].

Although the equivalence between different parameterizations of SOS polynomials is now well recognized, the variety of available relaxations for specific problems, a wealth in itself, poses serious issues when choosing the right technique for the right problem, both in terms of computational burden and in terms of level of conservatism. This paper represents an effort in the direction of studying relationships between different relaxations for quadratic distance problems. The main result is to show that the relaxation based on homogeneous forms introduced in [11] is equivalent to a specific relaxation based on the Positivstellensatz involving a polynomial of the same degree. Moreover, examples are presented in which Positivstellensatz relaxations of higher degree indeed allow one to achieve less conservative results. Finally, numerical comparisons between the considered relaxations are reported for randomly generated quadratic distance problems.

The paper is organized as follows. Quadratic distance problems are formulated in Section 2. Section 3 presents basic notions about the SOS representation of positive polynomials and introduces the convex relaxations. The main contribution is provided in Section 4, where the equivalence between two different relaxations is established and numerical comparisons among all the considered relaxations are provided. Conclusions are drawn in Section 5.

2. CANONICAL QUADRATIC DISTANCE PROBLEMS

Let Q_f ∈ R^{n×n} be such that Q = Q_f' Q_f and introduce the new variables x = Q_f θ. Define
w̃(x) = w(Q_f⁻¹x) · w(−Q_f⁻¹x) and consider the new optimization problem

  min ‖x‖²  s.t.  w̃(x) = 0,    (2)

where the new constraint function w̃(x) can be decomposed as

  w̃(x) = Σ_{i=0}^{m} w_{2i}(x),    (3)

and the w_{2i}(x), 0 ≤ i ≤ m, are homogeneous polynomials of degree 2i. Problem (2) is called a Canonical Quadratic Distance Problem (CQDP). The following result states that problems (1) and (2) are equivalent [11].

Proposition 2.1
Problems (1) and (2) attain the same minimum value c_min. Moreover, if θ_min is a minimizer of (1), then x_min = Q_f θ_min is a minimizer of (2).

The following assumptions on CQDPs are made.

Assumption 2.1
The set {x ∈ Rⁿ : w̃(x) = 0} is not empty, and w̃(0) > 0.

Assumption 2.2
Let x_min be a minimizer of (2). For any δ > 0, there exist y, z ∈ Rⁿ such that ‖x_min − y‖ < δ, ‖x_min − z‖ < δ, and w̃(y)·w̃(z) < 0.

Assumption 2.1 is made without loss of generality and allows one to avoid trivial cases. Assumption 2.2 states that in any neighborhood of the optimal point x_min, the constraint function w̃(x) changes sign. This assumption is not restrictive in most optimization problems of practical interest. Notice, by the way, that if w̃(x) does not satisfy Assumption 2.1, it can be made to satisfy it by an arbitrarily small perturbation ε > 0.
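The canonical transformation and Proposition 2.1 can be checked numerically on a small instance. The following sketch is our own illustration (the instance, with m = 1 and Q = diag(2, 1), is an assumption of this revision, not taken from the paper):

```python
import math

# Toy instance: n = 2, m = 1,
#   problem (1):  min theta' Q theta  s.t.  w(theta) = theta_1 + theta_2 - 1 = 0,
# with Q = diag(2, 1), so Q_f = diag(sqrt(2), 1) satisfies Q = Q_f' Q_f.
qf1, qf2 = math.sqrt(2.0), 1.0

def w(t1, t2):
    return t1 + t2 - 1.0

def w_tilde(x1, x2):               # w~(x) = w(Qf^-1 x) * w(-Qf^-1 x)
    t1, t2 = x1/qf1, x2/qf2
    return w(t1, t2) * w(-t1, -t2)

# As in (3), w~ contains only even-degree terms: here
# w~(x) = 1 - (x1/sqrt(2) + x2)^2, i.e. w0 = 1 and a degree-2 form.
assert abs(w_tilde(0.3, -0.7) - (1.0 - (0.3/qf1 - 0.7)**2)) < 1e-12

# Proposition 2.1: problems (1) and (2) share the minimum c_min.  For this
# instance both minima equal 2/3 (Lagrange multipliers for (1); squared
# distance from the origin to the lines x1/sqrt(2) + x2 = +/-1 for (2)).
c_min_1 = 2*(1/3)**2 + (2/3)**2        # at theta_min = (1/3, 2/3)
c_min_2 = 1.0 / (1.0/2.0 + 1.0)        # squared distance to the line
assert abs(c_min_1 - c_min_2) < 1e-12
x_min = (qf1*(1/3), qf2*(2/3))         # x_min = Q_f theta_min
assert abs(x_min[0]**2 + x_min[1]**2 - c_min_1) < 1e-12
print("c_min =", round(c_min_1, 6))
```

The assertions confirm both the even-degree structure (3) and the equality of the two minima claimed by Proposition 2.1 for this instance.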
3. CONVEX RELAXATIONS

In order to introduce the convex relaxations for the CQDP (2)–(3), it is necessary to recall some basic material about the SOS representation of positive polynomials. The main idea is that a polynomial is positive semidefinite if it can be expressed as an SOS of suitable polynomials. Such a sufficient condition can in turn be expressed in terms of an LMI feasibility test, as explained in the following.

Let us consider a homogeneous polynomial f(x) in x ∈ Rⁿ of degree 2m (hereafter simply denoted as a form). We say that f(x) is positive (semidefinite) if f(x) ≥ 0 for all x. Such a form can always be expressed as

  f(x) = x^{m}' (F + L) x^{m},    (4)

where x^{m} ∈ R^d denotes a vector containing all monomials x₁^{i₁}···x_n^{i_n} for which i₁ + ⋯ + i_n = m; F ∈ R^{d×d} is a suitable symmetric matrix; and L is a matrix belonging to the linear subspace

  𝓛 = { L = L' ∈ R^{d×d} : x^{m}' L x^{m} = 0 ∀x ∈ Rⁿ }.    (5)

Let L(α) be a parameterization of the subspace 𝓛 (an algorithm for constructing such a parameterization is reported in [12, Appendix B]). Then, feasibility of the LMI constraint

  F + L(α) ≥ 0    (6)

implies that the form f(x) is positive. In the literature, feasibility of (6) is simply denoted by the statement "f(x) is SOS," meaning that the form f(x) can be expressed as an SOS. It is known that feasibility of (6) is only a sufficient condition for positivity of f(x); indeed, there exist forms that are positive but not SOS (see, e.g., [27, 28]). However, there are families of forms for which positivity is equivalent to being SOS. In particular, the SOS representation is a necessary and sufficient condition for positivity in the following cases: (i) quadratic forms; (ii) two-variate forms of any degree; and (iii) three-variate forms of degree four. In general, it can be shown that a positive form can be written as the ratio of two SOS forms. For a thorough treatment of these topics, see [29].

When addressing positivity of a generic polynomial of degree 2m (including all lower degree terms), one can still exploit the expression (4), the only difference being that the base vector x^{m} must contain all monomials in x of degree less than or equal to m. Then, by following the same reasoning, one can establish whether the considered polynomial is SOS via an LMI feasibility test.

Although different parameterizations have been proposed in the literature for SOS representations of positive polynomials, it is now widely recognized that such formulations are indeed equivalent (details can be found in several textbooks; see, e.g., [12, 30, 31]). Nevertheless, the relationships between different relaxations proposed in the literature for specific problems still require deeper investigation. The aim of this paper is to study the equivalence of some convex relaxations for CQDPs, which are reviewed next.
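The mechanism behind (4)–(6) can be made concrete on a small example of our own choosing (not from the paper): for the bivariate quartic f(x) = x₁⁴ + 3x₁²x₂² + x₂⁴, with base vector x^{2} = (x₁², x₁x₂, x₂²), the subspace 𝓛 is spanned by a single matrix, and sweeping the scalar parameter α plays the role of the LMI feasibility test (6).

```python
import numpy as np

# SOS test for the bivariate quartic form f(x) = x1^4 + 3 x1^2 x2^2 + x2^4
# (our toy example).  Monomial base vector: x^{2} = (x1^2, x1*x2, x2^2).
# One Gram matrix F reproducing f, with the x1^2 x2^2 coefficient placed
# entirely off-diagonal:
F = np.array([[1.0, 0.0, 1.5],
              [0.0, 0.0, 0.0],
              [1.5, 0.0, 1.0]])

# The subspace (5) is spanned here by the single matrix L1:
# x^{2}' L1 x^{2} = 2*(x1*x2)^2 - 2*x1^2*x2^2 = 0 for all x.
L1 = np.array([[ 0.0, 0.0, -1.0],
               [ 0.0, 2.0,  0.0],
               [-1.0, 0.0,  0.0]])

def is_sos(alpha):
    """LMI test (6) along the one-parameter family F + alpha*L1."""
    return np.linalg.eigvalsh(F + alpha*L1).min() >= -1e-9

# F alone is indefinite, but a suitable alpha certifies that f is SOS:
# for alpha = 1.5, F + 1.5*L1 = diag(1, 3, 1) >= 0, i.e.
# f(x) = (x1^2)^2 + 3*(x1*x2)^2 + (x2^2)^2.
print(is_sos(0.0), is_sos(1.5))   # False True
```

In general d grows combinatorially with n and m, and the feasibility test over the whole subspace 𝓛 is delegated to an SDP solver; the eigenvalue check above only sketches the principle.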
3.1. Relaxation based on forms

In [11], it has been shown that CQDPs can be solved via a one-parameter family of SOS-based positivity tests. In the following, the main features of this convex relaxation are summarized and a more general family of less conservative relaxations is introduced.

Let B_c denote the boundary of the ℓ₂ ball of radius √c, that is, B_c = {x : ‖x‖² = c}. By Assumption 2.1, for sufficiently small c, w̃(x) > 0 for all x ∈ B_c. Moreover, let c_min denote the solution of (2). Then, Assumption 2.2 guarantees that in any neighborhood of the intersection between B_{c_min} and the surface w̃(x) = 0, there exist points in which w̃(x) < 0. This suggests that the solution of the CQDP (2) is given by

  c_min = sup { c̄ ∈ R : w̃(x) > 0, ∀x ∈ B_c, ∀c ∈ (0, c̄) }.    (7)

This means that c_min can be found by solving a one-parameter family of non-negativity tests on the polynomial w̃(x), for x belonging to a given set B_c. Unfortunately, such tests generally amount to solving nonconvex optimization problems. An equivalent but more compact characterization of c_min involves non-negativity tests on forms. Indeed, let us introduce the function
w̃(·;·) : Rⁿ × R → R such that

  w̃(x; c) = Σ_{i=0}^{m} (‖x‖^{2(m−i)} / c^{m−i}) w_{2i}(x),    (8)

where the w_{2i}(x) are the forms in (3). It turns out that w̃(x; c) is a form in x of degree 2m for every fixed c ≠ 0. In [11], it has been proven that, for any fixed c, w̃(x) > 0, ∀x ∈ B_c, if and only if w̃(x; c) > 0, ∀x ∈ Rⁿ. In other words, non-negativity of the polynomial w̃(x) for x belonging to a given set B_c can be checked by testing non-negativity of a suitable form. From the aforementioned discussion, it can be concluded that the solution of problem (2) is given by

  c_min = sup { c̄ ∈ R : w̃(x; c) ≥ 0, ∀x ∈ Rⁿ, ∀c ∈ (0, c̄) }.    (9)

Now, the idea is to relax the inequality constraint in (9) to an SOS constraint, so that it can be formulated as an LMI. This allows one to obtain a lower bound on c_min via a one-parameter family of LMI feasibility tests. The convex relaxation for CQDPs introduced in [11] simply replaces the inequality constraint in (9) by an SOS constraint. Here, a more general family of relaxations is presented, denoted as the H_{2d} relaxation, which provides the following lower bound to c_min:

  ĉ(H_{2d}) = sup { c̄ ∈ R : ∃ s(x) SOS s.t. s(x) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d} is SOS, ∀c ∈ (0, c̄) },    (10)

where s(x) is an SOS form of degree 2d (this implies that s(x) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d} is a form of degree 2(m+d) in x). By the SOS constraint in (10), s(x) w̃(x; c) is SOS as well, and hence non-negative. Being s(x) SOS, it turns out that w̃(x; c) must be non-negative; therefore, according to (9), ĉ(H_{2d}) ≤ c_min. Notice that the term ‖x‖^{2(m+d)}/c^{m+d} is necessary in order to exclude the trivial solution s(x) = 0. The specific choice of this term is motivated by the development of the equivalence results in Section 4. The SOS condition in (10) turns out to be an LMI in the coefficients of the multiplier polynomial s(x), but in general it may be not convex in c. Hence, the lower bound ĉ(H_{2d}) can be computed by solving a one-parameter family of LMI feasibility tests. Observe that the key idea of the H_{2d} relaxation is to represent w̃(x; c) not directly as an SOS but rather as a ratio of SOS forms.
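The property behind (8), namely that the homogenization w̃(x; c) coincides with w̃(x) on the sphere B_c, can be spot-checked numerically. The instance below (m = 1, n = 2, with w₀ = 1 and w₂(x) = −(x₁/√2 + x₂)²) is our own toy example, not from the paper:

```python
import math, random

# Check of the homogenization (8): with w~(x) = w0 + w2(x), the form
# w~(x; c) = w0*||x||^2/c + w2(x) must coincide with w~(x) on B_c.
def w_tilde(x1, x2):
    return 1.0 - (x1/math.sqrt(2.0) + x2)**2

def w_tilde_c(x1, x2, c):
    return 1.0*(x1*x1 + x2*x2)/c - (x1/math.sqrt(2.0) + x2)**2

# Sample points on the sphere B_c = {x : ||x||^2 = c}; the agreement of the
# two functions there is what allows replacing the tests (7) by the tests
# on forms in (9).
random.seed(0)
c = 0.5
for _ in range(1000):
    phi = random.uniform(0.0, 2.0*math.pi)
    x1, x2 = math.sqrt(c)*math.cos(phi), math.sqrt(c)*math.sin(phi)
    assert abs(w_tilde(x1, x2) - w_tilde_c(x1, x2, c)) < 1e-12
print("w~(x) and w~(x; c) agree on B_c")
```

Off the sphere the two functions differ, which is why the parameter c must still be searched over in the H_{2d} relaxation.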
This guarantees that the relaxation is not conservative as d tends to infinity. Moreover, for fixed d, the only source of conservatism is the gap between positive forms and SOS forms of degree 2(m+d).

When d = 0, the H_{2d} relaxation boils down to the relaxation in [11], as shown by the next result.

Theorem 3.1
Let c₀ = sup { c̄ ∈ R : w̃(x; c) is SOS, ∀c ∈ (0, c̄) }. Then, ĉ(H₀) = c₀.

Proof
Let w̃(x; c) be SOS for some fixed c. Then, there exists a matrix W ≥ 0 such that w̃(x; c) = x^{m}' W x^{m}. Let D > 0 be a diagonal matrix such that ‖x‖^{2m} = x^{m}' D x^{m}. From (8), one can write W = (w₀/c^m) D + Δ, for a suitable symmetric matrix Δ. Then, one has

  s₀ w̃(x; c) − ‖x‖^{2m}/c^m = x^{m}' ( ((s₀w₀ − 1)/c^m) D + s₀Δ ) x^{m}.    (11)

By choosing s₀ such that s₀ > (w₀ + 1)/w₀ and recalling that w₀ > 0 (because of Assumption 2.1), one has

  ((s₀w₀ − 1)/c^m) D + s₀Δ ≥ (w₀/c^m) D + Δ = W ≥ 0,

and therefore, by (11), s₀ w̃(x; c) − ‖x‖^{2m}/c^m is SOS. Hence, ĉ(H₀) ≥ c₀. Conversely, assume that there exists s₀ > 0 such that s₀ w̃(x; c) − ‖x‖^{2m}/c^m is SOS for a fixed c. Then, w̃(x; c) is SOS and hence c₀ ≥ ĉ(H₀). This concludes the proof. ∎

3.2. Positivstellensatz relaxations

A family of convex relaxations that has been widely used in recent years for tackling a number of optimization problems relevant to control system analysis and design is based on the Positivstellensatz [14, 15], a fundamental result in algebraic geometry that provides a necessary and sufficient condition for infeasibility of a set of polynomial constraints. To illustrate the main idea, a simplified version is stated next.

Proposition 3.1
Let f(x), g(x), and h(x) be given polynomials in x ∈ Rⁿ. Then, the following conditions are equivalent:
(1) the set { x ∈ Rⁿ : f(x) ≥ 0, h(x) = 0, g(x) ≠ 0 } is empty;
(2) there exist two SOS polynomials s₀(x), s₁(x), a polynomial p(x), and a non-negative integer k such that

  s₀(x) + s₁(x) f(x) + p(x) h(x) + g(x)^{2k} = 0.
The result in Proposition 3.1 can be exploited in order to devise SOS-based convex relaxations of CQDPs. A first way to proceed, proposed in [14], is to notice that emptiness of the set

  E_c = { x ∈ Rⁿ : c − ‖x‖² ≥ 0, w̃(x) = 0, ‖x‖² − c ≠ 0 }    (12)

implies that c is a lower bound to the solution c_min of (2). By choosing, in Proposition 3.1, s₀(x) = 0, p(x) = (‖x‖² − c) p̄(x), and k = 1, it turns out that a sufficient condition for emptiness of the set E_c in (12) is that there exists a polynomial p̄(x) such that ‖x‖² − c + p̄(x) w̃(x) is SOS. Hence,

  ĉ(P_{2d}) = sup { c̄ ∈ R : ∃ p̄(x) s.t. ‖x‖² − c + p̄(x) w̃(x) is SOS, ∀c ∈ (0, c̄) }    (13)

is a lower bound of c_min. Problem (13) will be denoted as the P_{2d} relaxation of the CQDP, where p̄(x) is a polynomial of degree 2d. The conservatism of the P_{2d} relaxation depends not only on the specific choice of the Positivstellensatz multipliers but also on the degree 2d of the polynomial p̄(x) (notice that the degree must be even, the degree of w̃(x) being even). It is worth observing that (13) is a convex problem, whose constraint is an LMI in both c and the coefficients of the multiplier polynomial p̄(x). Problem (13) can be solved via a single SDP and does not require a search over the parameter c. It will be shown in Section 4 that the degree of p̄(x) which allows one to obtain a tight lower bound can be very high, thus requiring the solution of large SDPs.

An alternative convex relaxation for CQDPs, based on the Positivstellensatz and generally less conservative than the P_{2d} relaxation, can be obtained by following a reasoning similar to that adopted in Section 3.1 to derive the H_{2d} relaxation. Let us define the set

  Θ_c = { x ∈ Rⁿ : c − ‖x‖² = 0, −w̃(x) ≥ 0 }.

Then, by Assumption 2.2, one can write the solution of the CQDP (2) as

  c_min = sup { c̄ ∈ R : Θ_c is empty, ∀c ∈ (0, c̄) }.

By applying Proposition 3.1, with s₁(x) = t(x) and k = 0, a sufficient condition for the emptiness of the set Θ_c turns out to be the existence of a polynomial p(x) and an SOS polynomial t(x) such that p(x)(‖x‖² − c) + t(x) w̃(x) − 1 is SOS.
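The multiplier choices leading to the P_{2d} condition (13) can be verified symbolically: with f = c − ‖x‖², h = w̃, g = ‖x‖² − c, the identity of Proposition 3.1 must vanish. The following sketch (ours; it treats ‖x‖², w̃ and p̄ as abstract scalar symbols, which suffices for the algebra) checks this with sympy:

```python
import sympy as sp

# Algebraic check of the certificate behind (13): with the choices
# s0 = 0, k = 1, s1 = ||x||^2 - c + pbar*w~ and p = (||x||^2 - c)*pbar,
# the Positivstellensatz identity of Proposition 3.1 vanishes identically.
# r stands for ||x||^2, W for w~(x), P for pbar(x).
r, c, W, P = sp.symbols('r c W P')

f = c - r                 # f(x)  = c - ||x||^2
h = W                     # h(x)  = w~(x)
g = r - c                 # g(x)  = ||x||^2 - c
s1 = r - c + P*W          # the SOS certificate required in (13)
p = (r - c)*P

identity = s1*f + p*h + g**2      # s0 = 0, k = 1
assert sp.expand(identity) == 0
print("Positivstellensatz identity holds")
```

Thus, whenever the certificate ‖x‖² − c + p̄(x)w̃(x) is SOS, all the ingredients of Proposition 3.1(2) are available, so E_c is empty and c is a valid lower bound.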
Therefore, it is possible to devise a convex relaxation of degree 2m + 2d for the CQDP (2), which returns the following lower bound to c_min:

  ĉ(GP_{2d}) = sup { c̄ ∈ R : ∃ t(x) SOS, p(x) s.t. p(x)(‖x‖² − c) + t(x) w̃(x) − 1 is SOS, ∀c ∈ (0, c̄) },    (14)

where p(x) is a polynomial of degree 2m + 2d − 2 and t(x) an SOS polynomial of degree 2d, thus resulting in an SOS constraint of degree 2m + 2d. We will denote this approach as the GP_{2d} relaxation. The SOS condition in (14) is not convex in both c and the coefficients of p(x). Therefore, the computation of the lower bound provided by the GP_{2d} relaxation requires a search over c and the solution of a one-parameter family of LMI feasibility tests, as in the case of the H_{2d} relaxation. Other relaxations can be devised by selecting the degrees of the polynomials p(x) and t(x) in different ways; the choice made here returns the most general relaxation based on the Positivstellensatz for a fixed degree of the SOS constraint.

For each considered relaxation, Table I reports the size of the LMIs and the number of free polynomial coefficients involved. The notation

  σ(n, m) = (n + m − 1)! / ((n − 1)! m!)

denotes the number of coefficients in a form of degree m in n variables.

Table I. Dimensions of the LMIs and number of free polynomial coefficients in the considered relaxations.

  Relaxation | LMI size      | Polynomial coefficients
  H_{2d}     | σ(n, m+d)     | σ(n, 2d)
  P_{2d}     | σ(n+1, m+d)   | σ(n+1, 2d)
  GP_{2d}    | σ(n+1, m+d)   | σ(n+1, 2d) + σ(n+1, 2(m+d−1))

4. RELATIONSHIPS BETWEEN DIFFERENT RELAXATIONS

In this section, relationships between the SOS-based relaxations introduced in Section 3 are investigated. The first result states that the relaxations H_{2d} and GP_{2d} are equivalent.

4.1. Equivalence between relaxations H_{2d} and GP_{2d}

Theorem 4.1
Let c be fixed. Then, the following statements are equivalent:
(1) there exists a form s(x) of degree 2d such that

  s(x) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d} is SOS,  and  s(x) is SOS;    (15)

(2) there exist a polynomial p(x) of degree 2(m + d − 1) and a polynomial t(x) of degree 2d such that

  r(x) := t(x) w̃(x) + p(x)(‖x‖² − c) − 1 is SOS,  and  t(x) is SOS.    (16)

With the aim of proving Theorem 4.1, let us first introduce the following lemma.

Lemma 4.1
Given an SOS polynomial q(x) = Σ_{i=0}^{2m} q_i(x) of degree 2m, where the q_i(x) are forms of degree i, then, for any fixed c, the form

  q(x; c) = Σ_{i=0}^{m} (‖x‖^{2(m−i)}/c^{m−i}) q_{2i}(x)    (17)

is SOS in x.

Proof of Lemma 4.1
Being q(x) SOS, there exists a positive semidefinite symmetric matrix Q such that

  q(x) = z(x)' Q z(x),  with  z(x) = [1; x; x^{2}; …; x^{m}].    (18)

Let us partition the matrix Q into blocks Q_{i,j}, i, j = 0, …, m, conformably with the base vector z(x), so that, by grouping terms of the same (even) degree in (18), one gets the relationships

  q_{2m}(x) = x^{m}' Q_{m,m} x^{m},    (19)

  q_{2(m−i)}(x) = Σ_{j=0}^{2i} x^{m−j}' Q_{m−j, m−2i+j} x^{m−2i+j},  for i = 1, …, m.    (20)

By adding equation (19) and all equations (20), each multiplied by ‖x‖^{2i}/c^{i}, one obtains

  q(x; c) = q_{2m}(x) + Σ_{i=1}^{m} (‖x‖^{2i}/c^{i}) q_{2(m−i)}(x)
          = x^{m}' Q_{m,m} x^{m} + Σ_{i=1}^{m} (‖x‖^{2i}/c^{i}) Σ_{j=0}^{2i} x^{m−j}' Q_{m−j, m−2i+j} x^{m−2i+j}.    (21)

Let us assume, for the sake of exposition, that m is even (the case of m odd is analogous). Then, the right-hand side of (21) can be rewritten as

  q(x; c) = v_e(x)' Q_e v_e(x) + v_o(x)' Q_o v_o(x),    (22)

where

  v_e(x) = [ ‖x‖^m/c^{m/2}; (‖x‖^{m−2}/c^{(m−2)/2}) x^{2}; …; (‖x‖²/c) x^{m−2}; x^{m} ],
  v_o(x) = [ (‖x‖^{m−1}/c^{(m−1)/2}) x; (‖x‖^{m−3}/c^{(m−3)/2}) x^{3}; …; (‖x‖³/c^{3/2}) x^{m−3}; (‖x‖/c^{1/2}) x^{m−1} ],

and Q_e (respectively, Q_o) collects the blocks of even (respectively, odd) index:

  Q_e = [ Q_{0,0} Q_{0,2} … Q_{0,m} ; Q_{2,0} Q_{2,2} … Q_{2,m} ; … ; Q_{m,0} Q_{m,2} … Q_{m,m} ],
  Q_o = [ Q_{1,1} Q_{1,3} … Q_{1,m−1} ; Q_{3,1} Q_{3,3} … Q_{3,m−1} ; … ; Q_{m−1,1} Q_{m−1,3} … Q_{m−1,m−1} ].

Being Q
≥ 0, one also has Q_e ≥ 0 and Q_o ≥ 0. In fact, Q_e (Q_o) is obtained by selecting the rows and columns of Q corresponding to the blocks Q_{i,j} such that i + j is even (odd), i.e., it is a principal submatrix of Q. Then, from (22), it follows that q(x; c) is SOS. ∎

Therefore, Lemma 4.1 states that the "homogenization" (17) of the even degree forms of an SOS polynomial is SOS itself. Now, we are ready to prove Theorem 4.1.

Proof of Theorem 4.1
1) ⇒ 2). Let s(x) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d} be SOS, with s(x) SOS. Then, let t(x) = Σ_{i=0}^{2d} t_i(x), where the t_i(x) are forms of degree i, and let us choose

  p(x) = Σ_{i=0}^{m+d−1} p_{2i}(x),    (23)

where

  p_{2i}(x) = Σ_{k=0}^{i} t_{2k}(x) ( Σ_{j=0}^{i−k} ‖x‖^{2(i−j−k)} w_{2j}(x) / c^{i−j−k+1} ) − ‖x‖^{2i}/c^{i+1},    (24)

for i = 0, …, m + d − 1. Consider r(x) in (16) and write it as r(x) = Σ_{i=0}^{2(m+d)} r_i(x), where the r_i(x) are forms of degree i. By substituting (23)–(24) into (16), it turns out that the odd degree forms in r(x) are r_{2i+1}(x) = Σ_{k=0}^{i} t_{2k+1}(x) w_{2(i−k)}(x), for i = 0, …, m + d − 1, with w_{2j}(x) = 0 for j > m and t_{2k}(x) = 0 for k > d. The even degree forms in r(x) are

  r_0(x) = t₀w₀ − t₀w₀ + 1 − 1 = 0,

  r_{2i}(x) = Σ_{k=0}^{i} t_{2k}(x) w_{2(i−k)}(x) − ‖x‖² ‖x‖^{2(i−1)}/c^{i}
            + ‖x‖² Σ_{k=0}^{i−1} t_{2k}(x) ( Σ_{j=0}^{i−k−1} ‖x‖^{2(i−j−k−1)} w_{2j}(x) / c^{i−j−k} ) −
c Σ_{k=0}^{i} t_{2k}(x) ( Σ_{j=0}^{i−k} ‖x‖^{2(i−j−k)} w_{2j}(x) / c^{i−j−k+1} ) + ‖x‖^{2i}/c^{i},

for i = 1, 2, …, m + d − 1. Then, by noticing that

  c Σ_{k=0}^{i} t_{2k}(x) ( Σ_{j=0}^{i−k} ‖x‖^{2(i−j−k)} w_{2j}(x) / c^{i−j−k+1} ) − ‖x‖^{2i}/c^{i}
  = Σ_{k=0}^{i} t_{2k}(x) w_{2(i−k)}(x) + Σ_{k=0}^{i−1} t_{2k}(x) ( Σ_{j=0}^{i−k−1} ‖x‖^{2(i−j−k)} w_{2j}(x) / c^{i−j−k} ) − ‖x‖^{2i}/c^{i},

one gets, with this choice of the multiplier structure, that r_{2i}(x) = 0 for i = 0, 1, …, m + d − 1. Moreover, by choosing t_{2k+1}(x) = 0 for k = 0, …, d − 1, the polynomial r(x) boils down to

  r(x) = r_{2(m+d)}(x)
       = Σ_{k=0}^{m+d} t_{2k}(x) w_{2(m+d−k)}(x) − ‖x‖^{2(m+d)}/c^{m+d}
         + Σ_{k=0}^{m+d−1} t_{2k}(x) ( Σ_{j=0}^{m+d−k−1} ‖x‖^{2(m+d−j−k)} w_{2j}(x) / c^{m+d−j−k} )
       = Σ_{k=0}^{m+d} t_{2k}(x) ( Σ_{j=0}^{m+d−k} ‖x‖^{2(m+d−j−k)} w_{2j}(x) / c^{m+d−j−k} ) − ‖x‖^{2(m+d)}/c^{m+d}.

By recalling that w_{2j}(x) = 0 for j > m and t_{2k}(x) = 0 for k > d, one obtains

  r(x) = ( Σ_{k=0}^{d} (‖x‖^{2(d−k)}/c^{d−k}) t_{2k}(x) ) ( Σ_{j=0}^{m} (‖x‖^{2(m−j)}/c^{m−j}) w_{2j}(x) ) −
‖x‖^{2(m+d)}/c^{m+d} = t(x; c) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d},    (25)

where t(x; c) denotes the homogenization (17) of t(x). In particular, if we choose t(x) = s(x), which amounts to choosing t_{2d}(x) = s(x) and t_i(x) = 0 for all i < 2d, one also has t(x; c) = s(x), and by (25), r(x) = s(x) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d}, which is SOS by assumption. Hence, (16) holds.

2) ⇒ 1). Let us assume that r(x) is SOS for some polynomial p(x) of degree 2(m + d − 1) and some SOS polynomial t(x) of degree 2d, and let us write p(x) as

  p(x) = Σ_{i=0}^{m+d−1} p_{2i}(x) + Σ_{i=0}^{2(m+d−1)} δ_i(x),    (26)

where the forms p_{2i}(x) are the same as in (24) and the δ_i(x) are forms of degree i. By substituting (26) into r(x) and following the same reasoning as in the first part of the proof leading to (25), one has

  r(x) = t(x; c) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d} + Σ_{i=0}^{2(m+d−1)} δ_i(x)(‖x‖² − c) + Σ_{i=0}^{m+d−1} Σ_{k=0}^{i} t_{2k+1}(x) w_{2(i−k)}(x).

The terms of even degree in r(x) are

  r_{2(m+d)}(x) = t(x; c) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d} + δ_{2(m+d−1)}(x) ‖x‖²,    (27)

  r_{2(m+d−i)}(x) = δ_{2(m+d−1−i)}(x) ‖x‖² − c δ_{2(m+d−i)}(x),  for i = 1, …, m + d,    (28)

where δ_i(x) := 0 for i < 0 has been assumed to obtain a more compact notation. By adding equation (27) and each equation (28) multiplied by ‖x‖^{2i}/c^{i}, for i = 1, …, m + d, one obtains, according to (17),

  r(x; c) = t(x; c) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d} + δ_{2(m+d−1)}(x)‖x‖²
            + Σ_{i=1}^{m+d} (‖x‖^{2i}/c^{i}) { δ_{2(m+d−1−i)}(x)‖x‖² − c δ_{2(m+d−i)}(x) }
          = t(x; c) w̃(x; c) − ‖x‖^{2(m+d)}/c^{m+d}.

By Lemma 4.1, because r(x) is SOS, it follows that r(x; c) is SOS as well. Moreover, because t(x) is SOS by assumption, still by Lemma 4.1, t(x; c) is SOS. Then, by the aforementioned conclusions, it follows that if (16) is satisfied, there always exists an SOS multiplier s(x) = t(x; c) such that (15) is satisfied. ∎

Table II. ĉ(P_{2d}) as a function of 2d, computed for Example 4.1.

  2d         | 0           | 2      | 4      | 6      | 8      | 10
  ĉ(P_{2d})  | 2.0482·10⁻⁴ | 0.0061 | 0.0514 | 0.2207 | 0.6321 | 0.9980
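The homogenization mechanism of Lemma 4.1 can be spot-checked numerically on a concrete SOS polynomial. The instance below (n = 2, m = 2; our own example, not from the paper) verifies that the homogenization of q(x) = (1 + x₁² − x₂²)² + (x₁x₂)² admits the explicit SOS decomposition obtained by homogenizing the squares themselves, as in the proof:

```python
import random

# Lemma 4.1 check for q(x) = (1 + x1^2 - x2^2)^2 + (x1*x2)^2, whose even
# parts are q0 = 1, q2 = 2*(x1^2 - x2^2), q4 = (x1^2 - x2^2)^2 + x1^2*x2^2.
def q_hom(x1, x2, c):
    """The homogenization (17): ||x||^4/c^2 * q0 + ||x||^2/c * q2 + q4."""
    n2 = x1*x1 + x2*x2
    return n2*n2/c**2 + (n2/c)*2.0*(x1*x1 - x2*x2) \
           + (x1*x1 - x2*x2)**2 + (x1*x2)**2

random.seed(1)
c = 0.7
for _ in range(1000):
    x1, x2 = random.uniform(-2, 2), random.uniform(-2, 2)
    # Explicit SOS decomposition of q(x; c): replace the constant 1 in each
    # square of q(x) by ||x||^2/c, exactly as the block argument (22) does.
    sos = ((x1*x1 + x2*x2)/c + x1*x1 - x2*x2)**2 + (x1*x2)**2
    assert abs(q_hom(x1, x2, c) - sos) < 1e-9
    assert q_hom(x1, x2, c) >= 0.0
print("q(x; c) is SOS for the sampled points")
```

Sampling of course only certifies non-negativity at the tested points; the lemma guarantees the SOS property in full generality through the principal submatrices Q_e and Q_o.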
Because, according to Theorem 4.1, the relaxations GP_{2d} and H_{2d} are equivalent, one can conclude that the H_{2d} relaxation is to be preferred from a computational point of view, because it requires the solution of LMIs of smaller size: the SOS constraint is imposed on a form, and hence lower degree monomials are not involved.

4.2. Conservatism of the H_{2d} relaxation

Theorem 4.1 has shown that the conservatism of the two relaxations is the same, that is, ĉ(H_{2d}) = ĉ(GP_{2d}). With the aim of investigating where the conservatism appears, let us notice that, by (10), the H₀ relaxation is conservative only if there exist values of c ≤ c_min such that the form s₀ w̃(x; c) − ‖x‖^{2m}/c^m is positive but not SOS for any positive scalar s₀. In the following, an example of a CQDP constructed so as to satisfy this condition is presented.

Example 4.1
Consider a CQDP with m = 3 and n = 3, such that w̃(x) = 1 − ‖x‖⁶ + 10 f_ns(x), where f_ns(x) is a non-negative form which is not SOS [11]. Let us consider the form

  f_ns(x) = x₁⁴x₂² + x₂⁴x₃² + x₃⁴x₁² − 3 x₁²x₂²x₃²,

introduced in [27]. It can be observed that w̃(x) > 0 for all x such that ‖x‖² < 1, whereas w̃(x) = 0 for x₁ = x₂ = x₃ = 1/√3. Hence, c_min = 1. However, because w̃(x; c) = ‖x‖⁶(1/c³ − 1) + 10 f_ns(x), it follows that

  ( s₀ w̃(x; c) − ‖x‖⁶/c³ ) |_{c=1} = 10 s₀ f_ns(x) − ‖x‖⁶,    (29)

which is not an SOS for any s₀ > 0, because f_ns(x) is not an SOS. Therefore, the H₀ relaxation will return ĉ(H₀) < 1. As expected, by solving the LMIs, one gets ĉ(H₀) = 0.9604. On the other hand, by increasing the degree 2d of the multipliers just up to 2, one obtains the tight estimate ĉ(H₂) = 0.9999. Table II reports the values of ĉ(P_{2d}) as a function of the degree 2d of the polynomial multiplier p̄(x). It can be noticed that the P_{2d} relaxation is significantly more conservative than H_{2d} for the same value of d.

4.3. Numerical comparisons

In order to compare the performance of the proposed relaxations, families of CQDPs have been randomly generated for different values of n and m.
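One possible way to generate such random families is sketched below. This is our own guess at the experimental setup (the paper only states the constraints imposed on the coefficients), for the case n = m = 2: draw the coefficients of w̃(x) = w₀ + w₂(x) + w₄(x) uniformly in [−1, 1], then adjust them so that w̃(0) > 0 and w̃(x) = 0 at x = (1, 1)'.

```python
import random

# Hypothetical generator of random CQDPs with n = m = 2 fulfilling
# Assumption 2.1 (our sketch, not the authors' code).
def random_cqdp(rng):
    while True:
        w0 = rng.uniform(0.0, 1.0)                        # w~(0) = w0 > 0
        w2 = [rng.uniform(-1.0, 1.0) for _ in range(3)]   # x1^2, x1*x2, x2^2
        w4 = [rng.uniform(-1.0, 1.0) for _ in range(5)]   # x1^4 ... x2^4
        # adjust the x1^2*x2^2 coefficient so that w~((1,1)) = 0
        w4[2] -= w0 + sum(w2) + sum(w4)
        if abs(w4[2]) <= 1.0:          # keep all coefficients within [-1, 1]
            return w0, w2, w4

def w_tilde(w0, w2, w4, x1, x2):
    return (w0
            + w2[0]*x1*x1 + w2[1]*x1*x2 + w2[2]*x2*x2
            + w4[0]*x1**4 + w4[1]*x1**3*x2 + w4[2]*x1*x1*x2*x2
            + w4[3]*x1*x2**3 + w4[4]*x2**4)

rng = random.Random(0)
w0, w2, w4 = random_cqdp(rng)
assert w_tilde(w0, w2, w4, 0.0, 0.0) > 0.0          # Assumption 2.1
assert abs(w_tilde(w0, w2, w4, 1.0, 1.0)) < 1e-12   # zero at x = (1, 1)'
print("example CQDP generated")
```

Anchoring a zero of w̃ at a known point guarantees that the feasible set is non-empty and gives an upper bound on c_min (here c_min ≤ 2) against which the relaxations can be compared.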
First, a set of N = 10000 CQDPs with n = m = 2 has been considered. The coefficients of the polynomial w̃(x) in (3) have been randomly generated from a uniform distribution in [−1, 1]. Assumption 2.1 has been fulfilled by choosing the coefficients so that w̃(0) > 0 and w̃(x) = 0 for x = (1, 1)'. Being n = 2, it is known that, for every c, w̃(x; c) ≥ 0 if and only if w̃(x; c) is an SOS. Therefore, in this case the relaxations H₀ and GP₀ always return the optimal value c_min. Conversely, the P_{2d} relaxation is in general conservative, and its conservatism reduces as the degree 2d of p̄(x) is increased. Table III reports the number N_P of CQDPs for which ĉ(P_{2d}) is strictly less than c_min, for different values of 2d. The average relative error over these N_P CQDPs,

  ε_P = (1/N_P) Σ_{i=1}^{N_P} (c_min,i − ĉ_i(P_{2d})) / c_min,i,

is also provided, where ĉ_i(P_{2d}) is the lower bound obtained for the i-th CQDP. Even if this family of problems is relatively simple, it can be observed that the P_{2d} relaxation can require a very high