1983.4-Steady-state stability of an HVDC system using frequency-response methods


Industrial Structure and Economic Stability (foreign literature translation)


Undergraduate thesis, translation of foreign literature
Title: Industrial structure and economic stability
Source: Applied Economics Letters
Author: Sherrill Shaffer

Original text:

Motivated by prior results predicting contrasting linkages between industrial structure and economic stability, we present exploratory empirical evidence on this important issue. Consistent with the turnover hypothesis, we find that employment grew more steadily where business establishments in all sectors were larger, suggesting an offsetting benefit to the first-moment costs of establishment size identified by previous research. Consistent with the job-matching hypothesis, we find that employment grew more steadily where more establishments per capita operated in all sectors. Similar but less consistent results were also found regarding the stability of income growth.

I. Introduction and Background

The fundamental importance of economic performance has spawned an extensive literature on the empirical determinants of economic growth (see Rajan and Zingales, 1998; Levine et al., 2000 for prominent examples). One recent discovery is that communities or local regions tend to experience more rapid growth of income or employment where businesses are smaller (Shaffer, 2002, 2006a, b). A separate strand of research, meanwhile, has demonstrated the importance of studying the volatility of growth rates rather than merely their means (Ramey and Ramey, 1995; Kurz, 2004; Ismihan, 2005; Bekaert et al., 2006). It is relevant in this regard that several studies (Davis and Haltiwanger, 1992; Rob, 1995; Davis et al., 1996) have found that jobs at smaller firms tend to be less permanent than at larger firms; while Ilmakunnas et al. (2005) have suggested that such turnover may actually be one reason for faster productivity growth, an unexplored implication is that the volatility of employment and income may be higher where businesses are smaller (the 'turnover hypothesis').

Conversely, heterogeneity among decision-makers due to human fallibility can result in greater variability of economic performance in more centralized decision processes (Sah, 1991; Sah and Stiglitz, 1991; Almeida and Ferreira, 2002), possibly suggesting that a local economy may exhibit greater stability where the average firms are smaller or more numerous. A similar outcome is predicted to the extent that (a) centralized decision processes are less successful at managing conflict and (b) distributional conflicts impair efficient adjustments to exogenous shocks (Rodrik, 1999; Almeida and Ferreira, 2002). In view of these contrasting considerations, an important empirical question is whether establishment size and other measures of industrial structure may be systematically associated with second-moment measures of economic performance.

This article accordingly presents preliminary evidence of linkages between selected measures of industrial structure and the volatility of local income and employment growth rates. We use three measures of industrial structure, each distinguished by broad sector. As in Shaffer (2002), we measure average establishment size by number of employees and, alternatively, by dollars of value added, shipments or receipts. We also look at establishments per capita, motivated by two opposing considerations. Finding a new job should be easier in a market with more employers in a given sector, leading to more stable levels or growth rates of income and employment (the 'job-matching hypothesis').
But, ceteris paribus, smaller firms will tend to permit the coexistence of larger numbers of firms, in which case the documented employment turnover at smaller firms (discussed above) will tend to offset the stabilizing benefit of more numerous firms. Each of our measures of industrial structure is compiled separately for the manufacturing, wholesale, retail and service sectors. We relate these measures of structure to two second-moment measures of economic performance, the SD of annual real per capita income growth rates and the SD of the annual growth rate of total establishment employment. The results contribute to two separate strands of the literature, on empirical covariates with growth volatility and on macroeconomic effects of establishment size and other measures of industrial structure. We find for all sectors that larger establishments and more establishments per capita are associated with more stable employment growth rates, consistent with the turnover and job-matching hypotheses. The same linkages are found for some but not all of the sectors with regard to the volatility of growth rates in real per capita income.

The next section introduces the empirical model and the sample. Section III presents the results, while Section IV concludes.

II. The Model and Sample

We embed our key variables in a standard linear empirical growth equation,

Y = α + βs + γx + ε    (1)

where Y is a measure of economic performance as discussed above, α is an estimated intercept term, s is a measure of industrial structure as discussed above, x is a vector of control variables discussed below, β and γ are estimated coefficients and ε is a stochastic error term. As in Bekaert et al. (2006), our SDs of economic growth rates are measured over a 5-year period. As in prior studies of economic growth, the control vector includes the natural logarithm of population, the density of population per square mile of land area, a measure of education and initial median household income.

Population is a measure of market size as in Cetorelli and Gambera (2001). It is also similar to the total labor force variable used in Ó hUallacháin and Satterthwaite (1992) and can be interpreted as measuring urbanization economies. If job-matching occurs more quickly or efficiently in more populous areas, then the estimated coefficient on this variable should be negative (another implication of the job-matching hypothesis introduced above).

Population density has been found significantly related to several first-moment measures of economic performance, possibly due to scale effects or to superior matching between firms and workers in denser markets (Ciccone and Hall, 1996; Andersson et al., 2004; Carlino et al., 2007; Strumsky et al., 2005). If these benefits influence economic stability, as one might expect, then the estimated coefficient on population density should be negative in our model.

Education is measured as the percentage of population aged 25 and over who have completed high school, and reflects the accumulated level of human capital. It is similar to measures used in previous studies of economic growth such as Rajan and Zingales (1998), Levine et al. (2000) and Cetorelli and Gambera (2001), with theoretical linkages to average growth rates explored by Teles (2005). A related measure of education has been used in at least one study of growth volatility (Bekaert et al., 2006). In addition, education was found to be positively associated with sectoral employment growth in US metropolitan areas by Ó hUallacháin and Satterthwaite (1992).
If education contributes to economic stability as well, its estimated coefficient in our regressions should be negative.

Initial median household income can reflect a convergence effect in first-moment measures of economic performance, as noted by Barro and Sala-i-Martin (1992). The same logic would not apply to second-moment measures of economic performance, rendering the sign and significance of the estimated coefficient on this variable an open empirical question. However, a similar variable (initial per capita GDP) has been used in at least one prior study of growth volatility (Bekaert et al., 2006).

Table 1 summarizes the data. Our sample comprises more than 2000 US nonmetropolitan counties, measuring economic performance during 1991–1995 and structure measures as of 1987. Metropolitan areas are excluded because their borders are generally not coterminous with individual counties, confounding measurement problems for variables drawn from county-level data. Though not reported in the table, the pairwise correlation coefficients between average establishment size and establishments per capita ranged between -0.21 and 0.18, and were just -0.07 in the wholesale sector and 0.07 in the manufacturing sector. These small and highly variable correlations indicate that the two categories of structure variables reflect statistically separate dimensions of industrial structure in our sample.

The selection of performance data as of several years following the structure data helps to reduce the likelihood of reverse causality although, common to all empirical growth studies, causality cannot be definitively established. This lag structure also minimizes the potential for endogeneity bias, as the regressors are predetermined.

III. Results

Table 2 reports the regression estimates for the SD of real per capita income growth rates, while Table 3 reports estimates for the SD of employment growth rates. As industrial structure is measured in three ways for each of four sectors, each table reports 12 regressions. Due to missing or zero establishment data for a few counties in each sector, the various regressions utilize slightly differing numbers of observations, as reported in the tables. The significance levels are computed from SEs corrected for heteroscedasticity.

In Table 2, the various structure measures are statistically significant in eight of the 12 regressions. Per capita income is found to grow more steadily where manufacturing, retail and wholesale establishments employ more workers, or where wholesale establishments are larger as measured by value of annual shipments. These findings are consistent with the turnover hypothesis discussed above. Conversely, steadier growth is seen where wholesale establishments employ fewer workers or in counties with fewer wholesale establishments per capita or more manufacturing establishments per capita. The contrasting estimates for the wholesale sector may reflect nonlinearities in the underlying relationships, an issue beyond the scope of this study. These findings indicate that industrial structure is systematically related to the volatility of income growth rates, in ways that are largely consistent but that do vary somewhat across major sectors.

The fit of the equations in Table 2 is modest, with adjusted R² ranging from 0.3 to nearly 0.5. The control variables are mostly significant at the 0.01 level.
Initial median household income is never significant, in contrast to the significance of initial per capita GDP found in several cross-country regressions on consumption growth volatility by Bekaert et al. (2006). Income is found to grow more steadily in more populous counties. Population density and education are found to contribute to higher volatility of per capita income growth rates, though education loses its significance when controlling for wholesale establishments per capita. The sign of the education variable contrasts with that found in cross-country specifications by Bekaert et al. (2006).

In Table 3, the various structure measures are again statistically significant in eight of the 12 regressions. Employment is seen to grow more steadily where there are more establishments per capita in any of the four sectors, consistent with the job-matching hypothesis. In addition, steadier employment growth occurred where wholesale or service establishments employ more workers, or where retail or service establishments have a larger average dollar volume of business, consistent with the turnover hypothesis. These latter findings indicate that the benefits of job longevity at larger firms more than offset the volatility theoretically predicted by more centralized decision-making, as discussed above.

The fit of these equations is less tight than in Table 2, with adjusted R² around 0.1. Population density is never significant but the other control variables are consistently significant. Employment grows more steadily in more populous counties, where education is higher, or where initial median household income is lower. The population result is consistent with the job-matching hypothesis discussed above, and the education and initial income results match those found in cross-country regressions for consumption growth volatility by Bekaert et al. (2006). As in Table 2, education loses its significance when controlling for wholesale establishments per capita.

IV. Conclusion

Motivated by prior results suggesting contrasting and important predictions relating industrial structure to economic stability, we have explored empirical linkages between multiple sector-specific structure measures and the volatility of subsequent growth rates in real per capita income and employment. Significant associations were found in two-thirds of the specifications, controlling for a standard set of environmental variables. Overall, the turnover hypothesis was found to dominate the contrasting predictions of inefficient centralized decision-making, as employment and income both tended to grow more steadily where establishments were larger. These findings suggest an offsetting benefit to the first-moment costs of establishment size identified by Shaffer (2002, 2006a, b).

The job-matching hypothesis is consistent with steadier growth where more establishments per capita operate, as found for all sectors in the employment regressions and for the manufacturing sector in the income regressions. Contrasting results were found in the income regressions for wholesale establishments measured by employment size and for wholesale establishments per capita.

Although our multi-year lag structure reduces the likelihood of reverse causality between industrial structure and growth volatility, causality cannot be definitively established, a shortcoming common to all empirical growth studies.
Consequently, policy inferences should be drawn with caution, pending additional research on the precise mechanisms underlying the observed patterns.
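To make the estimation strategy of Section II concrete, the following minimal sketch shows how equation (1) could be fitted on county-level data with heteroscedasticity-robust standard errors, as described in Section III. It is not taken from the original study: the file name, column names, and the choice of manufacturing establishment size as the structure measure s are illustrative assumptions.

```python
# Illustrative sketch only: fits equation (1), SD of growth ~ structure + controls,
# with heteroscedasticity-robust (HC1) standard errors.
# File name and column names are hypothetical, not from the original study.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-level panel: one row per county and year.
df = pd.read_csv("county_panel.csv")

# Volatility of growth over a 5-year window (1991-1995), as in Bekaert et al. (2006):
# SD of annual real per capita income growth rates.
growth = df[["fips", "year", "real_pc_income"]].sort_values(["fips", "year"])
growth["g"] = growth.groupby("fips")["real_pc_income"].pct_change()
vol = (growth[growth["year"].between(1991, 1995)]
       .groupby("fips")["g"].std()
       .rename("sd_income_growth"))

# Merge with 1987 structure measure and controls (predetermined regressors).
x = df[df["year"] == 1987].set_index("fips")[[
    "avg_estab_size_mfg",   # s: average establishment size (employees), manufacturing
    "log_population",       # natural log of population
    "pop_density",          # persons per square mile
    "pct_hs_completed",     # % of population 25+ who completed high school
    "median_hh_income",     # initial median household income
]]
data = x.join(vol).dropna()

# Equation (1): Y = alpha + beta*s + gamma'x + eps, with HC1 robust SEs.
model = smf.ols(
    "sd_income_growth ~ avg_estab_size_mfg + log_population + pop_density"
    " + pct_hs_completed + median_hh_income",
    data=data,
).fit(cov_type="HC1")
print(model.summary())
```

Substituting a different structure column, or the SD of employment growth as the dependent variable, would correspond to the other specifications reported in Tables 2 and 3.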

Theoretical Study of Parametric Rolling of Ships in Head Seas

… it also causes changes in the ship's stability characteristics in waves. Among these, the roll restoring moment, the most important parameter for ensuring ship safety, is affected most severely. Traditional numerical estimation and prediction of the individual ship motion modes is carried out within the framework of linear ship-motion theory, which applies to small-amplitude motions and cannot handle the strongly nonlinear behaviour exhibited when a ship undergoes large-amplitude motions. The existence of parametric rolling reveals hidden risks to the safety of passengers and cargo at sea and to sailing efficiency; its severity falls in a blind spot of safety prediction based on frequency-domain amplitude concepts, so correctly predicting the range of occurrence and the degree of danger of parametric rolling is imperative.

1.1.2 Research objectives

Based on the above considerations, the research in this thesis aims to develop a numerical model that can correctly describe this kind of nonlinear ship motion and, on the basis of correctly simulating parametric rolling behaviour, to understand the mechanism by which parametric rolling forms, analyse the process by which it occurs, and study the factors that influence it, and finally to produce a complete set of programs for the simulation and analysis of parametric rolling, providing a convenient and user-friendly platform for engineering research on the parametric rolling problem.

1.2 Progress in research on parametric rolling

The Capital Theory Approach to Sustainability


BOSTON UNIVERSITY
Center for Energy and Environmental Studies
Working Papers Series, Number 9501, September 1995

THE CAPITAL THEORY APPROACH TO SUSTAINABILITY: A CRITICAL APPRAISAL
by David Stern
675 Commonwealth Avenue, Boston MA 02215
Tel: (617) 353-3083, Fax: (617) 353-5986
E-Mail: dstern@ WWW: /sterncv.html

The Capital Theory Approach to Sustainability: A Critical Appraisal
David I. Stern, Boston University, November 1995
Center for Energy and Environmental Studies, Boston University, 675 Commonwealth Avenue, Boston MA 02215, USA. Tel: (617) 353 3083, Fax: (617) 353 5986, E-Mail: dstern@

Summary

This paper examines critically some recent developments in the sustainability debate. The large number of definitions of sustainability proposed in the 1980's have been refined into a smaller number of positions on the relevant questions in the 1990's. The most prominent of these are based on the idea of maintaining a capital stock. I call this the capital theory approach (CTA). Though these concepts are beginning to inform policies there are a number of difficulties in applying this approach in a theoretically valid manner and a number of critics of the use of the CTA as a guide to policy. First, I examine the internal difficulties with the CTA and continue to review criticisms from outside the neoclassical normative framework. The accounting approach obscures the underlying assumptions used and gives undue authoritativeness to the results. No account is taken of the uncertainty involved in sustainability analysis of any sort. In addition, by focusing on a representative consumer and using market (or contingent market) valuations of environmental resources, the approach (in common with most normative neoclassical economics) does not take into account distributional issues or accommodate alternative views on environmental values. Finally, I examine alternative approaches to sustainability analysis and policy making. These approaches accept the open-ended and multi-dimensional nature of sustainability and explicitly open up to political debate the questions that are at risk of being hidden inside the black-box of seemingly objective accounting.

I. INTRODUCTION

The Brundtland Report (WCED, 1987) proposed that sustainable development is "development that meets the needs of the present generation while letting future generations meet their own needs". Economists initially had some difficulty with this concept, some dismissing it1 and others proliferating a vast number of alternative definitions and policy prescriptions (see surveys by: Pezzey, 1989; Pearce et al., 1989; Rees, 1990; Lélé, 1991).

In recent years, economists have made some progress in articulating their conception of sustainability. The large number of definitions of sustainability proposed in the 1980's have been refined into a smaller number of positions on the relevant questions in the 1990's. There is agreement that sustainability implies that certain indicators of welfare or development are non-declining over the very long term, that is development is sustained (Pezzey, 1989). Sustainable development is a process of change in an economy that does not violate such a sustainability criterion. Beyond this, the dominant views are based on the idea of maintaining a capital stock as a prerequisite for sustainable development.
Within this school of thought there are opposing camps which disagree on the empirical question of the degree to which various capital stocks can be substituted for each other, though there has been little actual empirical research on this question.There is a consensus among a large number of economists that the CTA is a useful means of addressing sustainability issues.2 Capital theory concepts are beginning to inform policy, as in the case of the UN recommendations on environmental accounting and the US response to them (Beardsley, 1994; Carson et al., 1994; Steer and Lutz, 1993). There are, however, a growing number of critics who question whether this is a useful way to address sustainability (eg. Norgaard, 1991; Amir, 1992; Common and Perrings, 1992; Karshenas, 1994; Pezzey, 1994; Common and Norton, 1994; Faucheux et al., 1994; Common, 1995). The literature on sustainable development and sustainability is vast and continually expanding. There are also a large number ofsurveys of that literature (eg. Tisdell, 1988; Pearce et al., 1989; Rees, 1990; Simonis, 1990; Lélé, 1991; Costanza and Daly, 1992; Pezzey, 1992; Toman et al., 1994). I do not intend to survey this literature.The aim of this paper is to present a critique of the capital theory approach to sustainability (CTA henceforth) as a basis for policy. This critique both outlines the difficulties in using and applying the CTA from a viewpoint internal to neoclassical economics and problems with this approach from a viewpoint external to neoclassical economics. I also suggest some alternative approaches to sustainability relevant analysis and policy. The neoclasscial sustainability literature generally ignores the international dimensions of the sustainability problem. I also ignore this dimension in this paper.The paper is structured as follows. In the second section, I discuss the background to the emergence of the capital theory approach, while the third section briefly outlines the basic features of the approach. The fourth section examines the limitations of the CTA from within the viewpoint of neoclassical economics and the debate between proponents of "weak sustainability" and "strong sustainability". The following sections examine the drawbacks of this paradigm from a viewpoint external to neoclassical economics and discuss alternative methods of analysis and decision-making for sustainability. The concluding section summarizes the principal points.SHIFTING DEBATE: EMERGENCE OF THE CAPITAL THEORY II. THEAPPROACHMuch of the literature on sustainable development published in the 1980's was vague (see Lélé, 1991; Rees, 1990; Simonis, 1990). There was a general lack of precision and agreement in defining sustainability, and outlining appropriate sustainability policies. This confusion stemmed in part from an imprecise demarcation between ends and means. By "ends" I mean the definition ofsustainability ie. what is to be sustained, while "means" are the methods to achieve sustainability or necessary and/or sufficient conditions that must be met in order to do the same. As the goal of policy must be a subjective choice, considerable debate surrounded and continues to surround the definition of sustainability (eg. Tisdell, 1988). 
As there is considerable scientific uncertainty regarding sustainability possibilities, considerable debate continues to surround policies to achieve any given goal.Sharachchandra Lélé (1991) stated that "sustainable development is in real danger of becoming a cliché like appropriate technology - a fashionable phrase that everyone pays homage to but nobody cares to define" (607). Lélé pointed out that different authors and speakers meant very different things by sustainability, and that even UNEP's and WCED's definitions of sustainable development were vague, and confused ends with means. Neither provided any scientific examination of whether their proposed policies would lead to increased sustainability. "Where the sustainable development movement has faltered is in its inability to develop a set of concepts, criteria and policies that are coherent or consistent - both externally (with physical and social reality) and internally (with each other)." (613). Judith Rees (1990) expressed extreme skepticism concerning both sustainable development and its proponents. “It is easy to see why the notion of sustainable development has become so popular ... No longer does environmental protection mean sacrifice and confrontation with dominant materialist values” (435). She also argued that sustainable development was just so much political rhetoric. A UNEP report stated: "The ratio of words to action is weighted too heavily towards the former" (quoted in Simonis, 1990, 35). In the early days of the sustainability debate, vagueness about the meaning of sustainability was advantageous in attracting the largest constituency possible, but in the longer run, greater clarity is essential for sustaining concern.In the 1990's many people have put forward much more precisely articulated definitions of sustainable development, conditions and policies required to achieve sustainability, and criteria toassess whether development is sustainable. This has coincided with a shift from a largely politically-driven dialogue to a more theory-driven dialogue. With this has come a clearer understanding of what kinds of policies would be required to move towards alternative sustainability goals, and what the limits of our knowledge are. There is a stronger awareness of the distinction between ends and means. Most, but not all (eg. Amir, 1992), analysts agree that sustainable development is a meaningful concept but that the claims of the Brundtland Report (WCED, 1987) that growth just had to change direction were far too simplistic.There is a general consensus, especially among economists, on the principal definition of sustainable development used by David Pearce et al. (1989, 1991): Non-declining average human welfare over time (Mäler, 1991; Pezzey, 1992; Toman et al., 1994).3 This definition of sustainability implies a departure from the strict principle of maximizing net present value in traditional cost benefit analysis (Pezzey, 1989), but otherwise it does not imply a large departure from conventional economics. John Pezzey (1989, 1994) suggests a rule of maximizing net present value subject to the sustainability constraint of non-declining mean welfare. It encompasses many but not all definitions of sustainability. 
For example, it excludes a definition of sustainability based on maintaining a set of ecosystem functions, which seems to be implied by the Holling-sustainability criterion (Common and Perrings, 1992; Holling, 1973, 1986) or on maintaining given stocks of natural assets irrespective of any contribution to human welfare. A sustainable ecosystem might not be an undesirable goal but it could be too strict a criterion for the goal of maintaining human welfare (Karshenas, 1994) and could in some circumstances lead to declining human welfare. Not all ecosystem functions and certainly not all natural assets may be necessary for human welfare. Some aspects of the natural world such as smallpox bacteria may be absolutely detrimental to people. In the context of the primary Pearce et al. definition, the Holling-sustainability criterion is a means not an end.The advantage of formalizing the concept of sustainability is that this renders it amenable to analysis by economic theory (eg. Barbier and Markandya, 1991; Victor, 1991; Common and Perrings, 1992; Pezzey, 1989, 1994; Asheim, 1994) and to quantitative investigations (eg. Repetto et al., 1989; Pearce and Atkinson, 1993; Proops and Atkinson, 1993; Stern, 1995). Given the above formal definition of sustainability, many economists have examined what the necessary or sufficient conditions for the achievement of sustainability might be. Out of this activity has come the CTA described in the next section. The great attractiveness of this new approach is that it suggests relatively simple rules to ensure sustainability and relatively simple indicators of sustainability. This situation has seemingly cleared away the vagueness that previously attended discussions of sustainability and prompted relatively fast action by governments and international organizations to embrace specific goals and programs aimed at achieving this notion of the necessary conditions for sustainability.III. THE ESSENCE OF THE CAPITAL THEORY APPROACHThe origins of the CTA are in the literature on economic growth and exhaustible resources that flourished in the 1970s, exemplified by the special issue of the Review of Economic Studies published in 1974 (Heal, 1974). Robert Solow (1986) built on this earlier literature and the work of John Hartwick (1977, 1978a, 1978b) to formalize the constant capital rule. In these early models there was a single non-renewable resource and a stock of manufactured capital goods. A production function produced a single output, which could be used for either consumption or investment using the two inputs. The elasticity of substitution between the two inputs was one which implied that natural resources were essential but that the average product of resources could rise without bound given sufficient manufactured capital.The models relate to the notion of sustainability as non-declining welfare through the assumption that welfare is a monotonically increasing function of consumption (eg. Mäler, 1991). The path ofconsumption over time (and therefore of the capital stock) in these model economies depends on the intertemporal optimization rule. Under the Rawlsian maxi-min condition consumption must be constant. No net saving is permissible as this is regarded as an unjust burden on the present generation. Under the Ramsey utilitarian approach with zero discounting consumption can increase without bound (Solow, 1974). 
Here the present generation may be forced to accept a subsistence standard of living if this can benefit the future generations however richer they might be. Paths that maximize net present value with positive discount rates typically peak and then decline so that they are not sustainable (Pezzey, 1994). Pezzey (1989) suggested a hybrid version which maximizes net present value subject to an intertemporal constraint that utility be non-declining. In this case utility will first increase until it reaches a maximum sustainable level. This has attracted consensus as the general optimizing criterion for sustainable development. Geir Asheim (1991) derives this condition more formally.

Under the assumption that the elasticity of substitution is one, non-declining consumption depends on the maintenance of the aggregate capital stock ie. conventional capital plus natural resources, used to produce consumption (and investment) goods (Solow, 1986). Aggregate capital, W_t, and the change in aggregate capital are defined by:

W_t = p_{Kt} K_t + p_{Rt} S_t    (1)

ΔW_t = p_{Kt} ΔK_t − p_{Rt} R_t    (2)

where S is the stock of non-renewable resources and R the use per period, K is the manufactured capital stock, and the p_i are the relevant prices. In the absence of depreciation of manufactured capital, maintenance of the capital stock implies investment of the rents from the depletion of the natural resource in manufactured capital - the Hartwick rule (Hartwick 1977, 1978a, 1978b). Income is defined using the Hicksian notion (Hicks, 1946) that income is the maximum consumption in a period consistent with the maintenance of wealth. Sustainable income is, therefore, the maximum consumption in a period consistent with the maintenance of aggregate capital intact (Weitzman, 1976; Mäler, 1991) and for a flow of income to be sustainable, the stock of capital needs to be constant or increasing over time (Solow, 1986).

The initial work can be extended in various ways. The definition of capital that satisfies these conditions can be extended to include a number of categories of "capital": natural, manufactured, human, and institutional.4 Natural capital is a term used by many authors (it seems Smith (1977) was the first) for the aggregate of natural resource stocks that produce inputs of services or commodities for the economy. Some of the components of natural capital may be renewable resources. Manufactured capital refers to the standard neoclassical definition of "a factor of production produced by the economic system" (Pearce, 1992). Human capital also follows the standard definition. Institutional capital includes the institutions and knowledge necessary for the organization and reproduction of the economic system. It includes the ethical or moral capital referred to by Fred Hirsch (1976) and the cultural capital referred to by Fikret Berkes and Carl Folke (1992). For convenience I give the name 'artificial capital' to the latter three categories jointly. None of these concepts is unproblematic and natural capital is perhaps the most problematic. Technical change and population growth can also be accommodated (see Solow, 1986).

Empirical implementation of the CTA tends to focus on measurement of sustainable income (eg. El Serafy, 1989; Repetto, 1989) or net capital accumulation (eg.
Pearce and Atkinson, 1993; Proops and Atkinson, 1993) rather than on direct estimation of the capital stock.5 The theoretical models that underpin the CTA typically assume a Cobb-Douglas production function with constant returns to scale, no population growth, and no technological change. Any indices of net capital accumulation which attempt to make even a first approximation to reality must take these variables into account. None of the recent empirical studies does so. For example, David Pearce and Giles Atkinson (1993) present data from eighteen countries on savings and depreciation of natural andmanufactured capital as a proportion of GNP. They demonstrate that only eight countries had non-declining stocks of total capital, measured at market prices, and thus passed a weak sustainability criterion of a constant aggregate capital stock, but their methodology ignores population growth, returns to scale or technological change.IV.INTERNAL APPRAISAL OF THE CAPITAL THEORY APPROACHIn this section, I take as given the basic assumptions and rationale of neoclassical economics and highlight some of the technical problems that are encountered in using the CTA as an operational guide to policy. From a neoclassical standpoint these might be seen as difficulties in the positive theory that may lead to difficulties in the normative theory of sustainability policy. In the following section, I take as given solutions to these technical difficulties and examine some of the problems inherent in the normative neoclassical approach to sustainability.a.Limits to Substitution in Production and "Strong Sustainability"Capital theorists are divided among proponents of weak sustainability and strong sustainability. This terminology is confusing as it suggests that the various writers have differing ideas of what sustainability is.6 In fact they agree on that issue, but differ on what is the minimum set of necessary conditions for achieving sustainability. The criterion that distinguishes the categories is the degree of substitutability believed to be possible between natural and artificial capital.7The weak sustainability viewpoint follows from the early literature and holds that the relevant capital stock is an aggregate stock of artificial and natural capital. Weak sustainability assumes that the elasticity of substitution between natural capital and artificial capital is one and therefore that there are no natural resources that contribute to human welfare that cannot be asymptotically replaced by other forms of capital. Reductions in natural capital may be offset by increases inartificial capital. It is sometimes implied that this might be not only a necessary condition but also a sufficient condition for achieving sustainability (eg. Solow, 1986, 1993).Proponents of the strong sustainability viewpoint such as Robert Costanza and Herman Daly (1992) argue that though this is a necessary condition for sustainability it cannot possibly be a sufficient condition. Instead, a minimum necessary condition is that separate stocks of aggregate natural capital and aggregate artificial capital must be maintained. Costanza and Daly (1992) state: "It is important for operational purposes to define sustainable development in terms of constant or nondeclining total natural capital, rather than in terms of nondeclining utility" (39).8 Other analysts such as members of the "London School" hold views between these two extremes (see Victor, 1991). 
They argue that though it is possible to substitute between natural and artificial capital there are certain stocks of "critical natural capital" for which no substitutes exist. A necessary condition for sustainability is that these individual stocks must be maintained in addition to the general aggregate capital stock.The weak sustainability condition violates the Second Law of Thermodynamics, as a minimum quantity of energy is required to transform matter into economically useful products (Hall et al., 1986) and energy cannot be produced inside the economic system.9 It also violates the First Law on the grounds of mass balance (Pezzey, 1994). Also ecological principles concerning the importance of diversity in system resilience (Common and Perrings, 1992) imply that minimum quantities of a large number of different capital stocks (eg. species) are required to maintain life support services. The London School view and strong sustainability accommodate these facts by assuming that there are lower bounds on the stocks of natural capital required to support the economy, in terms of the supply of materials and energy, and in terms of the assimilative capacity of the environment, and that certain categories of critical natural capital cannot be replaced by other forms of capital.Beyond this recognition it is an empirical question as to how far artificial capital can substitute for natural capital. There has been little work on this at scales relevant to sustainability. However, the econometric evidence from studies of manufacturing industry suggest on the whole that energy and capital are complements (Berndt and Wood, 1979).In some ways the concept of maintaining a constant stock of aggregate natural capital is even more bizarre than maintaining a non-declining stock of total capital. It seems more reasonable to suggest that artificial capital might replace some of the functions of natural capital than to suggest that in general various natural resources may be substitutes for each other. How can oil reserves substitute for clean air, or iron deposits for topsoil? Recognizing this, some of the strong sustainability proponents have dropped the idea of maintaining an aggregate natural capital stock as proposed by Costanza and Daly (1992) and instead argue that minimum stocks of all natural resources should be maintained (Faucheux and O'Connor, 1995). However, this can no longer really be considered an example of the CTA. Instead it is an approach that depends on the concept of safe minimum standards or the precautionary principle. The essence of the CTA is that some aggregation of resources using monetary valuations is proposed as an indicator for sustainability.The types of models which admit an index of aggregate capital, whether aggregate natural capital or aggregate total capital, is very limited. Construction of aggregate indices or subindices of inputs depend on the production function being weakly separable in those subgroups (Berndt and Christensen, 1973). For example it is only possible to construct an index of aggregate natural capital if the marginal rate of substitution between two forms of natural capital is independent of the quantities of labor or capital employed. This seems an unlikely proposition as the exploitation of many natural resources is impractical without large capital stocks. 
For example, in the production of caught fish, the marginal rate of substitution, and under perfect competition the price ratio, between stocks of fresh water fish and marine fish should be independent of the number of fishingboats available. This is clearly not the case. People are not likely to put a high value on the stock of deep sea fish when they do not have boats to catch them with.If substitution is limited, technological progress might reduce the quantity of natural resource inputs required per unit of output. However, there are arguments that indicate that technical progress itself is bounded (see Pezzey, 1994; Stern, 1994). One of these (Pezzey, 1994) is that, just as in the case of substitution, ultimately the laws of thermodynamics limit the minimization of resource inputs per unit output. Stern (1994) argues that unknown useful knowledge is itself a nonrenewable resource. Technological progress is the extraction of this knowledge from the environment and the investment of resources in this activity will eventually be subject to diminishing returns.Limits to substitution in production might be thought of in a much broader way to include nonlinearities and threshold effects. This view is sometimes described as the "ecological" viewpoint on sustainability (Common and Perrings, 1992; Common, 1995) or as the importance of maintaining the "resilience" of ecological systems rather than any specific stocks or species. This approach derives largely from the work of Holling (1973, 1986). In this view ecosystems are locally stable in the presence of small shocks or perturbations but may be irreversibly altered by large shocks. Structural changes in ecosystems such as those that come about through human interference and particularly simplification, may make these systems more susceptible to losing resilience and being permanently degraded. There is clearly some substitutability between species or inorganic elements in the role of maintaining ecosystem productivity, however, beyond a certain point this substitutability may suddenly fail to hold true. This approach also asks us to look at development paths as much less linear and predictable than is implied in the CTA literature.All things considered, what emerges is a quite different approach to sustainability policy. It is probable that substitution between natural and artificial capital is limited, as is ultimately technicalchange. Additionally the joint economy-ecosystem system may be subject to nonlinear dynamics. This implies that eventually the economy must approach a steady state where the volume of physical economic activity is dependent on the maximum economic and sustainable yield of renewable resources or face decline ie. profit (or utility) maximizing use of renewable resources subject to the sustainability constraint. As in Herman Daly's vision (Daly, 1977) qualitative change in the nature of economic output is still possible. Sustainability policy would require not just maintaining some stocks of renewable resources but also working to reduce "threats to sustainability" (Common, 1995) that might cause the system to pass over a threshold and reduce long-run productivity.The notion of Hicksian income originally applied to an individual price-taking firm (Faucheux and O'Connor, 1995). However, even here it is not apparent that the myopic policy of maintaining capital intact from year to year is the best or only way to ensure the sustainability of profits into the future. 
If a competing firm makes an innovation that renders the firm's capital stock obsolete, the latter's income may drop to zero. This is despite it previously following a policy of maintaining its capital intact. The firm's income measured up to this point is clearly seen to be unsustainable. In fact its policy has been shown to be irrelevant to long-run sustainability. In the real world firms will carry out activities that may not contribute to the year to year maintenance of capital and will reduce short-run profits such as research and development and attempts to gain market share.10 These activities make the firm more resilient against future shocks and hence enhance sustainability.b.Prices for AggregationSupposing that the necessary separability conditions are met so that aggregation of a capital stock is possible, analysts still have to obtain an appropriate set of prices so that the value of the capital stock is a sustainability relevant value. The CTA is more or less tautological if we use the "right" prices. However, these correct "sustainability prices" are unknown and unknowable. A number of。
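As a purely illustrative complement to equations (1) and (2) in Section III, the toy sketch below (not part of the original paper) simulates a two-asset economy under constant, hypothetical prices and checks the constant-capital (Hartwick) rule numerically: reinvesting each period's resource rent in manufactured capital keeps aggregate capital W_t unchanged. All quantities and prices are made-up numbers.

```python
# Illustrative sketch of the constant-capital (Hartwick) rule from equations (1)-(2).
# Prices are held constant and all numbers are hypothetical; this is toy bookkeeping,
# not a calibration of any real economy.

p_K, p_R = 1.0, 5.0        # prices of manufactured capital and the resource
K, S = 100.0, 50.0         # initial manufactured capital and resource stock
R_use = 2.0                # resource extracted and used each period

def aggregate_capital(K, S):
    """Equation (1): W_t = p_K * K_t + p_R * S_t."""
    return p_K * K + p_R * S

W0 = aggregate_capital(K, S)
for t in range(10):
    rent = p_R * R_use      # rent from this period's depletion
    S -= R_use              # resource stock falls by the amount used
    K += rent / p_K         # Hartwick rule: reinvest the rent in manufactured capital
    # With this rule, equation (2) gives dW = p_K*dK - p_R*R = 0 each period.
    print(f"t={t+1:2d}  K={K:7.2f}  S={S:6.2f}  W={aggregate_capital(K, S):7.2f}")

assert abs(aggregate_capital(K, S) - W0) < 1e-9  # aggregate capital is maintained
```

Omitting the reinvestment step makes W_t fall by p_R·R each period, which is the kind of decline that savings-based weak sustainability indicators such as Pearce and Atkinson's (1993) are designed to flag.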

Taxes, Loans, Credit and Debts in the 15th Century


Economics World, ISSN 2328-7144 April 2014, Vol. 2, No. 4, 281-289 Taxes, Loans, Credit and Debts in the 15th Century Towns ofMoravia: A Case Study of Olomouc and Brno *Roman ZaoralCharles University, Prague, Czech RepublicThe paper explores urban public finance in the late medieval towns on the example of two largest cities inMoravia—Olomouc and Brno. Its purpose is to define similarities and differences between them, to express changeswhich have taken place in the course of the 15th century, and to distinguish financial administration and types ofinvestments in the towns situated in the Eastern part of the Holy Roman Empire from those in the West. The primarysources (municipal books, charters, and Jewish registers) are analyzed using quantitative and comparative methodsand the concept of the 15th century financial crisis is reconsidered. The analysis proved that each town within theEmpire paid a fixed percentage of the total tax sum of central direct taxation through a system of repartition so thateach tax increase caused an ever growing pressure on its finances. New taxes collected in Brno and Olomouc after1454 were not proportional to the economic power and population of both cities and gave preferential benefit toOlomouc. At the same time the importance of urban middle classes as tax-farmers started to grow. They increasinglygained influence on the financial and fiscal regime, both through political emancipation as well as by serving asfinancial officials. The Jewish registers document a general lack of money in the 1430s and 1440s which played intohands of the Jewish usurers. Accounting records from the 1480s and 1490s, to the contrary, give evidence of thegrowth of loans, debts and credit enterprise. The restructuring of urban elites, caused by financial crises and socialconflicts, was centered round the wish for a more efficient management of urban financial resources and moreintensive control rights. It was a common feature of towns in the West just as in the East of the Empire. On the otherside, the tax basis in the West was rather created by indirect taxes, while direct taxes prevailed in the East. Tradeactivities played more important role in the West, whereas rich burghers in the East rather invested into land estates.From the research also emerged that the establishment of separate cashes is documented in the West only, themanagement of urban finance in the East remained limited to a single-entry accounting.Keywords: urban public finance, financial crisis, taxation, Jewish capital, late medieval towns, Moravia IntroductionThe study of public finances has received considerable attention during the last decade because of its key role in European state formation by serving as an instrument to extract the capital needed for the realization of political goals from the economic systems that formed the base of all public finances. With Stasavage (2011) * The paper was supported by The Ministry of Education, Youth and Sports of the Czech Republic—Institutional Support for Long-Term Development of Research Organizations—Charles University, Faculty of Humanities. Roman Zaoral, Ph.D., Faculty of Humanities, Charles University. 
Correspondence concerning this article should be addressed to Roman Zaoral, Charles University, Faculty of Humanities, UAll Rights Reserved.TAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 282recent publication States of credit, he has made a valuable contribution to the debate on the emergence ofpublic credit as a decisive element in the state formation processes that took place in late medieval and earlymodern Europe. In his work, Stasavage emphasizes the importance of geographic scale of political units and theform of political representation within polities for the access to capital markets and thus the possibility to createfunded public debt in order to finance the consolidation or expansion of their relative position within politicalnetworks and regions. The foundation of this public debt was provided by the fiscal revenues originating fromdirect or indirect taxation.Blockmans (1997) pointed out in this debate the importance of scale and timing with respect to local political representative structures. In the larger Flemish cities such as Ghent or Bruges, the participation ofmiddle classes in town governments and thus control over public finances developed in an earlier stage (the14th century), whereas these developments in less-urbanized regions with smaller urban populations such asHolland and Guelders (and in fact in the whole Holy Roman Empire) did not occur until the 15th century. Inthis way, the hypothesis can be stated tentatively that the position of urban elites influenced the managementof urban finances at large, and urban fiscal systems in particular. The degree to which urban elites were able tomonopolize urban government was also determining the room left for other intermediaries to have a say in thefinancial policies of a town and to function in the management of the fiscal systems that were the basis ofmost urban finances. The socio-political backgrounds and the interplay between the political elites, urbanofficials and tax farmers are thus an important topic for knowledge of the intricate mechanisms, which are atthe crossing point of the economic, social, political, and financial developments in the late-medieval urbansociety.All Rights Reserved.Research QuestionsThe author’s attention is paid to the towns situated in the East of the Holy Roman Empire, namely in the Czech lands, in order to show which similarities and differences can be found between towns in the Westernand Eastern part of the Empire, in which way and to what degree the 15th century economic and financial crisesand social conflicts influenced the management of urban fiscal systems (and the closely-linked system of publicdebt) of two traditional capitals1and at the same time the largest cities in Moravia—Brno and Olomouc, theeconomic potential of which remarkably started to differ, particularly during the second half of the 15th century.In the period between 1420 and 1500, Olomouc as the seat of Moravian bishopric grew from round 5,000 tomore than 6,000 inhabitants at the turn of the 16th century, whereas Brno as the former seat of MoravianMargraves decreased from round 8,000 to less than 6,000 inhabitants during the same time (Šmahel, 1995, p.360; Macek, 1998, p. 
27).Research MethodsIt is the purpose of this paper to quantify data obtained from the analysis of municipal books, charters and Jewish registers relating to urban public finance in the late medieval cities of Moravia (Czech Republic),particularly in Olomouc and Brno, and to compare so the financial situation of towns situated in the Easternpart of the Holy Roman Empire with those in the West. The concept of the 15th century financial crisis isreconsidered.1The Moravian Diet, starting in the 13th century, the Moravian Land Tables and the Moravian Land Court were all seated in bothTAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 283 The political and economic difficulties which troubled the Margraviate of Moravia during the 15th century (the Hussite wars in the 1420s and 1430s and the Bohemian-Hungarian wars in the 1460s and 1470s) did notonly influence the fiscal system itself, mainly by creation of new taxes and by increase of the tax burden tocover the growing urban public debt. The financial crises, bankruptcies, and financial reforms also had animpact on the official involvement of the burghers and guilds in the management of the urban fiscal systems,following their relatively late political emancipation in the 15th century (Marek, 1965). Until that time, thelocal elites had been formed from the closed merchant oligarchy, which monopolized the town government,defended its own particularistic interests through privileged autonomy and controlled the urban finances.In the 15th century, the importance of urban middle classes as tax-farmers started to grow, they increasingly gained influence on the financial and fiscal regime, both through political emancipation as well asby serving as financial officials. They also demanded more insight in the financial management, both ofindirect taxation and the management of urban debt. They were given a central role in the financial reformsnecessary to face the growing tension between economic stagnation and the financial demands. In this way, theimpact of these socio-political changes on the management of the urban fiscal systems can be displayed.The concept of a financial crisis has recently been addressed by what is now known as the “New fiscal history”. The emergence of public finance, fiscal systems, and the creation of public debt are at the heart ofthese discussions: In this sense, a financial crisis occurs when expenditure structurally outweighs the normalrevenues from taxation and the ability to borrow money in order to meet current financial obligations (Bonney,1995, pp. 5-8). The 15th century is generally seen as a period of structural political and economic crisis notonly in the West of the Empire, in the Low Countries (Van Uytven, 1975), but also in the East, in the Czech All Rights Reserved.lands (Šmahel, 1995, pp. 208-220). This crisis also had consequences for urban public finance and itsmanagement. Each town within the Empire had to pay a fixed percentage of the total tax sum of central directtaxation through a system of repartition and so the increased tax burden had forced several towns to sellannuities on an unprecedented scale, because these sums were paid directly through the urban finances(Blockmans, 1999, pp. 287, 297-304). 
Thus, central direct taxation indirectly tapped into the financial resourcesof the towns, which in turn led to an ever growing pressure on the urban finances causing an increase of urbanindirect taxation to cover the funded debt caused by these annuity sales.AnalysisUrban Public Finance During the 15th Century Financial CrisisOlomouc as leading royal city in Moravia, which exceeded Brno in population size at the end of the 15th century, represented a craft town producing for export on one side and a consume town on the other side(Marek, 1965, p. 125). The urban population grew particularly in the 1450s and 1460s due to new incomersfrom Silesia and North Moravia. Among them, there were craftsmen from 85 percent, mostly cloth weavers, butalso representatives of other textile, food, shoe, leather, and wood processing crafts, ranking into sociallyweaker groups, while the number of merchants was much smaller (Mezník, 1958, pp. 350-353). From theviewpoint of the economic structure, Olomouc was close to Breslau in Silesia.In the 1420s, catholic Olomouc spent a lot of money for its defense against attacks of the Hussite troops, for the building wooden fortifications and for its own mercenary troops. During fights, its burghers had todispatch city troops and to get armor for the king, all beyond the usual yearly tax. So for example, in 1424interests from the war debts of the city exceeded an amount of 200 marks which substantially strangled itsTAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 284trading activities (Nešpor, 1998, p. 79). In connection with the blockade of Olomouc in the second half of the1420s, the long-distance trade was put at risk. The city council covered financial expenses by the sale of realestate, of yearly pays of altar servers and by loans from the Jews as well as from own burghers. The Jewishloans were, however, burdened with a high interest and the council used them only once for the war with theHussites.2Loans from own burghers for the so-called fair credit up to 10 percent were more advantageous.The Role of Jewish CapitalYet the Jewish capital represented an important reservoir of financial means for many inhabitants. The surviving Olomouc Jewish register dated back to 1413-1428, which makes it possible to look into the practiceof lending money, gives evidence on the Jewish loans of craftsmen, merchants, and shopkeepers (Kux, 1905).3However, many other hidden loans, which had been going on with the active participation of Christianinhabitants, did not get into the register at all. The credit had three forms: loans in cash, pledge loans, or tradetransactions with goods. As it was necessary to sell unpaid pawns, the usurer became a shopkeeper and hissmall shop was a junk shop at the same time.The Jews usually required one groschen a week as the interest from each shock or mark of silver which they justified by high taxes and other charges. The average yearly credit taxation reached 86 percent from oneshock (= 60 Prague groschen), respectively 81 percent from one mark(= 64 Prague groschen, a half of pound).The debtor paid so 112 groschen per year from one shock and 116 groschen from one mark (Kux, 1905, pp.24-25). 
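The yearly figures just quoted follow directly from the one-groschen-per-week charge; the short check below (an editorial illustration, not part of the original article) approximately reproduces the 86 and 81 percent rates and the 112 and 116 groschen yearly totals.

```python
# Check of the interest figures cited from Kux (1905): one groschen per week
# on each shock (60 Prague groschen) or mark (64 Prague groschen) of principal.
weeks_per_year = 52
interest = weeks_per_year * 1           # 52 groschen of interest per year

for name, principal in [("shock", 60), ("mark", 64)]:
    yearly_rate = interest / principal  # about 0.867 for a shock, 0.813 for a mark
    total_paid = principal + interest   # 112 groschen per shock, 116 per mark
    print(f"1 {name} ({principal} groschen): yearly interest ≈ {yearly_rate:.1%}, "
          f"principal plus interest = {total_paid} groschen")
```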
At lower installments, the interest could farther go up and when installments have not been paid at allthe debt grew in geometric progression.All Rights Reserved.Unless the debtor was not able to refund an amount, which he had loaned, the creditor could exact some goods in the loan value, such as expensive cloth, furs, gold jewels, silver dishes, horses, or cattle. Carpets,armor, wine barrels, or real estate are documented among pledges as well. A number of the Olomouc Jews,having been engaged in trade with money, ranged between 12 in 1413 and 20 in 1420. Some of them granted10, the others 100 and more loans. Forty percent of all deposits were entered by Solomon, the richest Jewishcreditor in Olomouc. The most frequently lent amounts ranged between one and six pounds, the lowest loanmade 10 groschen, the largest 100 marks, which was a value of two or even three houses located in the centerof Olomouc(Veselý & Zaoral, 2008, pp. 40-41).A general lack of money among inhabitants, particularly in the post-Hussite period in the 1430s and 1440s,played into hands of the Jewish usurers. A high number of small debts in range between one and 10 shocks(mostly three-five shocks) and a small number of big debts were typical for that period. The fact that amountsof the two thirds of debtors represented only 13 percent of all debts gave evidence on the general becomingpoor of population. On the other side, the only entry of the sum of 600 Hungarian florins, which Johann vonAachen owed to Johann Weigle, represented 42 percent of all declared owed money in the 1440s (Zaoral, 2009,p. 111). In the 1450s, a number of the highest (above 100 shock of silver) and of the lowest debts (under oneshock of silver) increased, which was the evidence on a slight economic recovery and on the graduallyincreasing social differentiation of the Olomouc inhabitants. Superiority in single-entry accounting, onedebtor-one creditor relations, attests, however, insufficiently developed finance in the milieu of guild craftsmenand shopkeepers. The Jewish credit represented, to the contrary, a source of more flexible forms of enterprise.2Olomouc District State Archive, Olomouc City Archive, books, sign. 164, fol. 235r.TAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 285 Despite a danger of large indebtedness, some wealthy people were lending money from the Jews for more times. The owner of the magistrate mansion Wenceslas Greliczer is entered into the Jewish register even 26times. He made loans from more usurers at the same time, going once in cash, going twice he bought horses oncredit and at another time he pledged silk bedding or pearl bracelet of his wife. The Greliczer family, whichplayed a leading role in the city for some 10 years, had at the end to sell all its property and after 1430 itdisappeared out of stage (Kux, 1905, p. 27). The presence of a number of other prominent councilors in theJewish register was a symptom of their later financial bankruptcy, which strengthened anti-Jewish mood in thecity.A lack of money among burghers occurred again in the 1440s as it was evident from the Olomoucmemorial book dated back to 1430-1492 (Spáčilová & Spáčil, 2004). Particularly the year 1442 was critical formany inhabitants as it was evident from the number of loans. In response to rapidly worsening financialsituation, the council decided in 1446 to grant loans in an amount of 10 pounds of silver for damage reduction(Zaoral, 2009, p. 112). 
Some years later, in 1454, Ladislaus Posthumus, King of Bohemia (1453-1457),expelled the Jews from Moravian royal towns.Urban TaxesThe annual tax in an amount of 600 marks of groschen, collected in Brno and Olomouc, was slightly changing during the 15th century. In 1437 the margrave Albrecht II of Austria (1437-1439) cut Olomouc urbantaxes as a reward for help in fight against the Hussites. After 1454 the annual tax in Olomouc decreased from600 to 587 marks and 40 groschen and this amount remained unchanged until 1526. On the contrary, the tax in All Rights Reserved.Brno increased from 600 to 656 pounds 16 groschen as a recompense for the expelled Jews. Such a tax burdenwas not proportional to the economic power and population of both cities and gave preferential benefit toOlomouc (Dřímal, 1962, pp. 86-87, 116; Zaoral, 2009, pp. 107-109). The different economic potential ofOlomouc and Brno could be also seen from a number of yearly markets which increased in Olomouc from twoup to three and decreased in Brno from four to two (Šebánek, 1928, p. 51; Čermák, 2002, pp. 25-27).Despite the fact that the city had to pay war debts to private creditors with difficulty and for a long time, the standard of living of the urban population in Olomouc was gradually increasing during the 15th century.The municipal tax-payers growth was so big in the second half of the 15th century that collected moneyexceeded the municipal tax amount more than three times (Dřímal, 1962, pp. 122-123). To the contrary, anumber of taxpayers in Brno was decreasing during the whole 15th century and still in 1509 their number wasunder the level of the year 1432. At the same time a number of members of the middle strata, poor craftsmenand sole traders even decreased on one half of the state in 1365 (Dřímal, 1964, pp. 277-280).The administration of urban finances was characterized by a disorganized evidence of assets and liabilities.The municipal collection (the so-called losunga), collected from all town inhabitants with a sufficient propertybase, represented a relevant quota of municipal incomes. The “losunga” amount, paid from a concrete house,was determined by three criteria: a built-up surface, a house location, and an existence of the certain rightconnected with the house. This amount was intentionally undervalued; it did not reflect price fluctuations ofreal estates and remained more or less the same. Craft plying, beer and wine sale, lucrative cloth trade, andannuities were a subject of taxation as well. But only a part of collected money flew to the royal treasury, whichwas often used as pledges for aristocrats. In 1514, pledges in Olomouc were even higher than an amount of themunicipal tax (Dřímal, 1962, p. 93).TAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 286The increasing purchases of houses on the basis of the Law of Emphytheusis4represented another serious problem. Some noblemen ignored compulsory payments from these properties. It caused conflicts withburghers, but the pressure put by the urban representation was only partly successful.In the 1490s, the citycouncil expressed concern about the fact that it supported high nobility and clergy from its own money andjoined insurgent burghers. In the early 16th century, the city found a solution how to get rid of unwelcomecreditors. In 1508, it offered the Bohemian king Vladislaus II (1471-1516) to pay off pledged revenues. Thecity used them to its own benefit for 20 years (Dřímal, 1962, pp. 
89-94).Under these circumstances, taxes and administrative fees, which the town succeeded to buy back from the ruler, gained importance all the more. Provincial castle tolls, customs duties (ungelt), and bridge tolls belongedto the most important. The incomes from the town overhead business and from various financial operationsincreased. The town council bought up villages and yards in the immediate walls surroundings. However, thereal value of charges from the town villages was gradually decreasing because charges did not reflect adecrease of the payment power of money in circulation. Thanks to completion of the large farm system in thetown ownership at the end of the 15th century, Olomouc was offered a considerable space for series ofactivities. The incomes from brewing and fish farming were not negligible as well. But the main share ofmunicipal incomes was represented by money paid from the town property and toll (Kux, 1918, pp. 12-13).Despite a varied scale of incomes, the town was never endowed with large sums in cash. Practically all gained money was immediately given out. Particularly taxes as a relevant phenomenon of municipal economicswere draining big amounts of money from the city budget.Superiority of weight unites (marks) over numeric units (shocks of groschen) in all types of written All Rights Reserved.sources gave evidence on a general lack of quality coins. In the 1450s, when the financial crisis culminated, theking Ladislaus gave the burghers of Olomouc permission to repay loans in the petty coins and thereby made awidely used practice legal.5At the same time, gold coins, which replaced counting in marks, have started topenetrate into everyday life since the 1440s. The Olomouc burghers repaid two thirds of their loans in silverand one third in gold. The creditors accepted as a general rule groschen coins from the craftsmen andHungarian florins from the merchants. Payments in goldguldens occurred rarely in the sources. The increaseddemand for gold coins reflected a contradiction between a long-term lack of gross silver coins in circulationand a necessity of the financial covering of trade transactions with real estates and credit (Zaoral, 2009, pp.118-119).The oldest surviving tax register in Olomouc came from 1527. According to it, about two thirds of taxpayers paid less than eight groschen, middle class (about 25 per cent) paid 10 to 26 groschen from the taxbase of 1.5 to four marks and eight percent of wealthy people paid 32 to 102 groschen from the tax base of fiveto 16 marks. These 89 richest burghers owned more than 40 percent of all taxed property. The tax was paid by1,096 persons. Among that, there were about 25 percent of cottars, who did not own any immovable property.Even when it takes into account that most payers also paid the craft tax in an amount of six groschen, a totalaverage levy did not exceed 20 groschen per head (Szabó, 1983, p. 57). Thus the tax burden itself was not highin the case of at least minimal incomes. It ranged on the level of some percent of yearly income. Much bigger4The Law of Emphyteusis is a feudal form of a hereditary land lease. 
It is a right, susceptible of assignment and of descent,charged on productive real estate, the right being coupled with the enjoyment of the property on condition of taking care of theestate and paying taxes, and sometimes the payment of a small rent.TAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 287 damage was caused to city population and to the royal treasury by reduction of the groschen value and by riseof prices. Unlike Brno, in the long-term low share of persons, having been unable to pay a municipal tax, andan increasing share of poor journeymen and cottars in the urban population are apparently other valuableindicators of growing prosperity in Olomouc (Veselý & Zaoral, 2008, pp. 48-51).Urban Public Finance at the Turn of the 16th CenturyLoans and debts started to increase after the overcoming of financial crisis and the losses reduction from the Bohemian-Hungarian war. The total volume of money in circulation increased particularly in the 1480s and1490s, when debt amounts usually reached even some hundreds of florins.6Since the second half of the 15thcentury, the credit enterprise has been closely connected with trade. Bills of debt and entries into shopkeeper’sregisters became the most common record form of loans. Objections to Christian usurers, which lent moneyinstead of the Jews, were frequent. A number of wealthy burghers sold pays with interest, lent money on credit,or practised open usury. Some amounts were quite high, when, for example, the town Mohelnice borrowed 300marks of silver on 10 percent interest from Nicolas Erlhaupt, burgher of Olomouc.7The city council ofOlomouc also conducted financial business. In 1509, for example, it bought from Wolfgang of Liechtenstein(1475-1525), the owner of Mikulov (Nikolsburg), annuities from the South Bohemian town Pelhřimov(Pilgrams) and became a tax collector there (Zemek & Turek, 1983, p. 507).Tradesmen who could not invest money to trade started to speculate with land estates, for example, the Salzer family held a hereditary magistrate mansion and two villages. Speculations with land estates weretypical also for the city council. While in the early 15th century Olomouc owned six villages, at the end of the All Rights Reserved.same century the extent of the city landed property increased twofold to 12 villages and this upward trendcontinued also in the 16th century (Papajík, 2003, p. 51). A lot of money was spent for various buildingactivities (reconstruction of the municipal hall, new buildings of monasteries, churches and chapels) as well asfor hospitals and other forms of social care (Kuča, 2000, pp. 650-652).Corruption, monopolization of the brewing and other rights as well as bias in favor of guilds were the causes of disputes between elites and other segments of the urban population. It led to open revolts of thecommunity against shopkeepers in 1514 and once again in 1527. A letter of complaint, sent to the king in1514, referred to economic privileges of shopkeepers, free market right, beer prices, and mile right(Dřímal, 1963, pp. 133-142). A strong core of the old type patricians in Olomouc caused the craftsmengained control over urban finances only in the mid-16th century, while in Brno they occurred in the citycouncil already in the 15th century (Kux, 1942, pp. 190-197; Mezník, 1962, pp. 291, 302-306; Szabó, 1984,pp. 
68-73).

Conclusions

Financial crises and social conflicts were aimed at restructuring the power of the old ruling elites, and were ultimately centered on the wish for a more efficient management of urban financial resources and more intensive control rights for those urban social groups that provided the capital for the realization and protection of “common” urban interests. It was a common feature for the cities in the West just as in the East of the

6 Disputes over repayments of debts, debated before the councillors of Breslau, can serve as an example. In 1485-1496 the amounts of the bills of debt of Olomouc burghers ranged between 30 and 700 florins. See Olomouc District State Archive, Olomouc City Archive, books, sign. 6671, fol. 2r-18v.

Uniqueness of steady states for a certain chemical reaction


arXiv:q-bio/0512046v1 [q-bio.MN] 28 Dec 2005

Uniqueness of steady states for a certain chemical reaction

Liming Wang and Eduardo Sontag
Department of Mathematics, Rutgers University

In [1], Samoilov, Plyasunov, and Arkin provide an example of a chemical reaction whose full stochastic (Master Equation) model exhibits bistable behavior, but for which the deterministic (mean field) version has a unique steady state. The reaction that they provide consists of an enzymatic futile mechanism driven by a second reaction which induces “deterministic noise” on the concentration of the forward enzyme (through a somewhat artificial activation and deactivation of this enzyme). The model is as follows:

$$N + N \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; N + E, \qquad
N \;\underset{k_{-2}}{\overset{k_2}{\rightleftharpoons}}\; E, \qquad
S + E \;\underset{k_{-3}}{\overset{k_3}{\rightleftharpoons}}\; C_1 \;\overset{k_4}{\longrightarrow}\; P + E, \qquad
P + F \;\underset{k_{-5}}{\overset{k_5}{\rightleftharpoons}}\; C_2 \;\overset{k_6}{\longrightarrow}\; S + F.$$

Actually, [1] does not prove mathematically that this reaction's deterministic model has the single-steady-state property, but shows numerically that, for a particular value of the kinetic constants $k_i$, a unique steady state (subject to stoichiometric constraints) exists. In this short note, we provide a proof of uniqueness valid for all possible parameter values.

We use lower case letters $n, e, s, c_1, p, c_2, f$ to denote the concentrations of the corresponding chemicals, as functions of $t$. The differential equations are, then, as follows:

$$\begin{aligned}
n' &= -k_1 n^2 + k_{-1} n e - k_2 n + k_{-2} e\\
e' &= -k_3 s e + k_{-3} c_1 + k_4 c_1 + k_1 n^2 - k_{-1} n e + k_2 n - k_{-2} e\\
s' &= -k_3 s e + k_{-3} c_1 + k_6 c_2\\
c_1' &= k_3 s e - k_{-3} c_1 - k_4 c_1\\
p' &= k_4 c_1 - k_5 p f + k_{-5} c_2\\
c_2' &= k_5 p f - k_{-5} c_2 - k_6 c_2\\
f' &= -k_5 p f + k_{-5} c_2 + k_6 c_2.
\end{aligned}$$

Observe that we have the following conservation laws:
$$e + n + c_1 \equiv \alpha, \qquad f + c_2 \equiv \beta, \qquad s + c_1 + c_2 + p \equiv \gamma.$$

Lemma 1. For each positive $\alpha, \beta, \gamma$, there is a unique (positive) steady state, subject to the conservation laws.

Proof. Existence follows from the Brouwer fixed point theorem, since the reduced system evolves on a compact convex set (the intersection of the positive orthant and the affine subspace given by the stoichiometry class). We now fix one stoichiometry class and prove uniqueness. Let $\bar n, \bar e, \bar s, \bar c_1, \bar p, \bar c_2, \bar f$ be any steady state. From $dn/dt = 0$, we obtain that:
$$\bar e = \frac{k_1 \bar n^2 + k_2 \bar n}{k_{-1}\bar n + k_{-2}}.$$
Solving $dc_2/dt = 0$ for $\bar p$ and then substituting $\bar f = \beta - \bar c_2$ gives:
$$\bar p = \frac{(k_{-5}+k_6)\,\bar c_2}{k_5\,(\beta - \bar c_2)}.$$
The derivative of $\bar e$ with respect to $\bar n$ is
$$\frac{d\bar e}{d\bar n} = \frac{k_1 k_{-1}\bar n^2 + 2 k_1 k_{-2}\bar n + k_2 k_{-2}}{(k_{-1}\bar n + k_{-2})^2},$$
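The uniqueness statement is easy to probe numerically. The sketch below (Python with SciPy; the rate constants and the stoichiometric class are arbitrary illustrative choices, not the values used in [1]) integrates the seven rate equations from two different initial conditions on the same stoichiometric class and checks that both trajectories settle to the same steady state.

```python
# Numerical illustration of Lemma 1: integrate the seven rate equations from two
# different initial conditions lying on the same stoichiometric class
# (alpha = n+e+c1, beta = f+c2, gamma = s+c1+c2+p) and compare the limits.
# The rate constants below are arbitrary positive numbers chosen for illustration.
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2, km2, k3, km3, k4, k5, km5, k6 = \
    1.0, 0.5, 0.8, 0.3, 2.0, 0.4, 1.5, 1.2, 0.6, 0.9

def rhs(t, x):
    n, e, s, c1, p, c2, f = x
    return [
        -k1*n*n + km1*n*e - k2*n + km2*e,                            # n'
        -k3*s*e + km3*c1 + k4*c1 + k1*n*n - km1*n*e + k2*n - km2*e,  # e'
        -k3*s*e + km3*c1 + k6*c2,                                    # s'
         k3*s*e - km3*c1 - k4*c1,                                    # c1'
         k4*c1 - k5*p*f + km5*c2,                                    # p'
         k5*p*f - km5*c2 - k6*c2,                                    # c2'
        -k5*p*f + km5*c2 + k6*c2,                                    # f'
    ]

# Both initial conditions satisfy alpha = 1.5, beta = 1.0, gamma = 2.0.
x0_a = [1.0, 0.5, 2.0, 0.0, 0.0, 0.0, 1.0]   # (n, e, s, c1, p, c2, f)
x0_b = [0.2, 0.3, 0.4, 1.0, 0.1, 0.5, 0.5]

for x0 in (x0_a, x0_b):
    sol = solve_ivp(rhs, (0.0, 5000.0), x0, rtol=1e-10, atol=1e-12)
    print(np.round(sol.y[:, -1], 6))   # both runs should print the same steady state
```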

Laser Ranging to the Moon, Mars and Beyond


a r X i v :g r -q c /0411082v 1 16 N o v 2004Laser Ranging to the Moon,Mars and BeyondSlava G.Turyshev,James G.Williams,Michael Shao,John D.AndersonJet Propulsion Laboratory,California Institute of Technology,4800Oak Grove Drive,Pasadena,CA 91109,USAKenneth L.Nordtvedt,Jr.Northwest Analysis,118Sourdough Ridge Road,Bozeman,MT 59715USA Thomas W.Murphy,Jr.Physics Department,University of California,San Diego 9500Gilman Dr.,La Jolla,CA 92093USA Abstract Current and future optical technologies will aid exploration of the Moon and Mars while advancing fundamental physics research in the solar system.Technologies and possible improvements in the laser-enabled tests of various physical phenomena are considered along with a space architecture that could be the cornerstone for robotic and human exploration of the solar system.In particular,accurate ranging to the Moon and Mars would not only lead to construction of a new space communication infrastructure enabling an improved navigational accuracy,but will also provide a significant improvement in several tests of gravitational theory:the equivalence principle,geodetic precession,PPN parameters βand γ,and possible variation of the gravitational constant G .Other tests would become possible with an optical architecture that would allow proceeding from meter to centimeter to millimeter range accuracies on interplanetary distances.This paper discusses the current state and the future improvements in the tests of relativistic gravity with Lunar Laser Ranging (LLR).We also consider precision gravitational tests with the future laser rangingto Mars and discuss optical design of the proposed Laser Astrometric Test of Relativity (LATOR)mission.We emphasize that already existing capabilities can offer significant improvements not only in the tests of fundamental physics,but may also establish the infrastructure for space exploration in the near future.Looking to future exploration,what characteristics are desired for the next generation of ranging devices,what is the optimal architecture that would benefit both space exploration and fundamental physics,and what fundamental questions can be investigated?We try to answer these questions.1IntroductionThe recent progress in fundamental physics research was enabled by significant advancements in many technological areas with one of the examples being the continuing development of the NASA Deep Space Network –critical infrastructure for precision navigation and communication in space.A demonstration of such a progress is the recent Cassini solar conjunction experiment[8,6]that was possible only because of the use of Ka-band(∼33.4GHz)spacecraft radio-tracking capabilities.The experiment was part of the ancillary science program–a by-product of this new radio-tracking technology.Becasue of a much higher data rate transmission and, thus,larger data volume delivered from large distances the higher communication frequency was a very important mission capability.The higher frequencies are also less affected by the dispersion in the solar plasma,thus allowing a more extensive coverage,when depp space navigation is concerned.There is still a possibility of moving to even higher radio-frequencies, say to∼60GHz,however,this would put us closer to the limit that the Earth’s atmosphere imposes on signal transmission.Beyond these frequencies radio communication with distant spacecraft will be inefficient.The next step is switching to optical communication.Lasers—with their spatial coherence,narrow spectral emission,high 
power,and well-defined spatial modes—are highly useful for many space applications.While in free-space,optical laser communication(lasercomm)would have an advantage as opposed to the conventional radio-communication sercomm would provide not only significantly higher data rates(on the order of a few Gbps),it would also allow a more precise navigation and attitude control.The latter is of great importance for manned missions in accord the“Moon,Mars and Beyond”Space Exploration Initiative.In fact,precision navigation,attitude control,landing,resource location, 3-dimensional imaging,surface scanning,formationflying and many other areas are thought only in terms of laser-enabled technologies.Here we investigate how a near-future free-space optical communication architecture might benefit progress in gravitational and fundamental physics experiments performed in the solar system.This paper focuses on current and future optical technologies and methods that will advance fundamental physics research in the context of solar system exploration.There are many activities that focused on the design on an optical transceiver system which will work at the distance comparable to that between the Earth and Mars,and test it on the Moon.This paper summarizes required capabilities for such a system.In particular,we discuss how accurate laser ranging to the neighboring celestial bodies,the Moon and Mars,would not only lead to construction of a new space communication infrastructure with much improved navigational accuracy,it will also provide a significant improvement in several tests of gravitational theory. Looking to future exploration,we address the characteristics that are desired for the next generation of ranging devices;we will focus on optimal architecture that would benefit both space exploration and fundamental physics,and discuss the questions of critical importance that can be investigated.This paper is organized as follows:Section2discusses the current state and future per-formance expected with the LLR technology.Section3addresses the possibility of improving tests of gravitational theories with laser ranging to Mars.Section4addresses the next logical step—interplanetary laser ranging.We discuss the mission proposal for the Laser Astrometric Test of Relativity(LATOR).We present a design for its optical receiver system.Section5 addresses a proposal for new multi-purpose space architecture based on optical communica-tion.We present a preliminary design and discuss implications of this new proposal for tests of fundamental physics.We close with a summary and recommendations.2LLR Contribution to Fundamental PhysicsDuring more than35years of its existence lunar laser ranging has become a critical technique available for precision tests of gravitational theory.The20th century progress in three seem-ingly unrelated areas of human exploration–quantum optics,astronomy,and human spaceexploration,led to the construction of this unique interplanetary instrument to conduct very precise tests of fundamental physics.In this section we will discuss the current state in LLR tests of relativistic gravity and explore what could be possible in the near future.2.1Motivation for Precision Tests of GravityThe nature of gravity is fundamental to our understanding of the structure and evolution of the universe.This importance motivates various precision tests of gravity both in laboratories and in space.Most of the experimental underpinning for theoretical gravitation has come from experiments conducted in the solar 
system.Einstein’s general theory of relativity(GR)began its empirical success in1915by explaining the anomalous perihelion precession of Mercury’s orbit,using no adjustable theoretical parameters.Eddington’s observations of the gravitational deflection of light during a solar eclipse in1919confirmed the doubling of the deflection angles predicted by GR as compared to Newtonian and Equivalence Principle(EP)arguments.Follow-ing these beginnings,the general theory of relativity has been verified at ever-higher accuracy. Thus,microwave ranging to the Viking landers on Mars yielded an accuracy of∼0.2%from the gravitational time-delay tests of GR[48,44,49,50].Recent spacecraft and planetary mi-crowave radar observations reached an accuracy of∼0.15%[4,5].The astrometric observations of the deflection of quasar positions with respect to the Sun performed with Very-Long Base-line Interferometry(VLBI)improved the accuracy of the tests of gravity to∼0.045%[45,51]. Lunar Laser Ranging(LLR),the continuing legacy of the Apollo program,has provided ver-ification of GR improving an accuracy to∼0.011%via precision measurements of the lunar orbit[62,63,30,31,32,35,24,36,4,68].The recent time-delay experiments with the Cassini spacecraft at a solar conjunction have tested gravity to a remarkable accuracy of0.0023%[8] in measuring deflection of microwaves by solar gravity.Thus,almost ninety years after general relativity was born,Einstein’s theory has survived every test.This rare longevity and the absence of any adjustable parameters,does not mean that this theory is absolutely correct,but it serves to motivate more sensitive tests searching for its expected violation.The solar conjunction experiments with the Cassini spacecraft have dramatically improved the accuracy in the solar system tests of GR[8].The reported accuracy of2.3×10−5in measuring the Eddington parameterγ,opens a new realm for gravitational tests,especially those motivated by the on-going progress in scalar-tensor theories of gravity.1 In particular,scalar-tensor extensions of gravity that are consistent with present cosmological models[15,16,17,18,19,20,39]predict deviations of this parameter from its GR value of unity at levels of10−5to10−7.Furthermore,the continuing inability to unify gravity with the other forces indicates that GR should be violated at some level.The Cassini result together with these theoretical predictions motivate new searches for possible GR violations;they also provide a robust theoretical paradigm and constructive guidance for experiments that would push beyond the present experimental accuracy for parameterized post-Newtonian(PPN)parameters(for details on the PPN formalism see[60]).Thus,in addition to experiments that probe the GR prediction for the curvature of the gravityfield(given by parameterγ),any experiment pushingthe accuracy in measuring the degree of non-linearity of gravity superposition(given by anotherEddington parameterβ)will also be of great interest.This is a powerful motive for tests ofgravitational physics phenomena at improved accuracies.Analyses of laser ranges to the Moon have provided increasingly stringent limits on anyviolation of the Equivalence Principle(EP);they also enabled very accurate measurements fora number of relativistic gravity parameters.2.2LLR History and Scientific BackgroundLLR has a distinguished history[24,9]dating back to the placement of a retroreflector array onthe lunar surface by the Apollo11astronauts.Additional reflectors were left by the Apollo14and 
Apollo15astronauts,and two French-built reflector arrays were placed on the Moon by theSoviet Luna17and Luna21missions.Figure1shows the weighted RMS residual for each year.Early accuracies using the McDonald Observatory’s2.7m telescope hovered around25cm. Equipment improvements decreased the ranging uncertainty to∼15cm later in the1970s.In1985the2.7m ranging system was replaced with the McDonald Laser Ranging System(MLRS).In the1980s ranges were also received from Haleakala Observatory on the island of Maui in theHawaiian chain and the Observatoire de la Cote d’Azur(OCA)in France.Haleakala ceasedoperations in1990.A sequence of technical improvements decreased the range uncertainty tothe current∼2cm.The2.7m telescope had a greater light gathering capability than thenewer smaller aperture systems,but the newer systemsfired more frequently and had a muchimproved range accuracy.The new systems do not distinguish returning photons against thebright background near full Moon,which the2.7m telescope could do,though there are somemodern eclipse observations.The lasers currently used in the ranging operate at10Hz,with a pulse width of about200 psec;each pulse contains∼1018photons.Under favorable observing conditions a single reflectedphoton is detected once every few seconds.For data processing,the ranges represented by thereturned photons are statistically combined into normal points,each normal point comprisingup to∼100photons.There are15553normal points are collected until March2004.Themeasured round-trip travel times∆t are two way,but in this paper equivalent ranges in lengthunits are c∆t/2.The conversion between time and length(for distance,residuals,and dataaccuracy)uses1nsec=15cm.The ranges of the early1970s had accuracies of approximately25cm.By1976the accuracies of the ranges had improved to about15cm.Accuracies improvedfurther in the mid-1980s;by1987they were4cm,and the present accuracies are∼2cm.One immediate result of lunar ranging was the great improvement in the accuracy of the lunarephemeris[62]and lunar science[67].LLR measures the range from an observatory on the Earth to a retroreflector on the Moon. 
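The time-length conversion used above is worth making concrete; the minimal sketch below (Python; only the 1 ns ≈ 15 cm rule and the accuracy figures quoted in the text are used) converts between round-trip photon timing and the equivalent one-way range via r = cΔt/2.

```python
# Convert between round-trip photon travel time and one-way Earth-Moon range,
# using r = c * dt / 2 (so 1 ns of round-trip time corresponds to ~15 cm of range).
C_M_PER_S = 299_792_458.0

def time_to_range_cm(dt_ns: float) -> float:
    """One-way range (cm) equivalent to a two-way travel-time interval (ns)."""
    return C_M_PER_S * dt_ns * 1e-9 / 2.0 * 100.0

def range_to_time_ps(dr_cm: float) -> float:
    """Two-way timing interval (ps) equivalent to a one-way range interval (cm)."""
    return 2.0 * (dr_cm / 100.0) / C_M_PER_S * 1e12

print(time_to_range_cm(1.0))   # ~15 cm per nanosecond, as quoted in the text
print(range_to_time_ps(2.0))   # the current ~2 cm accuracy ~ 133 ps of round-trip timing
print(range_to_time_ps(0.1))   # a 1 mm goal ~ 7 ps of round-trip timing
```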
For the Earth and Moon orbiting the Sun,the scale of relativistic effects is set by the ratio(GM/rc2)≃v2/c2∼10−8.The center-to-center distance of the Moon from the Earth,with mean value385,000km,is variable due to such things as eccentricity,the attraction of the Sun,planets,and the Earth’s bulge,and relativistic corrections.In addition to the lunar orbit,therange from an observatory on the Earth to a retroreflector on the Moon depends on the positionin space of the ranging observatory and the targeted lunar retroreflector.Thus,orientation ofthe rotation axes and the rotation angles of both bodies are important with tidal distortions,plate motion,and relativistic transformations also coming into play.To extract the gravitationalphysics information of interest it is necessary to accurately model a variety of effects[68].For a general review of LLR see[24].A comprehensive paper on tests of gravitationalphysics is[62].A recent test of the EP is in[4]and other GR tests are in[64].An overviewFigure1:Historical accuracy of LLR data from1970to2004.of the LLR gravitational physics tests is given by Nordtvedt[37].Reviews of various tests of relativity,including the contribution by LLR,are given in[58,60].Our recent paper describes the model improvements needed to achieve mm-level accuracy for LLR[66].The most recent LLR results are given in[68].2.3Tests of Relativistic Gravity with LLRLLR offers very accurate laser ranging(weighted rms currently∼2cm or∼5×10−11in frac-tional accuracy)to retroreflectors on the Moon.Analysis of these very precise data contributes to many areas of fundamental and gravitational physics.Thus,these high-precision studies of the Earth-Moon-Sun system provide the most sensitive tests of several key properties of weak-field gravity,including Einstein’s Strong Equivalence Principle(SEP)on which general relativity rests(in fact,LLR is the only current test of the SEP).LLR data yielded the strongest limits to date on variability of the gravitational constant(the way gravity is affected by the expansion of the universe),and the best measurement of the de Sitter precession rate.In this Section we discuss these tests in more details.2.3.1Tests of the Equivalence PrincipleThe Equivalence Principle,the exact correspondence of gravitational and inertial masses,is a central assumption of general relativity and a unique feature of gravitation.EP tests can therefore be viewed in two contexts:tests of the foundations of general relativity,or as searches for new physics.As emphasized by Damour[12,13],almost all extensions to the standard modelof particle physics(with best known extension offered by string theory)generically predict newforces that would show up as apparent violations of the EP.The weak form the EP(the WEP)states that the gravitational properties of strong and electro-weak interactions obey the EP.In this case the relevant test-body differences are their fractional nuclear-binding differences,their neutron-to-proton ratios,their atomic charges,etc. General relativity,as well as other metric theories of gravity,predict that the WEP is exact. However,extensions of the Standard Model of Particle Physics that contain new macroscopic-range quantumfields predict quantum exchange forces that will generically violate the WEP because they couple to generalized‘charges’rather than to mass/energy as does gravity[17,18]. 
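As a sanity check on the 10⁻⁸ scaling quoted above, the following sketch (Python; standard solar-system constants are assumed here rather than taken from the paper) evaluates GM_Sun/(r c²) at 1 AU and v²/c² for the Earth's mean orbital speed.

```python
# Order-of-magnitude check of the (GM/rc^2) ~ v^2/c^2 ~ 1e-8 scaling for the
# Earth-Moon system orbiting the Sun (standard constants, assumed values).
GM_SUN  = 1.32712440018e20   # m^3/s^2
AU      = 1.495978707e11     # m
C       = 2.99792458e8       # m/s
V_EARTH = 2.978e4            # mean orbital speed of the Earth, m/s

print(GM_SUN / (AU * C**2))   # ~9.9e-9
print((V_EARTH / C)**2)       # ~9.9e-9
```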
WEP tests can be conducted with laboratory or astronomical bodies, because the relevant differences are in the test-body compositions. Easily the most precise tests of the EP are made by simply comparing the free-fall accelerations, a1 and a2, of different test bodies. For the case when the self-gravity of the test bodies is negligible and for a uniform external gravity field, with the bodies at the same distance from the source of the gravity, the expression for the Equivalence Principle takes the most elegant form:

$$\frac{\Delta a}{a}=\frac{2(a_1-a_2)}{a_1+a_2}=\left(\frac{M_G}{M_I}\right)_1-\left(\frac{M_G}{M_I}\right)_2,\qquad(1)$$

where M_G and M_I represent the gravitational and inertial masses of each body. The sensitivity of the EP test is determined by the precision of the differential acceleration measurement divided by the degree to which the test bodies differ (composition).

The strong form of the EP (the SEP) extends the principle to cover the gravitational properties of gravitational energy itself. In other words it is an assumption about the way that gravity begets gravity, i.e. about the non-linear property of gravitation. Although general relativity assumes that the SEP is exact, alternate metric theories of gravity such as those involving scalar fields, and other extensions of gravity theory, typically violate the SEP [30,31,32,35]. For the SEP case, the relevant test-body differences are the fractional contributions to their masses by gravitational self-energy. Because of the extreme weakness of gravity, SEP test bodies that differ significantly must have astronomical sizes. Currently the Earth-Moon-Sun system provides the best arena for testing the SEP.

The development of the parameterized post-Newtonian formalism [31,56,57] allows one to describe within a common framework the motion of celestial bodies in external gravitational fields within a wide class of metric theories of gravity. Over the last 35 years, the PPN formalism has become a useful framework for testing the SEP for extended bodies. In that formalism, the ratio of passive gravitational to inertial mass to the first order is given by [30,31]:

$$\frac{M_G}{M_I}=1+\eta\,\frac{E}{Mc^2},\qquad(2)$$

where η is the SEP violation parameter (discussed below), M is the mass of a body and E is its gravitational binding or self-energy:

$$\frac{E}{Mc^2}=-\frac{G}{2Mc^2}\int_{V_B} d^3x\,d^3y\,\frac{\rho_B(x)\,\rho_B(y)}{|x-y|}.\qquad(3)$$

Numerically, $(E/Mc^2)_E=-4.64\times10^{-10}$, where the subscripts E and m denote the Earth and Moon, respectively. The relatively small-size bodies used in the laboratory experiments possess a negligible amount of gravitational self-energy and therefore such experiments indicate nothing about the equality of gravitational self-energy contributions to the inertial and passive gravitational masses of the bodies [30]. To test the SEP one must utilize planet-sized extended bodies, in which case the ratio in Eq. (3) is considerably higher.

Dynamics of the three-body Sun-Earth-Moon system in the solar system barycentric inertial frame was used to search for the effect of a possible violation of the Equivalence Principle. In this frame, the quasi-Newtonian acceleration of the Moon (m) with respect to the Earth (E), a = a_m − a_E, consists of the Newtonian term −µ*r/r³, the usual solar tidal terms, and an SEP-violating acceleration term proportional to

$$\eta\left[\left(\frac{E}{Mc^2}\right)_m-\left(\frac{E}{Mc^2}\right)_E\right]\mu_S\,\frac{\mathbf r_{SE}}{r_{SE}^3},\qquad(6)$$

where µ_S is the solar gravitational parameter and r_SE the Sun-Earth vector. This term polarizes the lunar orbit along the direction to the Sun, producing a radial perturbation of the lunar range of the form

$$\delta r\;\propto\;\eta\,a'\cos[(n-n')t+D_0].\qquad(8)$$

Here, n denotes the sidereal mean motion of the Moon around the Earth, n′ the sidereal mean motion of the Earth around the Sun, and a′ denotes the radius of the orbit of the Earth around the Sun (assumed circular). The argument D = (n−n′)t + D₀, with near-synodic period, is the mean longitude of the Moon minus the mean longitude of the Sun and is zero at new Moon. (For a more precise derivation of the lunar range perturbation
due to the SEP violation acceleration term in Eq.(6)consult [62].)Any anomalous radial perturbation will be proportional to cos D .Expressed in terms ofη,the radial perturbation in Eq.(8)isδr∼13ηcos D meters [38,21,22].This effect,generalized to all similar three body situations,the“SEP-polarization effect.”LLR investigates the SEP by looking for a displacement of the lunar orbit along the direction to the Sun.The equivalence principle can be split into two parts:the weak equivalence principle tests the sensitivity to composition and the strong equivalence principle checks the dependence on mass.There are laboratory investigations of the weak equivalence principle(at University of Washington)which are about as accurate as LLR[7,1].LLR is the dominant test of the strong equivalence principle.The most accurate test of the SEP violation effect is presently provided by LLR[61,48,23],and also in[24,62,63,4].Recent analysis of LLR data test the EP of∆(M G/M I)EP=(−1.0±1.4)×10−13[68].This result corresponds to a test of the SEP of∆(M G/M I)SEP=(−2.0±2.0)×10−13with the SEP violation parameter η=4β−γ−3found to beη=(4.4±4.5)×10−ing the recent Cassini result for the PPN parameterγ,PPN parameterβis determined at the level ofβ−1=(1.2±1.1)×10−4.2.3.2Other Tests of Gravity with LLRLLR data yielded the strongest limits to date on variability of the gravitational constant(the way gravity is affected by the expansion of the universe),the best measurement of the de Sitter precession rate,and is relied upon to generate accurate astronomical ephemerides.The possibility of a time variation of the gravitational constant,G,wasfirst considered by Dirac in1938on the basis of his large number hypothesis,and later developed by Brans and Dicke in their theory of gravitation(for more details consult[59,60]).Variation might be related to the expansion of the Universe,in which case˙G/G=σH0,where H0is the Hubble constant, andσis a dimensionless parameter whose value depends on both the gravitational constant and the cosmological model considered.Revival of interest in Brans-Dicke-like theories,with a variable G,was partially motivated by the appearance of superstring theories where G is considered to be a dynamical quantity[26].Two limits on a change of G come from LLR and planetary ranging.This is the second most important gravitational physics result that LLR provides.GR does not predict a changing G,but some other theories do,thus testing for this effect is important.The current LLR ˙G/G=(4±9)×10−13yr−1is the most accurate limit published[68].The˙G/G uncertaintyis83times smaller than the inverse age of the universe,t0=13.4Gyr with the value for Hubble constant H0=72km/sec/Mpc from the WMAP data[52].The uncertainty for˙G/G is improving rapidly because its sensitivity depends on the square of the data span.This fact puts LLR,with its more then35years of history,in a clear advantage as opposed to other experiments.LLR has also provided the only accurate determination of the geodetic precession.Ref.[68]reports a test of geodetic precession,which expressed as a relative deviation from GR,is K gp=−0.0019±0.0064.The GP-B satellite should provide improved accuracy over this value, if that mission is successfully completed.LLR also has the capability of determining PPNβandγdirectly from the point-mass orbit perturbations.A future possibility is detection of the solar J2from LLR data combined with the planetary ranging data.Also possible are dark matter tests,looking for any departure from the inverse square law of gravity,and 
checking for a variation of the speed of light.The accurate LLR data has been able to quickly eliminate several suggested alterations of physical laws.The precisely measured lunar motion is a reality that any proposed laws of attraction and motion must satisfy.The above investigations are important to gravitational physics.The future LLR data will improve the above investigations.Thus,future LLR data of current accuracy would con-tinue to shrink the uncertainty of˙G because of the quadratic dependence on data span.The equivalence principle results would improve more slowly.To make a big improvement in the equivalence principle uncertainty requires improved range accuracy,and that is the motivation for constructing the APOLLO ranging facility in New Mexico.2.4Future LLR Data and APOLLO facilityIt is essential that acquisition of the new LLR data will continue in the future.Accuracies∼2cm are now achieved,and further very useful improvement is expected.Inclusion of improved data into LLR analyses would allow a correspondingly more precise determination of the gravitational physics parameters under study.LLR has remained a viable experiment with fresh results over35years because the data accuracies have improved by an order of magnitude(see Figure1).There are prospects for future LLR station that would provide another order of magnitude improvement.The Apache Point Observatory Lunar Laser-ranging Operation(APOLLO)is a new LLR effort designed to achieve mm range precision and corresponding order-of-magnitude gains in measurements of fundamental physics parameters.For thefirst time in the LLR history,using a3.5m telescope the APOLLO facility will push LLR into a new regime of multiple photon returns with each pulse,enabling millimeter range precision to be achieved[29,66].The anticipated mm-level range accuracy,expected from APOLLO,has a potential to test the EP with a sensitivity approaching10−14.This accuracy would yield sensitivity for parameterβat the level of∼5×10−5and measurements of the relative change in the gravitational constant,˙G/G, would be∼0.1%the inverse age of the universe.The overwhelming advantage APOLLO has over current LLR operations is a3.5m astro-nomical quality telescope at a good site.The site in southern New Mexico offers high altitude (2780m)and very good atmospheric“seeing”and image quality,with a median image resolu-tion of1.1arcseconds.Both the image sharpness and large aperture conspire to deliver more photons onto the lunar retroreflector and receive more of the photons returning from the re-flectors,pared to current operations that receive,on average,fewer than0.01 photons per pulse,APOLLO should be well into the multi-photon regime,with perhaps5–10 return photons per pulse.With this signal rate,APOLLO will be efficient atfinding and track-ing the lunar return,yielding hundreds of times more photons in an observation than current√operations deliver.In addition to the significant reduction in statistical error(useful).These new reflectors on the Moon(and later on Mars)can offer significant navigational accuracy for many space vehicles on their approach to the lunar surface or during theirflight around the Moon,but they also will contribute significantly to fundamental physics research.The future of lunar ranging might take two forms,namely passive retroreflectors and active transponders.The advantages of new installations of passive retroreflector arrays are their long life and simplicity.The disadvantages are the weak returned signal and the spread of the 
reflected pulse arising from lunar librations(apparent changes in orientation of up to10 degrees).Insofar as the photon timing error budget is dominated by the libration-induced pulse spread—as is the case in modern lunar ranging—the laser and timing system parameters do√not influence the net measurement uncertainty,which simply scales as1/3Laser Ranging to MarsThere are three different experiments that can be done with accurate ranges to Mars:a test of the SEP(similar to LLR),a solar conjunction experiment measuring the deflection of light in the solar gravity,similar to the Cassini experiment,and a search for temporal variation in the gravitational constant G.The Earth-Mars-Sun-Jupiter system allows for a sensitive test of the SEP which is qualitatively different from that provided by LLR[3].Furthermore,the outcome of these ranging experiments has the potential to improve the values of the two relativistic parameters—a combination of PPN parametersη(via test of SEP)and a direct observation of the PPN parameterγ(via Shapiro time delay or solar conjunction experiments).(This is quite different compared to LLR,as the small variation of Shapiro time delay prohibits very accurate independent determination of the parameterγ).The Earth-Mars range would also provide for a very accurate test of˙G/G.This section qualitatively addresses the near-term possibility of laser ranging to Mars and addresses the above three effects.3.1Planetary Test of the SEP with Ranging to MarsEarth-Mars ranging data can provide a useful estimate of the SEP parameterηgiven by Eq.(7). It was demonstrated in[3]that if future Mars missions provide ranging measurements with an accuracy ofσcentimeters,after ten years of ranging the expected accuracy for the SEP parameterηmay be of orderσ×10−6.These ranging measurements will also provide the most accurate determination of the mass of Jupiter,independent of the SEP effect test.It has been observed previously that a measurement of the Sun’s gravitational to inertial mass ratio can be performed using the Sun-Jupiter-Mars or Sun-Jupiter-Earth system[33,47,3]. The question we would like to answer here is how accurately can we do the SEP test given the accurate ranging to Mars?We emphasize that the Sun-Mars-Earth-Jupiter system,though governed basically by the same equations of motion as Sun-Earth-Moon system,is significantly different physically.For a given value of SEP parameterηthe polarization effects on the Earth and Mars orbits are almost two orders of magnitude larger than on the lunar orbit.Below we examine the SEP effect on the Earth-Mars range,which has been measured as part of the Mariner9and Viking missions with ranging accuracy∼7m[48,44,41,43].The main motivation for our analysis is the near-future Mars missions that should yield ranging data, accurate to∼1cm.This accuracy would bring additional capabilities for the precision tests of fundamental and gravitational physics.3.1.1Analytical Background for a Planetary SEP TestThe dynamics of the four-body Sun-Mars-Earth-Jupiter system in the Solar system barycentric inertial frame were considered.The quasi-Newtonian acceleration of the Earth(E)with respect to the Sun(S),a SE=a E−a S,is straightforwardly calculated to be:a SE=−µ∗SE·r SE MI Eb=M,Jµb r bS r3bE + M G M I E b=M,Jµb r bS。
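To put the SEP-polarization signal discussed in the ranging sections above into numbers, the sketch below (Python; it relies only on the ≈13η m lunar amplitude and the η uncertainty quoted earlier in the text, and is purely illustrative) converts a bound on η into the corresponding peak radial range signal.

```python
# Rough size of the SEP "polarization" range signal, delta_r ~ 13*eta*cos(D) metres
# for the lunar orbit, using the eta figures quoted in the text.
AMPLITUDE_PER_ETA_M = 13.0   # metres of radial signal per unit eta (lunar case)

def lunar_sep_signal_mm(eta: float) -> float:
    """Peak radial range signal (mm) for a given SEP-violation parameter eta."""
    return AMPLITUDE_PER_ETA_M * eta * 1e3

eta_sigma = 4.5e-4           # ~1-sigma uncertainty on eta quoted for current LLR
print(lunar_sep_signal_mm(eta_sigma))   # ~5.9 mm: centimetre-class ranging territory,
                                        # hence the push toward mm-level accuracy
```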

Economics Translation


Access to Higher Education for Students from Low-Income Families under a High-Tuition, High-Aid Policy: The Case of the University of Virginia

Over the past 20 years, after being hit by the global economic crisis, the American economy fell into a long period of recession.

Under the influence of factors such as declining educational quality, demands for fiscal austerity, rising operating costs, and low efficiency in the use of funds, American society, the government, and Congress began to re-examine U.S. higher education policy.

In Virginia, a number of leading public universities began to take active measures to improve their situation vis-à-vis the state government.

Facing stagnant government funding, Virginia's public institutions turned to charging high tuition in order to fund the generous aid they provide to students.

After the restructuring under the Higher Education Financial Management and Operations Act, Virginia's public universities gained more autonomy in setting tuition, thereby reducing their dependence on the state.

This move gave students from low- and middle-income families more opportunities to enter good universities.

The University of Virginia also launched a program called AccessUVa (the University of Virginia access program) to publish more public information, recruit more students, and provide more need-based financial aid to students from low-income families.

It is well known that public institutions, including the University of Virginia, once given more autonomy, raise tuition revenue in order to provide aid to in-state students from low-income families.

In short, these leading public universities use the "high-tuition, high-aid" policy on the one hand to increase revenue and to provide access for students from low-income families, and on the other hand to raise the overall level of their teaching and research.

But is the level of tuition really directly linked to access for low-income students? The question posed: can a high-tuition strategy really increase opportunities for low-income students? The article is divided into three parts. The first part describes the changes in tuition levels at the University of Virginia and in state funding over the past 20 years, and the resulting restructuring of the financial arrangements between the state and its public universities.

The second part presents the Higher Education Restructuring Act and AccessUVa, and discusses how these policies affect tuition and the provision of financial aid to students from families of different economic circumstances.

[C(NH2)3][BH4]


New chemical hydrogen storage materials exploiting the self-sustaining thermal decomposition of guanidinium borohydride wThomas J.Groshens*and Richard A.HollinsReceived(in Cambridge,UK)16th January2009,Accepted25th March2009First published as an Advance Article on the web14th April2009DOI:10.1039/b900376bGuanidinium borohydride(GBH)was structurally characterized by single-crystal X-ray diffraction and found to release more than10wt%H2as a fairly pure stream during a self-sustaining thermal decomposition reaction both with and without additives that were identified to reduce the concentration of the main ammonia impurity and control the reaction sustainability.The ability to store hydrogen at high volumetric and gravimetric density and release it on demand provides enabling technology vital to the widespread implementation of fuel cells as high power density portable systems.One approach is chemical hydrogen storage where hydrogen is generated on demand through a chemical reaction.For small scale PEM fuel cell systems it may be advantageous to employ exothermic self-sustaining reactions to generate hydrogen coupled with a small H2storage tank as a buffer.The design of systems based on this approach using a sequentially initiated pellet array can be found in the patent literature.1 The heat from such reactions may also be used to generate additional hydrogen from storage materials based on endothermic processes.In our research efforts to develop new manageable chemical hydrogen storage sources we investigated hydrogen generation from the autogenous aminolysis reactions of boron hydrides including guanidinium borohydride(GBH)and ethylenedi-amine bisborane(EDB).Our resultsfind that these compounds provide a promising,low cost,reliable,and safe high-density chemical hydrogen storage source for applica-tions where fast hydrogen generation on demand is required. Boron–nitrogen–hydrogen compounds are of general interest for chemical hydrogen storage2a providing high weight percent hydrogen materials where the corresponding protic and hydridic character of the hydrogens on the nitrogen and boron,respectively,allows a facile H2elimination pathway. In particular,the borohydride ion(BH4À)at27.2wt%H2 can provide exceptionally high H2density if an appropriate counter-cation is chosen that contributes sufficient protons for hydrogen formation without adding excess dead weight. 
Unfortunately,the ideal exemplar,ammonium borohydride [NH4][BH4],at24.5wt%H2is thermally unstable.As a result, research on discrete compounds for chemical hydrogen storage has focused on the ammonia borane(AB)adduct,2 NH3BH3,with19.6wt%hydrogen.Few stable borohydride salts are known with a suitable balance of protic and hydridic hydrogens as potential alternatives to AB for chemical hydrogen storage.One example,[Mg(NH3)2][BH4]2with16.0wt%H2,was success-fully demonstrated as a H2source for a portable chemical laser application.3A Rietveld refinement of the crystal structure and the endothermic decomposition of this compound to yield 13.1wt%hydrogen as6equiv.H2along with NH3impurity has been reported.4A second example,guanidinium boro-hydride(GBH)[C(NH2)3]+[BH4]À,with13.5wt%H2 (10.8wt%H2thermally accessible by aminolysis as4equivalents of H2)has not been investigated as a chemical hydrogen storage source.With four hydridic borohydride B–H bonds and six protic N–H bonds,GBH has only a modest dead weight penalty in the guanidinium ion.The higher thermal stability of the borohydride salt of the guanidinium ion, [C(NH2)3]+,in comparison to the ammonium salt may be a result of the lower acidity5of guanidinium(p K a=13.71)than ammonium(p K a=9.21).The synthesis of GBH wasfirst reported in1954by Schechter.6 Titov and co-workers7later published additional synthetic methods,properties(e.g.D H1f=À111kJ molÀ1),and studies showing decomposition of GBH begins at1001C.The structure of GBH determined by single-crystal X-ray diffraction in our laboratory8is consistent with the results of Custelcean and Jackson9where the crystal packing is controlled by dihydrogen bonding interactions and can be described in terms of stacks and layers of one-dimensional GBH tapes(Fig.1).Within the tapes,four of the six hydrogens on the guanidinium are involved in close dihydro-gen bonding to the hydrides of the borohydride providing a low activation energy path for elimination of hydrogen in the solid.The two out of plane borohydride hydrogens are essentially bridging two guanidinium ions in an otherwise planar tape.The calculated density of GBH is0.905g cmÀ3, Fig.1ORTEP of a one-dimensional GBH tape depicting close hydrogen–hydrogen bonding interactions.Naval Air Warfare Center,Weapons Division,China Lake,CA93555,USA.E-mail:thomas.groshens@;Fax:+91(760)9391617w Electronic supplementary information(ESI)available:Experimentaldetails and structure report for DC705066.For ESI andcrystallographic data in CIF or other electronic format see DOI:10.1039/b900376bThis journal is c The Royal Society of mun.,2009,3089–3091|3089 COMMUNICATION /chemcomm|ChemCommsignificantly less than the pycnometrically measured value of 0.99g cmÀ3reported by Titov et al.7a Based on our crystallo-graphic results,and a theoretical hydrogen yield of10.8wt%, GBH potentially provides a material based chemical hydrogen storage density of97.7g lÀ1.For our studies GBH was synthesized in65–75%yield from metathesis of either guanidinium carbonate or the sulfate and sodium borohydride in anhydrous isopropanol at room temperature.As previously reported,7the product is a slightly hygroscopic,air stable,colorless crystalline solid.A melting point of1021C with decomposition was observed while heating the sample at a rate of about101C minÀ1.For GBH samples maintained at elevated temperature in dry nitrogen no gas evolution was evident after several days at551C.Very slow hydrogen evolution begins at about601C. 
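The gravimetric and volumetric figures quoted above follow directly from the molecular formulas; the short sketch below (Python; standard atomic masses, with the four-equivalents-of-H2 yield and the 0.905 g cm⁻³ crystal density taken from the text) reproduces the 24.5, 19.6, 13.5 and 10.8 wt% values and the ≈97.7 g L⁻¹ storage density.

```python
# Recompute the gravimetric/volumetric hydrogen figures quoted for the B-N-H
# compounds discussed above (standard atomic masses; 4 equivalents of H2 per
# GBH and the 0.905 g/cm3 crystal density are the values given in the text).
M_H, M_B, M_C, M_N = 1.008, 10.811, 12.011, 14.007

def wt_percent_H(n_H, molar_mass):
    return 100.0 * n_H * M_H / molar_mass

M_NH4BH4 = M_N + 8 * M_H + M_B                  # ammonium borohydride
M_AB     = M_N + 6 * M_H + M_B                  # ammonia borane, NH3BH3
M_GBH    = M_C + 3 * M_N + 10 * M_H + M_B       # guanidinium borohydride, [C(NH2)3][BH4]

print("NH4BH4 total H:", round(wt_percent_H(8, M_NH4BH4), 1), "wt%")   # ~24.5
print("NH3BH3 total H:", round(wt_percent_H(6, M_AB), 1), "wt%")       # ~19.6
print("GBH total H:   ", round(wt_percent_H(10, M_GBH), 1), "wt%")     # ~13.5
print("GBH as 4 H2:   ", round(wt_percent_H(8, M_GBH), 1), "wt%")      # ~10.8

# Volumetric storage density implied by the crystallographic density:
rho_gbh_g_per_cm3 = 0.905
print("GBH H2 density:", round(rho_gbh_g_per_cm3 * 1000 * 0.108, 1), "g/L")  # ~97.7
```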
Less than0.25%of the available hydrogen evolved after 24hours at601C(less than0.025wt%loss)and2.1%of available H2was generated after48hours at601C.The lower temperature(55–601C)thermal decomposition appears to be self-catalyzing where the rate of hydrogen evolution slowly increases with time at elevated temperature.When initiated by a hot point source(greater than1801C) samples of neat guanidinium borohydride undergo a self-sustaining thermal decomposition(SSTD)reaction rapidly generating hydrogen gas and producing a low density white residue.It is apparent from examination of the resulting foam that solid GBH generates a transient liquid phase in the SSTD reaction zone which froths as hydrogen is released prior to production of thefinal infusible solid product.Hydrogen yields above10wt%were routinely obtained with greater than95mol%purity.A maximum reaction zone temperature of4501C was measured using thermocouples positioned in the reacting solid and exit gas.The only volatile products identified from the reaction were hydrogen(95–97%)and ammonia(3–5%).Gas chromatographic analysis of the evolved gases was negative for the presence of nitrogen reported by Titov et al. to have been generated during pyrolysis of an intermediate product at3001C in a previous study.7The SSTD reaction was observed for samples of GBH in a vacuum,under a nitrogen atmosphere(from15to4000psi) and in air on samples as large as10g without incident.z In the absence of an external ignition source no auto-ignition of the evolved hydrogen was observed when the reaction was conducted unconfined in air.The only other reported example of a borane amine compound exhibiting a self-sustaining thermal decomposition reaction is hydrazine bisborane10 (HBB).The self-sustaining thermal decomposition of HBB,however,readily transitions from a deflagration to a detonation.This behavior,combined with the toxicity of hydrazine,has discouraged further consideration of HBB for hydrogen storage applications.The DSC trace of GBH(Fig.2)heated at a rate of81C minÀ1 exhibits a sharp exotherm at1101C followed by another broad exothermic peak extending out to approximately1751C. 
The heat of reaction estimated from the DSC measurements is À60kJ molÀ1(integrated heatflow from1101C to1651C is 803J gÀ1).This provides an estimated average heat of reaction per mole H2generated ofÀ15kJ molÀ1,which is less than30%of the value for the hydrolysis of sodium borohydride(À52kJ molÀ1)and only slightly higher than the value reported for the elimination of thefirst two equivalents of H2from AB(À11kJ molÀ1).2b TGA analysis (Fig.2)shows a stepwise release of products with the expected15%mass loss(hydrogen and ammonia)reached at approximately1401C.While the results of our investigation on the use of GBH as a chemical hydrogen storage material are promising,the amount of ammonia in the product stream is problematic for PEM fuel cell applications.Therefore,to improve the yield and purity of the hydrogen product as well as control the reaction rate and thermal characteristics,preliminary screening studies were conducted on mixtures of GBH with other hydride additives.In general,addition of up to25wt% of active hydrides(MgH2,NaBH4,LiAlH4,Me4NBH4)had little or no effect on the amount of ammonia produced and did not influence the hydrogen production.With the addition of as little as1wt%tetra-n-butylammonium borohydride (n-Bu4NBH4),however,the mixture failed to sustain a thermal decomposition reaction.A remarkable improvement in the purity of the hydrogen product was observed from mixtures of GBH and ethylene-diamine bisborane11(EDB).The rather high hydrogen generating capacity of EDB alone,with9.2wt%H2thermally available by aminolysis to produce4equivalents of H2,makes it a useful chemical hydrogen source.The compound is a white crystalline solid(density of0.82g cmÀ3)that is stable in dry air and decomposes rapidly above approximately901C.The addition of EDB to GBH resulted in a decrease in the ammonia production without significantly affecting the overall gravimetric hydrogen yield of the mixture.The observed reaction zone temperature of the40%EDB mixture was 4301C,not significantly changed from the neat GBH reaction. 
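The DSC-derived enthalpy quoted above can be cross-checked with one line of arithmetic; the sketch below (Python; the 803 J g⁻¹ integrated heat flow, the GBH molar mass and the four equivalents of H2 are the values given in the text) recovers the ≈60 kJ mol⁻¹ and ≈15 kJ per mole of H2 figures.

```python
# Cross-check of the DSC-derived reaction enthalpy: the integrated heat flow
# (803 J/g) times the GBH molar mass gives ~60 kJ/mol, i.e. ~15 kJ per mole of H2
# when four equivalents are released (all input values from the text).
M_GBH      = 74.93    # g/mol, [C(NH2)3][BH4]
q_per_gram = 803.0    # J/g, integrated DSC heat flow, 110-165 C
n_H2       = 4        # equivalents of H2 released

dH_per_mol = q_per_gram * M_GBH / 1000.0         # kJ per mol GBH (exothermic)
print(round(dH_per_mol, 1), "kJ/mol GBH")        # ~60
print(round(dH_per_mol / n_H2, 1), "kJ/mol H2")  # ~15
```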
Since GBH contributes excess N–H bonds in the mix and EDB excess B–H bonds,a stoichiometric mixture of46wt% GBH and54wt%EDB provides an equivalent number of protic and hydridic hydrogens for complete reaction of all thermally accessible hydrogens(12.4wt%hydrogen at 5equivalents of H2per mole of reactant).The hydrogen yields obtained in our studies amounted to just slightly less than to slightly above4equivalents H2per mole of GBH and EDB (Table1).Preliminary NMR and X-ray structural investigations of the products obtained from hydrolysis of the reactionresidue3090|mun.,2009,3089–3091This journal is c The Royal Society of Chemistry2009identified guanidinium and ethylenediammonium salts as the major products indicating that the C–N skeletal structure of the cations remains mostly intact during the hydrogen elimination reaction.While no special precautions were taken in the storage of GBH during our investigation,and only a slight odor was noted after a year,mixtures of GBH–EDB stored unprotected in air underwent a slow reaction and became inactive after several months.Samples of the mixture stored under dry nitrogen for the same period,however,were still reactive.In conclusion,GBH was found to reliably undergo a tractable SSTD reaction when initiated by a heated bridge wire providing a chemical hydrogen storage material with a lower adiabatic reaction temperature(B4501C)and a higher gravimetric storage density than previous borohydride compositions that used,for example,mixtures of sodium borohydride with an oxidizer.1The only gaseous products identified from the SSTD reaction of GBH were H2and NH3 where the hydrogen yield was nearly quantitative(above 10wt%).The SSTD reaction of mixtures of GBH and EDB rapidly produces a H2gas stream suitable for PEM fuel cell applications with minimal NH3scrubbing required.Notes and referencesz Caution!The decomposition reaction of GBH is rapid,producing heat and large amounts of combustible gas.The reaction is capable of generating sufficiently high pressure to burst vessels if confined.Our safety tests show GBH powder is insensitive to impact and friction initiation but was found to be sensitive to initiation by electrostatic discharge.At high wt%EDB,the reaction of GBH–EDB mixtures in air will consistently ignite the mixture and hydrogen product and should be avoided.1N.Desgardin,C.Perut and J.Renouard,US Pat.,7094487,2006. 2(a)C.W.Hamilton,R.Tom Baker,A.Staubitz and I.Manners, Chem.Soc.Rev.,2009,38,279–293;(b)F.H.Stephens,V.Pons and R.Tom Baker,Dalton Trans.,2007,2613–2626;(c)T.B.Marder,Angew.Chem.,Int.Ed.,2007,46,8116–8118. 3G.D.Artz,Grant and R.Louis,US Pat.,4673528,1987.4G.Soloveichik,J.-H.Her,P.W.Stephens,Y.Gao,J.Rijssenbeek, M.Andrus and J.-C.Zhao,Inorg.Chem.,2008,47,4290–4298. 5Our attempts to isolate borohydride salts of the conjugate acids of other strong bases,either formamidinium,[HC(NH2)2]+(p K a=11.5),or acetamidinium[CH3C(NH2)2]+(p K a=12.52),have sofar been unsuccessful.As yet uncharacterized viscous liquids were obtained devoid of[BH4]Àperhaps containing borane adducts of acetamidine and formamidine.6(a)W.H.Schechter,C.B.Jackson and R.M.Adams,Boron Hydrides and Related Compounds,Callery Chemical Co.,Callery, PA,1954;(b)R.M.Adams and A.R.Siedle,in Boron,Metallo-Boron Compounds and Boranes,ed.R.M.Adams,John Wiley& Sons,New York,1964,pp.461–462.7(a)L.V.Titov,M.D.Makarova and V.Ya.Rosolovskii,Dokl.Akad.Nauk SSSR,1968,180,381–382;(b)L.V.Titov and M. 
D.Levicheva,Zh.Neorg.Khim.,1969,14,2886–2887;(c)E.P.Kirpichev,Yu.I.Rubtsov and L.V.Titov,Zh.Neorg.Khim.,1971,16,56–60;(d)L.V.Titov,M.D.Levicheva andG.N.Dubikhina,Zh.Neorg.Khim.,1972,17,1181–1182.8Crystal data for GBH.CH10BN3:M=74.93,tetragonal,:I4(1)/ amd,a=6.7433(8)A,b=6.7433(8)A,c=24.195(3)A,a=901, b=901,g=901,V=1100.2(2)A3,Z=8,absorption coefficient:0.060mmÀ1,wavelength:0.71073A,T:296(2)K,F(000):336,reflections collected:4900,independent reflections:287[R(int)=0.0179,GOF on F2:1.180,final R indices:[235data;I42s(I)]R1=0.0545,w R2=0.1461;[all data]R1=0.0597,w R2=0.1559. 9R.Custelcean and J. E.Jackson,Chem.Rev.,2001,101, 1963–1980.10(a)H.J.Emeleus and F.G. A.Stone,J.Chem.Soc.,1951, 840–841;(b)F.C.Gunderloy,Jr.,Inorg.Synth.,1967,9,13–16;(c)L.R.Grant and J.E.Flanagan,US Pat.,4381206,1983.11(a)H.C.Kelly and J.O.Edwards,J.Am.Chem.Soc.,1960,82, 4842–4846;(b)H.C.Kelly and J.O.Edwards,Inorg.Chem.,1963, 2,226–227;(c)H.Y.Ting,W.H.Watson and H.C.Kelly,Inorg.Chem.,1972,11,374–377.Table1GBH–EDB SSTD reaction products awt%GBH wt%EDB wt%H2yield mol%NH3b mol H2generatedmolðGBHþEDBÞ100010.6 4.1 3.9489.511.510.4 2.7 3.8960.040.010.40.10 4.1146.054.0c9.600.069 3.8740.060.0d10.10.026 4.12a For pellets compressed to approximately60%TMD.b NH3concentration in gas stream.c Stoichiometric mix.d Mixture is notself-sustaining,external heat supplied(reaction initiated by a1201Coil bath).This journal is c The Royal Society of mun.,2009,3089–3091|3091。
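The stoichiometry quoted above can be checked with a short calculation. The sketch below is illustrative only: the GBH molar mass is taken from the crystal data above, while the EDB molar mass (for H3B·NH2CH2CH2NH2·BH3) is my own estimate, not a value from the paper.

```python
# Minimal sketch of the gravimetric H2 yield of GBH-EDB mixtures.
# M_GBH is from the crystal data quoted above; M_EDB is an assumed estimate
# for ethylenediamine bisborane, not a number taken from the text.
M_GBH, M_EDB, M_H2 = 74.93, 87.77, 2.016   # g/mol

def wt_pct_h2(n_gbh, n_edb, equiv_h2_per_mol):
    """wt% H2 released when each mole of reactant gives equiv_h2_per_mol of H2."""
    moles_reactant = n_gbh + n_edb
    mass_mixture = n_gbh * M_GBH + n_edb * M_EDB
    return 100.0 * equiv_h2_per_mol * moles_reactant * M_H2 / mass_mixture

# 1:1 molar (46/54 wt%) stoichiometric mix at 5 equivalents of H2 per mole
print(round(wt_pct_h2(1.0, 1.0, 5.0), 1))   # ~12.4 wt%, consistent with the text
# Neat GBH releasing ~4 equivalents of H2 per mole
print(round(wt_pct_h2(1.0, 0.0, 4.0), 1))   # ~10.8 wt%, near the ~10.6 wt% measured
```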

The Steady-State Theory of the Universe

The concept of a steady-state universe has been a topic of intense scientific debate for decades. This theory, proposed by the physicists Fred Hoyle, Thomas Gold, and Hermann Bondi in the late 1940s, offers an alternative to the widely accepted Big Bang theory of the universe's origin and evolution.

At the heart of the steady-state theory is the idea that the universe is not only expanding but also maintaining a constant average density over time. This means that as the universe expands, new matter is continuously being created to fill the void, ensuring that the overall appearance of the cosmos remains largely unchanged. This stands in contrast to the Big Bang theory, which suggests that the universe began from a single, infinitely dense point and has been expanding and evolving ever since.

One of the key arguments in favor of the steady-state theory is the observed uniformity of the universe. Observations of the cosmic microwave background radiation, the faint glow of radiation left over from the early universe, have shown that the universe is remarkably homogeneous and isotropic, meaning that it looks the same in all directions and at all locations. This is consistent with the steady-state model, which predicts that the universe should maintain a constant appearance over time.

Another supporting factor for the steady-state theory is the lack of evidence for a definitive beginning of the universe, as proposed by the Big Bang theory. While the Big Bang theory is supported by the observed redshift of distant galaxies, which suggests an expanding universe, and by the existence of the cosmic microwave background radiation, steady-state theorists argue that these observations can be reconciled with their model through the continuous creation of matter.

However, the steady-state theory has faced significant challenges over the years, particularly with the discovery of the cosmic microwave background radiation in 1964. This observation, a key prediction of the Big Bang theory, dealt a significant blow to the steady-state model, as it provided strong evidence for a hot, dense, and rapidly expanding early universe.

Furthermore, the discovery of quasars, extremely luminous and distant objects, in the 1960s also posed a challenge to the steady-state theory. Quasars were found to be much more abundant in the distant past, suggesting that the universe has indeed evolved over time rather than maintaining a constant appearance.

Despite these challenges, the steady-state theory has continued to be a topic of discussion and debate within the scientific community. Some physicists have proposed modified versions of the theory, incorporating elements of the Big Bang model, in an attempt to reconcile the observed features of the universe with the steady-state concept.

In recent years, the Lambda-CDM (Lambda Cold Dark Matter) model, which combines the Big Bang theory with the concepts of dark energy and dark matter, has become the dominant cosmological model. This model is able to explain a wide range of observations, including the cosmic microwave background radiation, the large-scale structure of the universe, and the observed accelerated expansion of the universe.

Nevertheless, the steady-state theory remains an intriguing and thought-provoking alternative to the standard Big Bang model.
It continues to inspire scientific discussions and serves as a reminder that our understanding of the universe is an ongoing process, with room for new ideas and perspectives to emerge.

Laser phase and frequency stabilization using an optical resonator

Appl. Phys. B 31 (1983), Photophysics and Laser Chemistry
© Springer-Verlag 1983
Laser Phase and Frequency Stabilization Using an Optical Resonator
Development of Techniques
Before considering the ultimate performance capability of frequency-stabilized lasers, we first discuss some practical problems and review the technical progress which has been made previously. In view of the very rapid time scale of fluctuations associated with the dye laser's free-flowing jet and with plasma movement in the ion laser, it is understandable that efforts to improve their frequency-stabilization performance have centered on developing faster transducers and
Abstract. We describe a new and highly effective optical frequency discriminator and laser
stabilization system based on signals reflected from a stable Fabry-Perot reference interferometer. High sensitivity for detection of resonance information is achieved by optical heterodyne detection with sidebands produced by rf phase modulation. Physical, optical, and electronic aspects of this discriminator/laser frequency stabilization system are considered in detail. We show that a high-speed domain exists in which the system responds to the phase (rather than frequency) change of the laser; thus, with suitable design, the servo loop bandwidth is not limited by the cavity response time. We report diagnostic experiments in which a dye laser and a gas laser were independently locked to one stable cavity. Because of the precautions employed, the observed sub-100 Hz beat linewidth shows that the lasers were this stable. Applications of this system of laser stabilization include precision laser spectroscopy and interferometric gravity-wave detectors.

PACS: 06, 07.60, 07.65

The adoption of high-finesse Fabry-Perot cavities for a prototype gravitational-wave detector [1] requires the development of very high precision short-term stabilizing techniques for an argon-ion laser. Similarly, there is considerable incentive to improve the frequency stabilization of dye lasers for spectroscopic applications. Optical resonators [2-4] have been used to provide frequency-discriminator functions for servo control of both types of laser [3, 5-8] and form the basis for at least three commercially available frequency-stabilized dye laser systems. In this paper we describe an improved rf-sideband type of optical discriminator capable of high-precision, low-noise performance and having a response time not limited by the optical resonator. We illustrate the technique with several experiments, including demonstration of sub-100 Hz laser linewidths.

* Staff Member, Quantum Physics Division, National Bureau of Standards
** Present address: Colorado School of Mines, Golden, Colorado
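To make the discriminator behaviour concrete, here is a minimal numerical sketch of an rf-sideband (Pound-Drever-Hall-type) error signal for an idealized, lossless two-mirror cavity. It is not the authors' apparatus: the mirror reflectivity and the sideband offset are hypothetical values, and the overall sign and scale conventions are arbitrary.

```python
import numpy as np

def cavity_reflection(delta, r=0.995):
    """Reflection coefficient of a lossless, symmetric Fabry-Perot cavity.
    delta: round-trip phase detuning from resonance (rad); r: mirror
    amplitude reflectivity (hypothetical value)."""
    e = np.exp(1j * delta)
    return r * (e - 1.0) / (1.0 - r**2 * e)

def pdh_error_signal(delta, delta_mod, r=0.995):
    """rf-sideband error signal (arbitrary units) versus carrier detuning,
    for sidebands offset by a round-trip phase of +/- delta_mod."""
    F0 = cavity_reflection(delta, r)
    Fp = cavity_reflection(delta + delta_mod, r)
    Fm = cavity_reflection(delta - delta_mod, r)
    return -np.imag(F0 * np.conj(Fp) - np.conj(F0) * Fm)

# Sweep the laser across one resonance and estimate the discriminator slope
delta = np.linspace(-0.05, 0.05, 2001)            # rad of round-trip phase
err = pdh_error_signal(delta, delta_mod=0.3)
slope_at_resonance = np.gradient(err, delta)[len(delta) // 2]
```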

toward a theory of stakeholder identification and salience (definitive dependent stakeholders)

Since Freeman (1984) published his landmark book, Strategic Management: A Stakeholder Approach, the concept of "stakeholders" has become embedded in management scholarship and in managers' thinking. Yet, as popular as the term has become and as richly descriptive as it is, there is no agreement on what Freeman (1994) calls "The Principle of Who or What Really Counts." That is, who (or what) are the stakeholders of the firm? And to whom (or what) do managers pay attention? The first question calls for a normative theory of stakeholder identification, to explain logically why managers should consider certain classes of entities as stakeholders. The second question calls for a descriptive theory of stakeholder salience, to explain the conditions under which managers do consider certain classes of entities as stakeholders. Stakeholder theory, reviewed in this article, offers a maddening variety of signals on how questions of stakeholder identification might be answered. We will see stakeholders identified as primary or secondary

Interplay between soft and hard hadronic components for identified hadrons

Interplay between soft and hard hadronic components for identified hadronsin relativistic heavy ion collisionsTetsufumi Hirano1,2and Yasushi Nara31Physics Department,University of Tokyo,Tokyo113-0033,Japan2RIKEN BNL Research Center,Brookhaven National Laboratory,Upton,New York11973,USA 3Department of Physics,University of Arizona,Tucson,Arizona85721,USA(Received10July2003;published25March2004)We investigate the transverse dynamics in Au+Au collisions atͱs NN=200GeV by emphasis upon the interplay between soft and hard components through p T dependences of particle spectra,ratios of yields, suppression factors,and ellipticflow for identified hadrons.From hydrodynamics combined with traversing minijets which go through jet quenching in the hot medium,we calculate interactions of hard jets with the soft hydrodynamic components.It is shown by the explicit dynamical calculations that the hydrodynamic radial flow and the jet quenching of hard jets are the keys to understand the differences among the hadron spectra for pions,kaons,and protons.This leads to the natural interpretation for N p/N␲ϳ1,R AAտ1for protons,and v2pϾv2␲recently observed in the intermediate transverse momentum region at Relativistic Heavy Ion Collider. DOI:10.1103/PhysRevC.69.034908PACS number(s):24.85.ϩp,25.75.Ϫq,I.INTRODUCTIONA vast body of data has already been collected and ana-lyzed during the past few years at Relativistic Heavy Ion Collider(RHIC)[1]toward a complete understanding of the dense QCD matter which is created in high energy heavy-ion collisions.At collider experiments,it is well known that high p T perturbative QCD(pQCD)processes become so large as to observe jet spectra.One of the most important new physics revealed in heavy ion collisions at RHIC energies is to study propagation of(mini)jets in dense QCD matter.Jet quench-ing has been proposed[2]as a possible signal of deconfined nuclear matter,the quark gluon plasma(QGP)(for a recent review,see Ref.[3]).Over the past years,a lot of work has been devoted to study the propagation of jets through QCD matter[4–7].Recent data at RHIC indicate that both the neutral pion [8,9]and the charged hadron[10–12]spectra at high p T in central Au+Au collisions are suppressed relative to the scaled pp or large centrality spectra by the number of binary collisions.However,protons do not seem to be quenched in the moderate p T range[13].Furthermore,the proton yield exceeds the pion yield around p Tϳ2–3GeV/c which is not seen in elementary hadronic collisions[12].The STAR Col-laboration also shows that⌳/K0ϳ1at a transverse momen-tum of2–3GeV/c[14].pQCD calculations are successful in describing hadron spectra in Au+Au collisions as well as pp collisions by taking account of nuclear effects such as Cronin effect,nuclear shadowing effect,and energy loss of jets[15].However,large uncertainty of the proton fragmen-tation function makes the understanding of the baryon pro-duction mechanism unclear[16]even in pp collisions.On the other hand,several models have been proposed by con-sidering interplay between nonperturbative soft physics and pQCD hard physics:baryon junction[17,18],parton coales-cence[19–23],medium modification of the string fragmen-tation[24],and a parametrization with hydrodynamic com-ponent combined with the nonthermal components[25]in order to explain the anomalous baryon productions and/or large ellipticflow discovered at RHIC.It is said that hydrodynamics[26–29]works very well for explanation of ellipticflow data at RHIC energies,in the low p T region,in small 
centrality events,and at midrapidity,in-cluding the mass dependence of hadrons(for recent reviews, see Ref.[30]).This suggests that hydrodynamics could be reliable for the description of the time evolution of soft sec-tor of matter produced in high energy heavy-ion collisions at RHIC.Certainly,it is more desirable to describe the time evolution of the whole stage in high energy heavy-ion colli-sions by simulating collisions of initial nuclear wave func-tions.Instead,they simply assume that the system created in heavy-ion collisions reaches local thermal equilibrium state at some time.Due to the above two reasons,a model which treats a soft sector by hydrodynamics and a hard sector based on a pQCD parton model is turned out to be useful in order to understand experimental data at RHIC from low to high p T.Indeed,first attempts based on this concept has been done by pQCD cal-culations which include hydrodynamic features[31–33].Mo-tivated by these works,we have recently developed a two component dynamical model(hydroϩjet model)[34]with a fully three-dimensional hydrodynamic model[28]for the soft sector and pQCD jets for the hard sector which are com-puted via the PYTHIA code[35].Usually,it is possible tofit hadron spectra up to high momentum,say p Tϳ2–3GeV/c,within hydrodynamics by adjusting kinetic freeze-out temperature T th which is a free parameter in the model[36].Thus it is unclear which value of T th should be used when one wants to add jet components into hydrodynamic components for the description of high p T part.However,we are free from this problem thanks to in-clusion of the early chemical freeze-out picture into hydro-dynamics.One of the authors studied the effects of chemical freeze-out temperature T ch which is separate from kinetic one T th in hydrodynamic model in Ref.[29].It was found that thePHYSICAL REVIEW C69,034908(2004)p T slope for pions remains invariant under the variation of T th and that the hydrodynamic model with early chemical freeze out is able tofit the transverse momentum distribution of pions up to1–2GeV/c.Therefore,it is certain to incor-porate hard partons into the hydrodynamics with early freeze out in order to account for the high transverse momentum part of the hadronic spectrum.We note that,since we do not assume thermalization for the high p T jets,a hydrodynamical calculation with the initial conditions taken from pQCD ϩfinal state saturation model[37]is different from ours.In this paper,we shall study identified hadron spectra from low to high p T within the hydroϩjet model.In particu-lar,we focus on the influence of the hydrodynamic radial flow on the pQCD predictions for the transverse spectra.Pa-rameters in the hydrodynamic part of the model have been alreadyfixed byfitting the pseudorapidity distribution.Pa-rameters related to the propagation of partons are also ob-tained byfitting the neutral pion suppression factor by PHENIX and are found to be consistent[38]with the back-to-back correlation data from STAR[39].The paper is organized as follows.In Sec.II,we describe the main features of our model.We will represent results of transverse momentum distributions for pions,kaons,and protons in Sec.III A.Nuclear modification factor(suppres-sion factor)for identified hadrons and particle ratios are dis-cussed in Sec.III B.Ellipticflow for identified hadrons is discussed in Sec.III C.Section IV summarizes this paper.II.MODEL DESCRIPTIONIn this section,we explain in some detail the hydro+jet model as a dynamical model to describe relativistic heavy ion 
collisions.A.HydrodynamicsLet us start with the review of our hydrodynamics.Main features of the hydrodynamic part in the hydro+jet model are the following.Although initial conditions and prethermalization stages are very important subjects in the physics of heavy ion col-lisions(see,e.g.,Refs.[40,41]),these are beyond the scope of this paper.Instead,assuming local thermal equilibrium of partonic/hadronic matter at an initial time␶0,we describe afterward the space-time evolution of thermalized matter by solving the equations for energy-momentum conservation ץ␮T␮␯=0,T␮␯=͑e+P͒u␮u␯−Pg␮␯͑1͒in the full three-dimensional Bjorken coordinate͑␶,x,y,␩s͒. Here e,P,and u␮are,respectively,energy density,pressure, and local four velocity.␶=ͱt2−z2is the proper time and ␩s=͑1/2͒ln͓͑t+z͒/͑t−z͔͒is the space-time rapidity. Throughout this paper,we consider baryon free matter n B=0at RHIC energies.In order to obtain reliable solu-tions of Eq.͑1͒especially in the longitudinal direction at collider energies,␶and␩s are substantial choices for time and longitudinal directions rather than the Cartesian coor-dinate.Assuming N f=3massless partonic gas for the QGP phase,an ideal gas equation of state with a bag constant B1/4=247MeV is used in the high temperature phase.We use ahadronic resonance gas model with all hadrons up to ⌬͑1232͒for later stages of collisions.Possiblefinite baryonic effects such as a repulsive meanfield[42]are not includedbecause of the low baryon density at RHIC[43].Phase tran-sition temperature is set to be T c=170MeV.For the had-ronic phase,a partial chemical equilibrium model withchemical freeze-out temperature T ch=170MeV is employedto describe the early chemical freeze-out picture of hadronicmatter.Although chemical freeze-out temperatureT ch͑ϳ160–170MeV͒is usually found to be larger than ki-netic freeze-out temperature T th͑ϳ100–140MeV͒from sta-tistical model analyses and thermal modelfitting[44],the sequential freeze out is not considered so far in the conven-tional hydrodynamics except for a few work[29,45–47].As a consequence of this improvement,the hadron phase cools down more rapidly than the one in usual hydrodynamic cal-culations in which T ch=T th is assumed[29,45].It should be emphasized that the slope of pions in the transverse momen-tum distribution becomes insensitive to the choice of the kinetic freeze-out temperature T th and that the hydrodynam-ics with early chemical freeze out reproduces the RHIC data of the pion transverse momentum only up to1.5GeV/c[48]. 
This is one of the strong motivations which leads us to com-bine our hydrodynamics with nonthermalized hard compo-nents.From hydrodynamic simulations,we evaluate hadronicspectra which originate from thermalized hadronic matter.For hadrons directly emitted from freeze-out hypersurface⌺,we calculate spectra through the Cooper-Frye formula[49] EdN id3p=d i͑2␲͒3͵⌺p␮d␴␮exp͓͑p␮u␮−␮i͒/T th͔ϯ1,͑2͒where d i is a degeneracy factor,␮i is a chemical potential,p␮is a four-momentum in the center of mass frame of colliding two nuclei,and−͑+͒sign is taken for bosons͑fermions͒.We should note the existence of chemical potentials␮i for all hadrons under consideration due to early chemical freeze out.Typical values at T th=100MeV are as follows:␮␲=83MeV,␮K=181MeV,and␮p=␮p¯=349MeV.For had-rons from resonance decays,we use Eq.͑2͒for resonance particles at freeze out and afterward take account of decay kinematics.Here these resonances also have their own chemical potentials at freeze out.We call the sum of the above spectra the soft component or the hydro component throughout this paper.Initial energy density at␶0=0.6fm/c is assumed to be factorizede͑x,y,␩s;b͒=e max W͑x,y;b͒H͑␩s͒.͑3͒Here the transverse profile W͑x,y;b͒is proportional to the number of binary collisions and normalized as W͑0,0;0͒=1,whereas longitudinal profile H͑␩s͒is flat and unity near midrapidity and falls off smoothly at large rapidity.In H͑␩s͒, we have two adjustable parameters␩flat and␩Gauss which parametrize the length of flat region near midrapidity andTETSUFUMI HIRANO AND YASUSHI NARA PHYSICAL REVIEW C69,034908(2004)the width of Gaussian in the forward/backward rapidity region,respectively.These parameters are chosen so as to reproduce the shape of dN /d ␩or dN /dY .We choose e max =40GeV/fm 3,␩flat =4.0,and ␩Gauss =0.8.As shown in Fig.1,the pseudorapidity distribution of charged hadrons in 5%central collisions observed by the BRAHMS Collaboration [50]is satisfactory reproduced by using the above parameters.Here we choose an impact pa-rameter as b =2fm for this centrality.These initial param-eters give us an average initial energy density about 5GeV/fm 3in the transverse plane ␩s =0at ␶=1fm/c [51].A contribution from minijets is neglected in the hydrody-namic fitting,since it is less than 5%effect to the total hadron yield at RHIC when we define minijets as particles with transverse momentum larger than 2GeV/c .Initial con-ditions for transverse profile are scaled by the number of binary collisions.It is found that the 20–30%semicentral collision data is also reproduced simply by choosing b as 7.2fm in the transverse profile W [52].In Fig.2,we show the transverse spectra for negative pions,negative kaons,and protons in Au+Au collisions at ͱs NN =200GeV from the hydrodynamic model for impact parameters b =2.0fm and 7.2fm.Thermal freeze-out tem-perature T th =100MeV is used in the calculation.This choice is consistent with the data at ͱs NN =130GeV [29].The flatter behavior at low p T for kaons and protons is indeed a conse-quence of the radial flow effect.A remarkable feature on the hydrodynamical result is that p /␲−Ͼ1and K −/␲−ϳ1above p T ϳ2GeV/c .It is,however,questionable to assume ther-malization at high p T region.In fact,hydrodynamical predic-tions overestimate elliptic flow data at the large transverse momentum region.It is interesting to ask at which p T hydro-dynamic behavior ceases and switches to pQCD results.We will see in the following section how these hydrodynamical results are modified by including the pQCD hard 
component.B.Jet propagationsFor the hard part of the model,we generate hard partonsaccording to a pQCD parton model.The number of jets at an impact parameter b are calculated fromN hard ͑b ͒=͵d 2r Ќ␴jet T A ͑r Ќ−b /2͒T B ͑r Ќ+b /2͒,͑4͒where ␴jet is a hard cross section from leading order pQCD convoluted by the parton distribution functions and mul-tiplied by a K factor which takes into account higher order contributions.T A and r Ќare,respectively,a nuclear thick-ness function normalized to be ͐d 2r ЌT A =A and a trans-verse coordinate vector.Here we use the Woods-Saxon distribution for the nuclear density profile.We use PYTHIA 6.2[35]for the generation of momentum spectrum of jets through 2→2QCD hard processes.Initial and final state radiations are used to take into account the enhancement of higher-order contributions associated with multiple small-angle parton emission.Scale Q 2dependent nuclear shadowing effect is included for the mass number A nucleus assuming the impact param-eter dependence [53]:S ͑A ,x ,Q 2,r Ќ͒=1+͓S ͑A ,x ,Q 2͒−1͔AT A ͑r Ќ͒͵d 2r ЌT A ͑r Ќ͒2,͑5͒where the EKS98parametrization ͓54͔is used for S ͑A ,x ,Q 2͒.Then the nuclear parton distribution function in this model has the formf A ͑A ,x ,Q 2,r Ќ͒=S ͑A ,x ,Q 2,r Ќ͒ϫͫZA f p ͑x ,Q 2͒+͑A −Z ͒Af n ͑x ,Q 2͒ͬ,͑6͒where f p ͑x ,Q 2͒and f n ͑x ,Q 2͒are the parton distribution functions for protons and neutrons.We simply assume the charge of a nucleus to be Z =A /2in consistency with the soft part,since our fluids are assumed to be isospin symmetric as well as baryon free matter.Cronin effect [55],which has also been discovered in re-cent RHIC experiments [56],is usually considered astheFIG.1.(Color online )Pseudorapidity distribution of charged particles in Au+Au collisions at ͱs NN =200GeV is compared to data from BRAHMS [50].Solid (dashed )line represents the hydro-dynamic result at b=2.0͑7.2͒fm.FIG.2.(Color online )Transverse momentum spectra for nega-tive pions,negative kaons,and protons from the hydro model with early chemical freeze out in Au+Au collisions at ͱs NN =200GeV.We choose an impact parameter as b =2.0͑7.2͒fm corresponding to 0–5%͑20–30%͒centrality.Yields are divided by 103for b =7.2fm results.INTERPLAY BETWEEN SOFT AND HARD HADRONIC …PHYSICAL REVIEW C 69,034908(2004)multiple initial state scattering effect.Understanding this ef-fect becomes an important subject in RHIC physics[57–59]. We employ the model in Ref.[57]to take into account the multiple initial state scatterings,in which initial k T is broad-ened proportional to the number of scatterings:͗k T2͘NA=͗k T2͘NN+␦2͑Q2͓͒␴NN T A͑rЌ͒−1͔,͑7͒where␴NN is the inelastic nucleon-nucleon cross section and ␦2͑Q2͒is the scale dependent k T broadening per nucleon-nucleon collision whose explicit form can be found in Ref.͓57͔.We need to specify a scale which separates a soft sector from a hard sector,in other words,a thermalized part from a nonthermalized part in our model.We include minijets with transverse momentum p T,jet larger than2GeV/c just after hard scatterings in the simulation.These minijets explicitly propagate throughfluid elements.Since we only pick up high p T partons from PYTHIA and throw them intofluids,there is ambiguity to connect color flow among partons.Thus we use an independent fragmen-tation model option in PYTHIA to convert hard parton to had-rons instead of using the default Lund string fragmentation model.We note that the independent fragmentation model should not be applied at low transverse momentum region. 
We have checked that the neutral pion transverse spectrum in pp collisions at RHIC[60]is well reproduced by selecting the K factor K=2.5,the scale Q=p T,jet/2in the CTEQ5lead-ing order parton distribution function[61],and the primor-dial transverse momentum͗k T2͘NN=1GeV2/c2as shown in Fig.3.As shown in the bottom panel of the Fig.3,indepen-dent fragmentation model predictions for pions and kaons are very close to those from the Lund string fragmentation model in p TϾ2GeV/c,where K=2and Q=p T,jet/2is used in the Lund string model case and non-perturbative inelastic soft processes are included.However,the yield of protons from the independent fragmentation scheme becomes much less than that from Lund string model predictions as seen in Fig.3.We found that the Lund fragmentation scheme is fa-vored in terms of the recent STAR data of protons in pp collisions[62].In what follows,we make corrections for our p T spectra of kaons and protons in Au+Au collisions accord-ing to the result in the bottom panel of Fig.3.In order to see the theoretical uncertainties on the frag-mentation scheme deeply,we also plot the results from NLOpQCD calculations[63]with the MRST99[64]set of parton distribution functions.In Fig.3,we show results from two different fragmentation functions.The solid lines are ob-tained from Kniehl-Kramer-Potter(KKP)fragmentation functions[65]with renormalization scale␮,factorization scale M,and fragmentation scale M f equal to p T.NLOpQCD prediction with KKP fragmentation functions is consistent with the pion data.NLOpQCD predictions with the Kretzer fragmentation functions[66]assuming␮=M=M F=p T/2un-derestimate pion yields,while yields for kaons and protons are the same as the predictions from the PYTHIA default Lund string fragmentation model.Initial transverse positions of jets at an impact parameter b are determined randomly according to the probability P͑rЌ,b͒specified by the number of binary collision distribu-tion,P͑rЌ,b͒ϰT A͑rЌ+b/2͒T A͑rЌ−b/2͒.͑8͒Initial longitudinal position of a parton is approximated by the boost invariant distribution͓67͔:␩s=Y,where Y=͑1/2͒ln͓͑E+p z͒/͑E−p z͔͒is the rapidity of a parton.Jets are freely propagated up to the initial time␶0of hydrody-namic simulations by neglecting the possible interactions in the prethermalization stages.Jets are assumed to travel with straight line trajectory in a time step:⌬r i=p im T cosh͑Y−␩s͒⌬␶,͑i=x,y͒,͑9͒⌬␩s=1␶tanh͑Y−␩s͒⌬␶,͑10͒where m T=ͱm2+p T2is a transverse mass.Jets can suffer interaction withfluids and lose their ener-gies.We employ the approximatefirst order formula [Gyulassy-Levai-Vitev(GLV)formula]in opacity expansion from the reaction operator approach[7]for the energy loss of partons throughout this work.The opacity expansion isrel-FIG.3.(Color online)Comparison with various models for in-clusive pion,kaon,and proton transverse momentum distributions in pp collisions atͱs=200GeV.Solid and dotted histograms cor-respond to the results from PYTHIA with independent fragmentation, and default Lund fragmentation,respectively.Solid and dotted lines are,respectively,from NLOpQCD calculations with KKP and Kretzer fragmentation functions.TETSUFUMI HIRANO AND YASUSHI NARA PHYSICAL REVIEW C69,034908(2004)evant for the realistic heavy ion reactions where the number of jet scatterings is small.The energy loss formula for coher-ent scatterings in matter has been applied to analysis of heavy-ion reactions taking into account the expansion of the system [15,31–33].The approximate first order formula in this approach can be 
written as⌬E =C͵␶0ϱd ␶␳͑␶,x ͑␶͒͒͑␶−␶0͒lnͩ2E 0␮2Lͪ.͑11͒Here C is an adjustable parameter and ␳͑␶,x ͒is a thermali-zed parton density in the local rest frame of fluid elements in the hydro+jet approach ͓68͔.x ͑␶͒and E 0are the position and the initial energy of a jet,respectively.The initial energy E 0in Eq.͑11͒is Lorentz-boosted by the flow velocity and re-placed by p 0␮u ␮where p 0␮is the initial four momentum of a jet and u ␮is a local fluid velocity.We take a typical screen-ing scale ␮=0.5GeV and effective path length L =3fm which is chosen from the lifetime of the QGP phase.Here we choose C =0.45͓69͔which is found to reproduce the neutral pion R AA defined by Eq.͑12͓͒9͔.Our purpose here is not a detailed study of jet quenching mechanisms.In-stead,we first fit the suppression factor for neutral pions and next see other hadronic spectra.Feedback of the energy to fluid elements in central colli-sions was found to be about 2%of the total fluid energy.Hence we can safely neglect its effect on hydrodynamic evo-lution in the case of the appropriate amount of energy loss.In Fig.4,we show the jet quenching rate as a function of proper time for 5GeV/c jets.We count the number of par-tons with 4.5Ͻp T,jet Ͻ5.5GeV/c at each time step,and then define the ratio of the current number of jets to the initial number of jets N jet ͑␶͒/N jet ͑␶0͒.Most jet quenching is com-pleted at early times less than 4fm/c .For comparison,we also plot the jet quenching rate for a constant energy loss case dE /dx ϰ␳͑␶͒.Jet quenching is almost finished at ␶ϳ2fm/c in the case of constant energy loss.From Fig.4,the degree of decrease for the jet quenching rate in the GLV formula becomes milder and continues longer than that in theincoherent model.This is due to the existence of ␶in the integrand in Eq.(11)which comes from the property of co-herent (Landau-Pomeranchuk-Migdal [70])effect.Contrary to the simple Bjorken’s ansatz [67],␳͑␶͒=␳0␶0/␶,there exists transverse flow and the parton density profile in the trans-verse plane is not flat in our simulations.This is the reason why jets are quenched only in the QGP phase and why jet quenching in the mixed phase is totally negligible.We include p Ќbroadening accompanied by the energyloss of jets with the formula ͗p Ќ2͘ϳ͐d ␶␳͑r ͒as in Ref.[38].We found that this effect is small in all results in this paper.Within our model,we neglect energy loss before thermal-ization,in our case,␶Ͻ0.6fm/c .One would ask if it is important to take into account the energy loss effects before thermalization because parton density has the maximum value.We can,however,fit the suppression factor R AA by rescaling the energy loss parameter C when the initial time ␶0is changed.The question about the jet quenching before ther-malization is beyond our model description.As a possible model for a study of jet interactions at early times,propaga-tion of jets in the classical Yang-Mills fields based on the idea of the color glass condensate [40,71]is proposed in Ref.[72].It would be interesting to take numerical results from the full lattice calculations [73]for the calculations of jet energy loss at the very early stages of the collisions.III.RESULTSWe discuss in this section transverse dynamics for pions,kaons,and protons from the hydro+jet model focusing on the intermediate p T where interplay between soft and hard components is expected to be crucial.As mentioned in the preceding section,a parameter for jet quenching C was al-ready fixed by fitting the observed data for neutral pions in central Au+Au 
collisions from PHENIX.Freeze-out tem-perature T th =100MeV is used for hydrodynamics.All re-sults in this section are for midrapidity ͉␩͉Ͻ0.35.A.Transverse momentum distributions for identified particlesFirst,we show the transverse momentum distributions for pions,kaons,and protons from the hydro+jet model in Fig.5in central as well as semicentral Au+Au collisions at RHIC.Each spectrum is the sum of the soft component and the hard component.Before summation,the hard component is mul-tiplied by a “switch Љfunction [31]͕1+tanh ͓2͑p T −p T,cut ͔͖͒/2(where p T is in the unit of GeV/c and p T,cut =2GeV/c )in order to cut the unreliable components from the independent fragmentation scheme and also to fit R AA for neutral pions [9].We have checked the cutoff parameter de-pendence in the switch function on the pion spectrum and found that we are not able to fit the pion data anymore even with p T,cut =1.8or 2.2GeV/c .So the ambiguity of the cut off can be removed to fit the pion data within our approach.At low transverse momentum region p T Ͻ1GeV/c ,the shapes remain the same as hydro predictions as one can check from Fig.2.Also at high transverse momentum,spec-tra are identical to those of pQCD predictions with an appro-priate amount of jetquenching.FIG.4.(Color online )Jet quenching rate N jet ͑␶͒/N jet ͑␶0͒for p T =5GeV/c jets in Au+Au collisions at ͱs NN =200GeV.Jet quenching rate for 10GeV/c jets is very similar to that of the 5GeV/c jets.INTERPLAY BETWEEN SOFT AND HARD HADRONIC …PHYSICAL REVIEW C 69,034908(2004)Our calculation includes interactions of minijets with QGP fluids.We also note that there remains a pQCD-like power law behavior in all hadrons at high transverse momen-tum.This may indicate no hint for the thermalization at high transverse momentum.However,energy loss results in a par-allel shift of hadronic spectra,since the energy loss model used in this paper shows almost flat quenching pattern as shown in our previous analysis [38].In Fig.6,we decompose the spectra into hydro parts and minijet parts.Here the yields from hard components are mul-tiplied by the switch function again.It is seen that both soft and hard components are important for the hadron spectra in the transverse momentum of the range around 2Շp T Շ5GeV/c depending on the hadron mass.We can define the crossing point of transverse momentum p T,cross at which the yield from the soft part is identical to that from the hard part.p T,cross moves toward high momentum with mass of particles because of the effects of radial flow.In central col-lisions,p T,cross ϳ1.8,2.5,and 3.5GeV/c for pions,kaons,and protons,respectively.Minijet spectra are recovered at p T ϳ3.4GeV/c for pions,p T ϳ4.0GeV/c for kaons,and p T ϳ5.0GeV/c for protons.We give some remarks as follows:(i )The point at which hydrodynamic and pQCD spectra cross is determined by the dynamics of the system.The ra-dial flow pushes the soft components toward high p T region,while the dense matter reduces the pQCD components through parton energy loss.The crossing of two spectra causes by interplay of these two effects.(ii )At p T =2–3GeV/c ,the yields of pions and kaons are no longer occupied by soft hydrodynamic component.On the other hand,the proton yield from pQCD prediction is about ten times smaller than that of hydro in the transverse mo-mentum region.(iii )One may try to extract the strength of radial flow and the kinetic freeze-out temperature from experimental data through the hydrodynamics-motivated fitting model.Then one should pay attention to the 
fitting range of the transverse momentum.In particular,p T spectrum for pions may have no room to fit by a simple thermal spectrum:Contribution fromresonance decays becomes important below p T ϳ0.5GeV/c ,while the hard component slides in the soft component near p T ϳ1.0GeV/c .(iv )We predict positions of the inflection point where p T spectrum becomes convex to concave:p T ϳ3GeV/c for ka-ons and ϳ4GeV/c for protons.These are the indicators of a transition from soft physics to hard physics.The amount of the hydrodynamic contributions to the hadron yields for each particle found in the hydro ϩjet model is very similar to that found in Ref.[25]in which hybrid parametrization of hydrodynamics with the spectral shape in pp collisions.It is also remarkable that baryon junction [17,18]and quark coalescence models [20–22]predicts the same behavior.Quark coalescence models are successful in explaining the mass dependence of p T slopes [75,76].For example,one can easily understand the difference of the transverse slopes of baryons and mesons from a quark coa-lescence hadronization mechanism.A baryon momentum is a sum of three quarks (quark momenta must be almost parallel in order to cluster ),but a momentum of mesons is a sum of two quarks.It is interesting to see,for example,␾meson spectrum in order to distinguish the mass effects in hydrody-namics from meson-baryon effects in coalescence models.B.Suppression factors and particle ratiosWe now turn to the study of the suppression factors R AA for each hadron definedbyFIG.5.(Color online )Transverse momentum spectra for nega-tive pions,negative kaons,and protons from the hydro ϩjet model in Au+Au collisions at ͱs NN =200GeV at the impact parameter of b =2.0and 7.2fm.Yields are divided by 103for b =7.2fmresults.FIG.6.(Color online )Each contribution from hydrodynamics and minijets for ␲−,K −,and p in Au+Au collisions at ͱs NN =200GeV at the impact parameter of b =2.0fm.Yield of negative kaons (protons )is divided by 103͑106͒.PHENIX data are from Ref.[74].TETSUFUMI HIRANO AND YASUSHI NARA PHYSICAL REVIEW C 69,034908(2004)。
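As an illustration of how the soft and hard pieces are joined, the sketch below implements the switch-function prescription quoted above, {1 + tanh[2(pT − pT,cut)]}/2 with pT,cut = 2 GeV/c. The spectral shapes used in the example are toy functions of my own, not the hydrodynamic or PYTHIA output of the hydro+jet model.

```python
import numpy as np

def switch(pt, pt_cut=2.0):
    """Smooth switch function from the prescription quoted above:
    {1 + tanh[2(pT - pT,cut)]}/2, with pT in GeV/c."""
    return 0.5 * (1.0 + np.tanh(2.0 * (pt - pt_cut)))

def total_spectrum(pt, soft, hard, pt_cut=2.0):
    """Total yield: hydrodynamic (soft) part plus pQCD (hard) part, the
    latter damped below pT,cut by the switch function."""
    return soft + switch(pt, pt_cut) * hard

# Illustrative-only toy shapes (NOT the model's actual spectra): an
# exponential 'hydro-like' part and a power-law 'jet-like' part.
pt = np.linspace(0.0, 8.0, 161)
soft = 200.0 * np.exp(-pt / 0.45)
hard = 5.0 / (1.0 + pt) ** 6
spectrum = total_spectrum(pt, soft, hard)
```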

Regulatory Sandbox: A New Approach to Data Element Governance

CHEN Zhenqi*

* This paper is an output of the key project of the National Social Science Fund of China, "Research on the Reform of the Government Information Disclosure System in the Big Data Era" (Project No. 18AFX007).

Abstract: How to form an inclusive and prudent regulatory environment is a key issue in current data element governance. Using empirical and comparative research methods, this paper argues that the rigid control of bureaucratic (hierarchical) supervision leaves no institutional space for fault tolerance and error correction, and that the hierarchical division of regulatory departments also causes information-flow problems within organizations, so that such supervision cannot meet the requirements of inclusive and prudent regulation. The data regulatory sandbox embodies the regulatory concept of inclusiveness and prudence, with institutional features such as multiple participating subjects, openness and transparency, and exemption from rules. Collaborative supervision within the data regulatory sandbox can increase institutional flexibility, and its flat organizational structure can facilitate the flow of information, ultimately resolving the dilemma of bureaucratic regulation. The paper proposes countermeasures for building China's data regulatory sandbox in terms of the implementing body, application qualifications, regulatory behaviour and regulatory methods: the "National Data Bureau + industry authorities" should act as the implementing body, innovativeness should be the core criterion for application qualification, regulatory behaviour should focus on releasing the value of data elements, and the regulatory purpose should be achieved through the collaborative governance of enterprises, industry organizations and platforms.

Keywords: data regulatory sandbox; data element governance; bureaucratic regulation; inclusiveness and prudence

Citation: CHEN Zhenqi. Regulatory Sandbox: a New Approach to Data Element Governance [J]. Library Tribune (图书馆论坛), 2024, 44(4): 138-147.

0 Introduction
Data, as a new type of production factor, is the foundation driving the development of the digital economy.

2021 Postgraduate Entrance Exam English Preparation: Remembering Words through Reading (Part 8)

Many things now need to go on the agenda for postgraduate entrance exam English preparation; the reading passage below is provided for vocabulary practice.

Even plants can run a fever, especially when they're under attack by insects or disease. But unlike humans, plants can have their temperature taken from 3,000 feet away - straight up. A decade ago, adapting the infrared scanning technology developed for military purposes and other satellites, physicist Stephen Paley came up with a quick way to take the temperature of crops to determine which ones are under stress. The goal was to let farmers precisely target pesticide spraying rather than rain poison on a whole field, which invariably includes plants that don't have pest problems.

Even better, Paley's Remote Scanning Services Company could detect crop problems before they became visible to the eye. Mounted on a plane flown at 3,000 feet at night, an infrared scanner measured the heat emitted by crops. The data were transformed into a color-coded map showing where plants were running "fevers". Farmers could then spot-spray, using 50 to 70 percent less pesticide than they otherwise would.

The bad news is that Paley's company closed down in 1984, after only three years. Farmers resisted the new technology and long-term backers were hard to find. But with the renewed concern about pesticides on produce, and refinements in infrared scanning, Paley hopes to get back into operation. Agriculture experts have no doubt the technology works. "This technique can be used on 75 percent of agricultural land in the United States," says George Oerther of Texas A&M. Ray Jackson, who recently retired from the Department of Agriculture, thinks remote infrared crop scanning could be adopted by the end of the decade. But only if Paley finds the financial backing which he failed to obtain 10 years ago.

Graduate Student Chen Zhongxian (陈中宪)

Graduate student: Hsu Chung-Li (許中立). Student ID: 77871010. Thesis title: Stability analysis of the Jwo-Shoei Alley landslide in Taichung using the finite element method. English thesis title: Study on the Jwo-Shoei alley landslide slope stability with application of the finite element analysis method.

[Abstract] In recent years, research on landslide and slope-failure hazards in this province has received growing attention. Studies applying numerical methods to the analysis of slope sliding failure, such as the variational method and the finite element method, have been mentioned, but their general application is not yet mature.

This study uses two-dimensional triangular elements within the finite element method to discretize and simulate slope failure at a landslide site, taking the Jwo-Shoei Alley landslide in Taichung as the test site for verification; field investigation, monitoring and soil property tests were carried out there. The results were compared and cross-checked against Taylor's friction circle method and Fellenius' method of slices to examine their practicality, with the aim of finding a simple and reasonable slope stability analysis method for reference in later applications.

From the monitoring data and analysis results, the following conclusions were drawn:
1. The strain-tube measurements show that the daily movement of this landslide ranges from the latent to the semi-definite grade; movement at BH2 and BH5 was comparatively intense, and during the observation period the continued sliding of the strata at BH5 caused the strain gauges to exceed their measuring range and fail.
2. The landslide is a dip slope. According to the strain-tube observations, the probable sliding depths are: BH1 at 8 m, BH2 at 2 m, BH3 at 8 m, and BH4 at 4 m and 9 m.
3. Laboratory test results show that the interface between the surface colluvium and the weathered shale is where the soil parameters vary most; once water accumulates in the shale layer, the soil softens easily and sliding occurs.
4. The sliding-surface location estimated by the finite element analysis broadly agrees with that obtained from the field tests. However, when the finite element method is used to identify failed elements, the most important input is the soil strength; if the soil strength parameters are not correctly matched to site conditions, the estimate can be greatly in error.
5. When the finite element method is used to analyse the stability of a landslide slope, the soil stress-strain relation is assumed to be linear elastic, so the method is suited to the small deformations of latent-grade landslides, such as the BH2-BH4 cross-section.

1990-Theory of Steady-State Passive Films II

The Point Defect Model

Before presenting our analysis of steady-state films, it is first convenient to summarize the assumptions contained in the PDM. This is done not only to restate the assumptions themselves, but also to provide additional evidence for their validity with regard to the systems being considered. The most important assumptions are as follows:
1. The film contains high concentrations of point defects (cation vacancies V_M^x' and oxygen vacancies V_O^••), where x is the oxide stoichiometry (MO_x/2).
2. The metal/film and film/solution interfaces are in electrochemical equilibrium.
3. The electrical potential drop across the film/solution interface is a linear function of the applied voltage (V) and the pH of the solution.
The formation of anodic passive films on metal surfaces has been the subject of intense interest over the past one h u n d r e d and fifty years (1-32). This interest has arisen because the corrosion resistance of most industrially important metals and alloys is due to kinetic inhibition of various anodic processes at the interface rather than being due to inherent thermodynamic stability. Passive films generally are v e r y thin; for example, those on nickel and stainless steels are normally less than 5 n m (50A) thick and yet they separate phases (metal and environment) which may react spontaneously a n d violently on contact. Finally, micrographic and microchemical examinations of passive films formed o n m a n y metals of interest reveal that they form as bilayers, consisting of a compact base layer underlying a porous upper layer (Fig. 1). Marker studies (33-38) have shown that the barrier (primary passive) layer grows into the metal, whereas the upper layer forms as a precipitate (frequently amorphous) by hydrolysis of cations emerging from the base film. Clearly, any viable theory for the growth of passive films on metal and alloy surfaces must take into account the various processes that lead to the formation of the bilayer structure. Despite the considerable work (1-32) that has been carried out over the past century and a half to define the structural, chemical, and electrical properties of thin passive films, no completely satisfactory model has been advanced to explain the kinetics of their growth. This is, in part, due to the difficulty in obtaining reliable kinetic parameters for the growth of thin (frequently <5 n m thick) passive films u n d e r carefully controlled conditions. Also theoretical analyses have proven to be inadequate for a variety of reasons, including the uncertainty regarding the physical and electronic properties of extremely thin (hydrated) oxide films and the roles played by various charge carriers (e',l~, cation vacancies, anion vacancies) in the growth process. With regard to the latter, the role of space charge in determ i n i n g the properties of passive films remains a crucial issue, in spite of the extensive work (1-6) that has been reported on the subject. However, in n o n e of these analyses, nor in any of the theoretical treatments referred to above, is film dissolution apparently taken into account or is the bilayer structure incorporated in a mathematical description of the film. Intuitively, film dissolution (as opposed to metal dissolution through the film) m u s t be important, since it represents one process by which metal is lost from the surface, and in the steady state the rate of film growth must be equal to the rate of film dissolution. Recently, Chao, Lin, and Macdonald (26-28) advanced a "point defect" model (PDM) to explain the growth (26), breakdown (27), and impedance (28) characteristics of passive films on metal surfaces. This model has been used to interpret these characteristics of anodic oxide films on iron and nickel (26-28), and to account (29) for the role of

Bilek,S-Institut...

Bilek, S.L. and L.J. Ruff, 2002. Analysis of the June 23, 2001 Mw=8.4 Peru underthrusting earthquake and its aftershocks, Geophys. Res. Lett, 29 (20), 1960, doi:10.1029/2002GL015543.ReferencesBeroza, G. C., and W. L. Ellsworth, Properties of the seismic nucleation phase, Tectonophysics, 261, 209-227, 1996.Ellsworth, W. L., and G. C. Beroza, Seismic evidence for a seismic nucleation phase, Science, 268, 851-855, 1995.Fukuyama, E., and R. Madariaga, Dynamic rupture of a planar fault in 3D: friction and rupture initiation (abs.), Seismol. Res. Lett., 68, 329-330, 1997.Hough, S. E., Empirical Green’s function analysis: taking the next step, J. Geophys. Res., 102, 5369-5384, 1997.Mori, J. and H. Kanamori, Initial rupture of earthquakes in the 1995 Ridgecrest, California sequence, Geophys.Res. Lett., 23, 2437-2440, 1996.Bibliography of Work Completed Under this Grant1996 Beroza, G. C., and W. L. Ellsworth, Seis mic observation of earthquake initiation: insights and possible pitfalls (INVITED), ESC meeting, Reykjavik, Iceland.1996 Ellsworth, W. L., Observation of the earthquake initiation process at Ridgecrest, California, EOS Trans.AGU, 77, F481.1996 Dodge, D. A., G. C. Beroza, and W. L. Ellsworth, Detailed observations of California foreshock sequences: implications for the earthquake initiation process, J. Geophys. Res., 101, 22,371-22,392.1997 Beroza, G. C., and W. L. Ellsworth, Constraints on earthquake nucleation from seismology (INVITED), EOS Trans. AGU, 78, S208.1998 Ellsworth, W. L. and G. C. Beroza, Observation of the seismic nucleation phase in the Ridgecrest, California earthquake sequence, Geophys. Res. Lett., 25, 401-404.earthquake nucleation phaseW. L. Ellsworth, and G. C. Beroza, Seismic evidence for an earthquake nucleation phase, Science, 268, 851-855, 1995.G. C. Beroza, and W. L. Ellsworth, Properties of the seismic nucleation phase, Tectonophysics, 261, 209-227, 1996.W. L. Ellsworth, and G. C. Beroza, Observations of the seismic nucleation phase in the Ridgecrest, California earthquake sequence, Geophys. Res. Lett., 25, 401-404, 1998.D. A. Dodge, G. C. Beroza, and W. L. Ellsworth, The foreshock sequence of the 1992 Landers, California earthquake and its implications for earthquake nucleation, J. Geophys. Res., 100, 9865-9880, 1995.Y. Iio, Slow initial phase of the P-wave velocity pulse generated by microearthquakes, Geophys. Res. Lett., 19, 477-480.Y. Iio, Observations of the sloe initial phase generated by microearthquakes: implications for earthquake nucleation and propagation, J. Geophys. Res., 100, 15337-15349, 1995.The great, deep 1994 Bolivia earthquakeSource Parameters - up to 2 presentationsP. F. Ihmle, On the interpretation of subevents in teleseismic waveforms: the 1994 Bolivia deep earthquake revisited, J. Geophys. Res., 103, 17919-17932, 1998.P. F. Ihmle, and T. H. Jordan, Source time function of the great 1994 Bolivia deep earthquake by waveform and spectral inversion, Geophys. Res. Lett., 22, 2253-2256, 1995.M. Antolik, D. Dreger, and B. Romanowicz, Finite fault source study of the great 1994 deep Bolivia earthquake, Geophys. Res. Lett., 23, 1589-1592, 1996.S. L. Beck, P. Silver, T. C. Wallace, and D. James, Directivity analysis of the deep Bolivian earthquake of June 9, 1994, Geophys. Res. Lett., 22, 2257-2260, 1995.W.-P. Chen, En echelon ruptures during the great Bolivia earthquake of 1994, Geophys. Res. Lett., 22, 2261-2264, 1995.C. H. Estabrook, and G. Bock, Rupture history of the great Bolivian earthquake: slab interaction with 660-km discontinuity? Geophys. 
Res. Lett., 22, 2277-2280, 1995.S. Goes, and J. Ritsema, A broadband P wave analysis of the large deep Fiji Island and Bolivia earthquakes of 1994, Geophys. Res. Lett., 22, 2249-2252, 1995.M. Kikuchi, and H. Kanamori, The mechanism of the deep Bolivia earthquake of June 9, 1994, Geophys. Res. Lett., 21, 2341-2344, 1994.S. H. Kirby, E. A. Okal, and E. R. Engdahl, The 9 June 94 Bolivian deep earthquake: an exceptional event in an extraordinary subduction zone, Geophys. Res. Lett., 22, 2233-2236, 1995.P. Lundgren, and D. Giardini, The June 9 Bolivia and March 9 Fiji deep earthquakes of 1994, I, source processes, Geophys. Res. Lett., 22, 2249-2252, 1995.J. Wu, T. C. Wallace, and S. Beck, A very broadband study of the 1994 deep Bolivia earthquake sequence, Geophys. Res. Lett., 22, 2237-2240, 1995.Rupture Mechanics and AftershocksS. C. Myers, T. C. Wallace, S. L. Beck, P. G. Silver, G. Zandt, J. Vandecaar, and E. Minaya, Implications of spatial and temporal development of the aftershock sequence for the Mw 8.3 June 9, 1994, deep Bolivian earthquake, Geophys. Res. Lett., 22, 2269-2272, 1995.H. Kanamori, D. L. Anderson, and T. H. Heaton, Frictional melting during rupture of the 1994 Bolivia earthquake, Science, 279, 839-842, 1998.P. G. Silver, S. L. Beck, T. C. Wallace, C. Meade, S. C. Myers, D. E. James, and R. Kuehnel,The rupture characteristics of the great, deep Bolivian earthquake of 1994 and the physical mechanism of deep-focus earthquakes, Science, 268, 69-73, 1995.D. A. Wiens, J. J. McGuire, P. J. Shore, M. G. Bevis, K. Draunidalo, G. Prasad, and S. P. Helu,A deep aftershock sequence and implications for the rupture mechanism of deep earthquakes, Nature, 372, 540-543, 1994.silent earthquakes?G. C. Beroza, and T. H. Jordan, Searching for slow and silent earthquakes using free oscillations, J. Geophys. Res., 95, 2485-2510, 1990.Tsunami ModelingK. Satake, Inversion of tsunami waveforms for the estimation of heterogeneous fault motion of large submarine ea rthquakes: the 1968 Tokachi-Oki and 1983 Japan Sea earthquakes, J. Geophys. Res., 94, 5627-5636.synthetic aperture radar -SAR- interferometryT. J. Wright, B. E. Parsons, J. A. Jackson, M. Haynes, E. J. Fielding, P. C. England, and P. J. Clarke, Source Parameters of the 1 October 1995 Dinar (Turkey) earthquake from SAR interferometry and seismic bodywave modelling, Earth and Planet. Science Lett., 172, 23-37, 1999.B. Hernandez, F. Cotton, and M. Campillo, Contribution of radar interferometry to a two-step inversion of kinematic process of the 1992 Landers earthquake, J. Geophys. Res., 104, 13083-13099, 1999.B. Delouis, P. Lundgren, J. Salichon, and D. Giardini, Joint Inversion of InSAR and teleseismic data for the slip history of the Mw=7.4 Izmit (Turkey) earthquake, submitted to Geophys. Res. Lett.K. L. Feigl, A. Sergent, D. Jacq, Estimation of an earthquake focal mechanism from a satellite radar interferogram: application to the December 4, 1992 Landers aftershock, Geophys. Res. Lett., 22, 1037-1040, 1995.aftershocks, body wave amplitude ratios, local earthquakes, grid searchS. Schwartz, Source parameters of aftershocks of the 1991 Costa Rica and 1992 Cape Mendocino, California, earthquakes from inversion of local amplitude ratios and broadband waveforms, Bull. Seis. Soc. Am., 85, 1560-1575, 1995.J. E. Ebel, and K.-P. Bonjer, Moment tensor inversion of small earthquakes in southwestern Germany for the fault plane solution, Geophys. J. Int., 101, 133-146, 1990.earthquake scaling lawsD. L. Wells, and K. J. 
Coppersmith, New empirical relationships among magnitude, rupture length, rupture width, rupture area, and surface displacement, Bull. Seis. Soc. Am., 84, 974-1002, 1994.rupture mechanicsT. H. Heaton, Evidence for and implications of self-healing pulses of slip in earthquake rupture, Phys. Earth Plant. Int., 64, 1-20, 1990.fault geometrical effects on rupture initiation and arrestG. C. P. King, and J. L. Nabelek, Role of fault bends in the initiation and termination of earthquake rupture, Science, 228, 984-987, 1985.G. C. P. King, Speculations on the geometry of the initiation and termination process of earthquake rupture and its relation to morphology and geological structure, Pure and Applied Geophysics, 124, 225-268, 1986.The classic papers about the Harvard CMT methodA. M. Dziewonski, T.-A. Chou, and J. H. Woodhouse, Determination of earthquake source parameters from waveform data for studies of global and regional seismicity, J. Geophys. Res., 86, 2825-2852, 1981.A. M. Dziewonski, and J. H. Woodhouse, An experiment in systematic study of global seismicity: centroid-moment tensor solutions for 201 moderate and large earthquakes of 1981, J. Geophys. Res., 88, 3247-3271, 1983.far field earthquake triggering: the obscure, the enigmatic, the chaotic?D. P. Hill et al., Seismicity in the Western United States remotely triggered by the M 7.4 Landers, California earthquake of June 28, 1992, Science, 1617-1623, 1993.historic earthquake analysisD. J. Wald, H. Kanamori, D. V. Helmberger, and T. H. Heaton, Source Study of the 1906 San Francisco earthquake, Bull. Seis. Soc. Am., 83, 981-1019, 1993.Northridge revisited, teleseismic body wave analysis, regional surface wave analysisH. K. Thio, and H. Kanamori, Source Complexities of the 1994 Northridge earthquake and its relation to aftershock mechanisms, Bull. Seis. Soc. Am., 86, 584-592, 1996.related papers:H. K. Thio, and H. Kanamori, Moment tensor inversions for local earthquakes using surface waves recorded at TERRAscope, Bull. Seis. Soc. Am., 85, 1021-1038, 1995.M. Kikuchi, and H. Kanamori, Inversion of complex body waves, Bull. Seis. Soc. Am., 72, 491-506, 1982.M. Kikuchi, and H. Kanamori, Inversion of complex body waves, III, Bull. Seis. Soc. Am., 81, 2335-2350, 1991.higher order moment tensors - for everybody who thoroughly enjoyed the first lectureD. J. Doornbos, Seismic moment tensors and kinematic source parameters, Geophys. J. R. astr. Soc., 69, 235-251, 1982.。

Steady-state stability of an HVDC system using frequency-response methods
R. Yacamini, M.Sc, C.Eng., M.I.E.E., and A.M.I. Taalab, M.Sc, Ph.D.
Indexing terms: Convertors, Power systems and plant, Power transmission and distribution, Control systems
Abstract: An HVDC scheme is an amalgamation of both AC and DC components, the behaviour of which is often difficult to describe because of the interdependence of many of the parameters. The response of the DC system is dependent on the AC system and vice versa. The basic scheme controls include both rectifier and inverter controls, which are separate but interdependent. The complete scheme could be described as a nonlinear multivariable control system which is sensitive to external disturbances. The paper describes the analysis of such an HVDC scheme using a simulator model and frequency-response techniques. The results described constitute a sensitivity analysis of the system to the principal parameters which affect the steady-state stability, such as the inverter short-circuit ratio and AC-system damping, as well as the sensitivity to control parameter changes.
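(A small illustrative sketch, not part of the original paper: it relates the inverter short-circuit ratio mentioned above to the Thevenin equivalent commonly used to represent an AC network in simulator studies. It assumes SCR = S_sc/P_dc, |Z| = V_LL^2/S_sc, and one common convention that the damping angle is measured from the purely inductive (90 degree) axis, so Z = |Z| at an angle of (90 deg − damping angle); the bus voltage, DC power and damping-angle figures are hypothetical.)

```python
# Hedged sketch: Thevenin representation of an AC network from its short-circuit
# ratio (SCR) and damping angle. The SCR and |Z| relations are standard; the
# damping-angle convention below is an assumption, not taken from the paper.
import cmath
import math

def thevenin_impedance(v_ll_kv, p_dc_mw, scr, damping_angle_deg):
    """Thevenin impedance (ohms) of an AC network characterised by its
    short-circuit ratio and damping angle, under the assumptions stated above."""
    s_sc_mva = scr * p_dc_mw              # short-circuit level at the convertor bus
    z_mag = (v_ll_kv ** 2) / s_sc_mva     # kV^2 / MVA gives ohms directly
    angle_rad = math.radians(90.0 - damping_angle_deg)   # assumed convention
    return cmath.rect(z_mag, angle_rad)

if __name__ == "__main__":
    # Hypothetical inverter-end figures: 400 kV bus, 1000 MW link, 10 deg damping angle
    for scr in (2.5, 3.5, 5.0):
        z = thevenin_impedance(400.0, 1000.0, scr, damping_angle_deg=10.0)
        print(f"SCR {scr}: |Z| = {abs(z):.1f} ohm, angle = {math.degrees(cmath.phase(z)):.0f} deg")
```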
Paper 2585C, first received 23rd November 1982 and in revised form 18th April 1983.
Mr. Yacamini is with the Department of Engineering, University of Aberdeen, Marischal College, Aberdeen AB9 1AS, Scotland, and Dr. Taalab is with the Faculty of Engineering and Technology, Menofia University, Shebin El-Kom, Egypt. Both authors were formerly ...
1 Introduction
Several authors have, in the past, described the stability analysis of various parts of an HVDC scheme, notably the controls and the convertors. These methods have normally used computer programs to calculate the effect of different parameters and, as such, have involved simplifications of a mathematical or analytical type. The literature contains no analysis of an HVDC scheme as an entity, where the entire scheme, including the AC systems and the DC line, has been subjected to stability analysis. An HVDC scheme is a large-scale, multivariable system which is both nonlinear and subject to external disturbances. As such, the analysis becomes extremely complicated and uncertain. Modelling techniques in which all the controls, all the nonlinearities and all the disturbances can be included simultaneously are therefore of considerable value in determining stability. With such a model there is no need to make approximations, other than those which are inherent in any model, and these can be virtually eliminated by experience and good technique.

This paper therefore describes, for the first time, the application of frequency-response methods to an HVDC scheme. The use of these techniques gives information about the system which, in effect, transforms it into a scalar model. The paper highlights the effect of the impedance and damping angle of the networks into which the HVDC link is feeding, by carrying out a parameter sensitivity analysis. (Descriptions of the methods used have recently been collected in the publication by Macfarlane [1], where some 40 papers covering the entire field are included.)

The measurements were carried out on an HVDC simulator. The closed-loop frequency response was deduced for a transmission system consisting of a rectifier with constant current (CC) control, an inverter with constant extinction angle (CEA) control, and a DC transmission cable. The open-loop frequency response was found from the closed-loop response using a Nichols' chart, and the Nyquist diagram of the open-loop response was plotted. The effect of varying different system parameters on the relative stability of the DC transmission system can then be assessed by plotting a Nyquist diagram with only one of the many variables changed at a time. An accurate Nyquist diagram can be used to determine the range of gain values over which the system will be stable, and the frequency (or frequencies) at which it will tend to oscillate as it becomes unstable.

Step-response tests were also carried out to show the effect of different system parameters on the closed-loop transient response; in particular, on the control amplifier circuit parameters at the design stage. When it came to optimisation of control parameters, small variations of some of these parameters had little measured effect on the step response; however, the effect could be distinguished using frequency-response measurements, thus suggesting that frequency-response techniques should be used in conjunction with the step response to optimise control parameters.

In a large multivariable system, such as an HVDC scheme, there are many system parameters which can be varied; consequently, numerous cases can be studied. However, the parameters chosen as being of interest for this paper were the damping angle ψ of the AC network into which the DC transmission is feeding, the inverter AC system impedance Zs, the rectifier firing angle α, the inverter extinction angle γ, the DC line length and the derivative feedback gain of the rectifier current control. A summary of the methods previously used to assess system steady-state stability and a description of the method used for this paper are given in the following two Sections.
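(A minimal numerical sketch of the closed-loop to open-loop conversion described above. It assumes a unity-feedback loop, so that M(jw) = G(jw)/(1 + G(jw)) and hence G(jw) = M(jw)/(1 − M(jw)); the paper performs the equivalent step graphically with a Nichols' chart, whereas this sketch does it numerically. The third-order loop, its time constants and the gain K are invented for illustration and are not the simulator's measured data.)

```python
# Recover the open-loop frequency response from closed-loop measurements and
# read off approximate gain and phase margins (numerical stand-in for the
# Nichols chart / Nyquist diagram procedure described in the Introduction).
import numpy as np

def open_loop_from_closed_loop(M):
    """Recover G(jw) from closed-loop data M(jw), assuming unity feedback."""
    return M / (1.0 - M)

def gain_phase_margins(G, w):
    """Rough gain and phase margins from sampled open-loop data."""
    mag = np.abs(G)
    phase = np.unwrap(np.angle(G))            # radians, unwrapped
    i_gc = np.argmin(np.abs(mag - 1.0))       # gain crossover: |G| = 1
    pm_deg = 180.0 + np.degrees(phase[i_gc])
    i_pc = np.argmin(np.abs(phase + np.pi))   # phase crossover: arg(G) = -180 deg
    gm_db = -20.0 * np.log10(mag[i_pc])
    return gm_db, pm_deg, w[i_gc], w[i_pc]

if __name__ == "__main__":
    # Synthetic closed-loop response of a hypothetical third-order loop with
    # gain K, standing in for the measured rectifier current-control loop.
    w = np.logspace(-1, 3, 2000)              # rad/s
    s = 1j * w
    K = 8.0                                   # hypothetical loop gain
    G_true = K / ((1 + 0.05 * s) * (1 + 0.01 * s) * (1 + 0.002 * s))
    M_meas = G_true / (1 + G_true)            # what a closed-loop test would see

    G_est = open_loop_from_closed_loop(M_meas)
    gm, pm, w_gc, w_pc = gain_phase_margins(G_est, w)
    print(f"gain margin  ~ {gm:.1f} dB at {w_pc:.1f} rad/s")
    print(f"phase margin ~ {pm:.1f} deg at {w_gc:.1f} rad/s")
```

Repeating such a calculation with only one parameter changed at a time mirrors the sensitivity procedure described above: the shift in the Nyquist locus, or in the margins, indicates how that parameter affects the relative stability.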