

Foreign Direct Investment, Technology Sourcing and Reverse Technology Spillovers [Translated Paper]


Original Text

FOREIGN DIRECT INVESTMENT, TECHNOLOGY SOURCING AND REVERSE SPILLOVERS
Material Source: The Manchester School, Vol. 71, No. 6, December 2003
Authors: NIGEL DRIFFIELD, Business School, University of Birmingham, and JAMES H. LOVE, Aston Business School, Aston University

Recent theoretical work points to the possibility of foreign direct investment motivated not by ‘ownership’ advantages which may be exploited by a multinational enterprise but by the desire to access the superior technology of a host nation through direct investment. To be successful, technology sourcing foreign direct investment hinges crucially on the existence of domestic-to-foreign technological externalities within the host country. We test empirically for the existence of such ‘reverse spillover’ effects for a panel of UK manufacturing industries. The results demonstrate that technology generated by the domestic sector spills over to foreign multinational enterprises, but that this effect is restricted to relatively research and development intensive sectors. There is also evidence that these spillover effects are affected by the spatial concentration of industry, and that learning-by-doing effects are restricted to sectors in which technology sourcing is unlikely to be a motivating influence.

1 Introduction

Traditional models of foreign direct investment (FDI) have been heavily influenced by a framework which suggests that where a company has some ‘ownership’ (i.e. competitive) advantage over its rivals and where, for reasons of property rights protection, licensing is unsafe, a company will set up production facilities in a foreign country through FDI (Dunning, 1988). Since much of the discussion of ownership advantages is couched in terms of technology and/or management expertise, there is a strong a priori assumption that this ‘technology exploiting’ FDI will be an important method by which technology is transferred internationally.
Indeed, there is a growing literature concerned with the extent to which FDI contributes to technological advance in host countries. Much of this analysis is based on estimations of externalities from inward FDI, with the evidence generally pointing towards positive effects of FDI on domestic productivity (Blomström and Kokko, 1998). However, the literature is increasingly turning to the possibility that FDI may be influenced by multinational firms’ desire not to exploit an existing ownership advantage abroad but to acquire technology from the host country, i.e. that ‘technology sourcing’ may be the motive for FDI. Kogut and Chang (1991) and Neven and Siotis (1996) point out that this possibility has exercised the minds of policy-makers in the USA and the EU, with concerns that host economies’ technological base may be undermined by technology sourcing by Japanese and US corporations respectively. These studies examine the effects of host versus home country research and development (R&D) expenditure differentials on FDI flows between Japan and the USA and the USA and the EU respectively. Both studies find a positive relationship between these measures, and interpret this as evidence of technology sourcing. The literature on the internationalization of R&D also contains an increasing amount of evidence that technology sourcing may be a motive for FDI (Cantwell, 1995; Cantwell and Janne, 1999; Pearce, 1999). This literature stresses a range of reasons for FDI in R&D, much of which is concerned with the relative technological strengths of the capital exporting (i.e. ‘home’) firm or country versus that of the host. For example, Kuemmerle (1999) distinguishes between ‘home-base exploiting’ FDI and ‘home-base augmenting’ FDI. The former is undertaken in order to exploit firm-specific advantages abroad, while the latter is FDI undertaken to access unique resources and capture externalities created locally.
And in an analysis of inward and outward FDI in 13 industrialized countries, van Pottelsberghe de la Potterie and Lichtenberg (2001) find positive spillover effects from outward FDI arising from accessing the R&D capital stock of host countries, leading them to conclude that FDI flows are predominantly technology sourcing in nature. Recent theoretical work represents an important step forward in this area, with Fosfuri and Motta (1999) and Siotis (1999) both presenting formal models of the FDI decision which embody the possibility of technology sourcing. They show that a firm may choose to enter a market by FDI in order to access positive spillover effects arising from close locational proximity to a technological leader in the host country. Because of the externalities associated with technology, these spillovers decrease the production costs of the investing firm both in its subsidiary operations and in its home production base. Siotis (1999) also shows that the presence of spillovers may induce firms to invest abroad even where exporting costs are zero. The theoretical and empirical work reviewed above hinges crucially on the assumption that foreign firms investing in a host economy are able to capture spillover effects from the domestic (host) industry. The purpose of this paper is to test for the existence of this ‘reverse spillover’ effect for a panel of UK industries. If there is some evidence of productivity spillovers running from the domestic to the foreign sector of UK industry, this would suggest that the necessary condition for technology sourcing FDI does exist in practice.
In addition to testing empirically for reverse spillover effects we also test for two elements which are implicit in the theoretical analysis: first, that the spatial concentration of production has an effect on productivity spillovers; and second, that learning-by-doing effects are linked to the investing motivations of foreign firms.

2 THE MOTIVATION FOR FDI, SPILLOVERS AND FIRM GROWTH

Fosfuri and Motta (1999) present a simple model in which two local (i.e. single country) firms are endowed with different technologies and are given the option of exporting to the other country, engaging in FDI or not entering. They show formally that an investing firm which is a technological laggard (i.e. has unit costs of production above those of its competitor) will find it profitable to invest abroad despite having an efficiency disadvantage, as long as the probability of acquiring the leader’s technology through productivity spillovers is sufficiently high. In other words, ‘technology sourcing’ rather than ‘technology exploiting’ FDI may occur. Siotis (1999) develops a similar model, but allows for the possibility of two-way spillovers between foreign and domestic firms. He too finds theoretical support for technology sourcing as a motivation for FDI. It seems plausible that the probability of benefiting from productivity spillovers will at least in part be dependent on the actions of the firms concerned, and that the scope for spillovers, particularly in the context of technology sourcing investment, will vary with the research efforts of domestic firms. Thus technology sourcing is most likely to occur where the scope for productivity externalities to be assimilated by foreign firms is greatest; this in turn is a positive function of the R&D intensity of domestic industry.
We therefore anticipate reverse spillover effects being most apparent in those sectors in which domestic industry has invested heavily in R&D; these are the sectors in which the probability of acquiring technology through spillovers is greatest and in which technology sourcing FDI is most likely to occur. However, traditional explanations for FDI based on the exploitation of firm-specific ‘ownership’ advantages should not be ignored. Siotis (1999) shows that where a foreign firm has an ownership (i.e. efficiency) advantage relative to domestic firms, FDI will only occur if spillovers are likely to be small (the ‘dissipation effect’). We therefore anticipate technology exploiting FDI to be most likely where there is little scope for reverse spillovers, i.e. where domestic industry does not invest heavily in R&D. Reverse spillover effects should therefore be most evident in relatively research intensive sectors, but absent or less evident in sectors which are relatively non-research intensive. Two further and related hypotheses can also be tested. The first relates to the growth paths exhibited by firms that have different motivations for FDI. To the extent that it is possible to make the distinction between technology sourcing and technology exploiting FDI, it is also likely that the patterns of development arising from these forms of investment will be different. This is likely to be important in the study of the development of total factor productivity in the foreign owned sector, following the theory of the multinational enterprise dating back to Dunning (1958) and more explicitly outlined in the seminal papers by Vernon (1966), Buckley and Casson (1976) and Dunning (1979). The traditional explanation of the existence of multinational enterprises is that firms transfer firm-specific assets across national boundaries while keeping them internalized within the firm (technology exploiting FDI).
Firms operating in the foreign country then have to undertake the process of adapting this technology to a new environment, to take account of local working practices, available human capital and customers’ tastes, for example. This is neither costless nor instantaneous, and so total factor productivity of foreign investment motivated in this ‘traditional’ manner is likely to demonstrate experience effects and significant learning-by-doing effects. By contrast, firms motivated by technology sourcing are less likely to undergo this adaptation of internal technology: their concern is not with adapting existing technology but with assimilating knowledge generated externally, in this case by local firms. Of course, in some cases the extent of adaptation by technology exploiting firms may be minimal in certain markets, while technology sourcing subsidiaries may undergo some degree of adaptation, so that the relative extent of learning by doing is ultimately an empirical issue. On balance, however, we expect significant learning-by-doing effects among technology exploiting foreign firms, but perhaps not in the technology sourcing firms, where spillovers from domestic investments are likely to contribute more to total factor productivity in the foreign sector. The second subsidiary hypothesis relates to the extent to which technological externalities are constrained spatially. The theoretical analysis of Siotis (1999) depends on the existence of geographically localized spillovers to provide an incentive for technology sourcing FDI; Fosfuri and Motta (1999) also acknowledge this geographical dimension to spillovers. Empirically, there is significant evidence that technology spillovers are indeed limited geographically within countries, as well as between them (Head et al., 1995; Driffield, 1999).
This suggests that reverse spillovers may be linked to the spatial distribution of industry; we therefore test whether the spatial concentration of production has an effect on the scale of productivity spillovers running from domestic to foreign industry.

Translated Text

Foreign Direct Investment, Technology Sourcing and Reverse Technology Spillovers
Material Source: The Manchester School, Vol. 71, No. 6
Authors: Nigel Driffield, Business School, University of Birmingham; James H. Love, Aston Business School, Aston University

Recent theoretical research suggests that the motive for foreign direct investment may be not an ‘ownership’ advantage but the multinational enterprise’s desire to actively draw on the host country’s superior technology through direct investment.
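The reverse-spillover test described above — foreign-sector productivity regressed on domestic R&D activity, with the spillover allowed to vary with spatial concentration — can be illustrated with a toy regression. This is a minimal sketch on synthetic data: the variable names and the simple OLS-with-interaction form are illustrative assumptions, not the authors' actual panel specification.

```python
import numpy as np

def spillover_regression(tfp_foreign, rd_domestic, concentration):
    """OLS of foreign-sector TFP on domestic R&D intensity, spatial
    concentration, and their interaction -- a stylised version of the
    reverse-spillover test. A positive interaction coefficient would
    say the spillover is stronger where industry is more concentrated.
    Illustrative only; the paper's model controls for much more."""
    n = len(tfp_foreign)
    X = np.column_stack([
        np.ones(n),                      # intercept
        rd_domestic,                     # domestic R&D intensity
        concentration,                   # spatial concentration index
        rd_domestic * concentration,     # spillover scaled by agglomeration
    ])
    beta, *_ = np.linalg.lstsq(X, tfp_foreign, rcond=None)
    return beta  # [const, R&D, concentration, interaction]
```

On synthetic data generated with known coefficients, the estimator recovers them; with real industry panels one would add fixed effects and standard controls.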

Practical Academic Writing in English, Chapter 11: How to Use Chinese Language Data in English Papers


• Because Chinese differs from English in many respects of syntactic, semantic and phonological typology — for example, Chinese has tones, classifiers, and lexical markers of tense and aspect (such as 着, 了 and 过) — Chinese examples that are not rendered appropriately will not only be unintelligible to English readers but will also undermine the persuasiveness of the examples themselves.
(1) Representing tones with numbers: three parallel versions. For example, to use the sentence 小鸭子摇摇摆摆地走路 ("the little duck waddles along") in an English paper, it can be presented as: Xiao3 ya1zi yao2yao2bai3bai3 de zou3 lu4. Little duck waddlingly DE walk road. The duck is waddling, or The ducks are waddling, or Ducks waddle.
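When preparing such numbered-pinyin glosses, the tone-number string can be split into syllable–tone pairs mechanically. The helper below is a hypothetical sketch (numbered-pinyin conventions vary between authors); it treats a syllable carrying no digit, like the particle "de", as neutral tone 0.

```python
import re

def parse_numbered_pinyin(text):
    """Split a tone-number pinyin string such as 'Xiao3 ya1zi' into
    (syllable, tone) pairs. A run of letters ends at its tone digit
    (1-5); a syllable without a digit is returned with tone 0
    (neutral). Illustrative helper, not a standard tool."""
    pairs = []
    for token in text.split():
        # letters followed by an optional tone digit, repeated per token
        for syllable, tone in re.findall(r'([A-Za-z]+)([1-5]?)', token):
            pairs.append((syllable.lower(), int(tone) if tone else 0))
    return pairs
```

For instance, `parse_numbered_pinyin('Xiao3 ya1zi')` yields `[('xiao', 3), ('ya', 1), ('zi', 0)]`, which can then feed a gloss-formatting step.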

In light of G. Leech's theory of language functions, Wang Dongfeng (王东风) (1991: 28-31) studies translation shifts, discussing translation strategies from the functional perspective in terms of informative, expressive, conative, aesthetic and phatic language. Although Wang's study sheds light on some operational translation strategies, it confines itself to the intra-textual level (sentences or sentence groups) and does not rise to the textual level.

English Composition: Proving a Claim


English answer:

In the tapestry of life, threads of skepticism and doubt intertwine with the vibrant hues of belief and conviction. As our minds navigate the labyrinthine paths of knowledge, we encounter countless propositions, each begging for our assent. But how do we discern truth amidst a sea of assertions? How do we prove that a particular claim is worthy of our unwavering belief?

The scientific method, a beacon of empirical inquiry, offers a rigorous framework for evaluating the veracity of scientific hypotheses. Through observation, experimentation, and rigorous data analysis, scientists meticulously gather evidence to support or refute their theories. By allowing rival explanations to compete on an equal footing, science fosters an intellectual discourse from which truth emerges under rigorous scrutiny.

In the realm of philosophy, logical reasoning provides a powerful tool for establishing the truth of arguments. Through deductive and inductive arguments, philosophers construct intricate chains of logic that lead to well-founded conclusions. By analyzing the premises and the validity of the inferences, we can determine whether an argument is sound or fallacious, inching us closer to the elusive truth.

Yet, beyond the confines of scientific and philosophical inquiry, the human experience encompasses a myriad of subjective truths that defy empirical verification. In matters of art, aesthetics, and personal beliefs, the truth often lies in the eye of the beholder. Emotional resonance, intuitive understanding, and cultural context shape our perceptions of what is true and meaningful in these realms.

In the tapestry of truth, there are hues that transcend the limitations of reason and logic, reaching into the depths of our hearts and souls. Truths of compassion, empathy, and forgiveness resonate within us, guiding our actions and shaping our humanity.
These truths, though subjective and elusive, nevertheless possess an undeniable power to transform our lives and nurture the bonds that unite us.

As we navigate the complexities of life, may we embrace a healthy skepticism that prompts us to question and seek evidence. Let us employ reason and logic as our allies, but let us also remain open to the truths that lie beyond the confines of empirical verification. For in the tapestry of truth, there are vibrant threads that defy easy categorization, threads that speak to the fullness of our human experience.

Chinese answer: In the long scroll of life, threads of skepticism and doubt are interwoven with the vibrant colors of belief and conviction.

Individualism - the Core of American Cultural Values


Ⅰ Introduction

In recent years, against the background of globalization, intercultural communication has become more and more frequent. When we come across strangers from different cultures, we often experience culture shock, because people from different cultures are endowed with different values, as with Western culture and Chinese culture. As intercultural contact between Chinese and Western people grows more frequent, cultural conflicts arise repeatedly because of differences in values, ethics and morality, and national customs among countries. Such conflicts often produce misunderstandings and embarrassing situations, seriously hindering successful interaction and friendship between nations. It is therefore necessary to uncover the deeper reasons for these conflicts and take measures to develop intercultural communication competence. Avoiding cultural conflict makes communication between nations freer and more relaxed, turning it into something everyone enjoys rather than fears.

Hofstede empirically derived four dimensions of cultural variability in his large-scale study of a U.S. multinational business corporation, the first and most important of which is individualism-collectivism. Western culture upholds individualism, while Chinese culture advocates collectivism. This paper analyzes the contrast between Western individualist culture and Chinese collectivist culture and their influence on intercultural communication.

Ⅱ The Origin of Individualism

The origin of individualism can be traced back to the beginning of American history, when the first American immigrants came to the North American continent looking for a better life and shaking off the yoke of European feudal tradition and the oppression of powerful classes.
This history made resistance to oppression and the pursuit of freedom defining elements of the American character. In December 1620, with snow already flying, the now celebrated Mayflower dropped anchor off Cape Cod. She had on board only fifty men, twenty women and thirty-four children. One would have said that these weak and weary Pilgrims could not succeed, but succeed they did in a remarkably short time. These Puritans had set out on the wooden sailing ship Mayflower hoping to live in freedom and realize their dreams, and after 66 days they went ashore on the new continent. In the following days, 41 of the adult men signed the famous Mayflower Compact, opening a new way of life: the compact, rather than supervision by authority, became the criterion for maintaining public order. People therefore lived for themselves rather than for somebody or something else, and they were the masters of their government, which represents the core of American cultural values.

Ⅲ Definition and Explanation of Western Individualism

Individualism, which has various explanations in the Encyclopædia Britannica, is a moral, political, and social philosophy emphasizing individual liberty, the primary importance of the individual, and personal independence. In essence, American individualism takes humanism as its guiding ideology, emphasizes the independence of individual personality, advocates human equality, and holds that everyone has free will and that others should not impose their will upon a person unless individual interests endanger public interests. Reflected in social life, it means respecting people's personalities and acknowledging everyone's right to choose a life style different from others'. Reflected in the cultural domain, it means self-confidence, self-improvement, perseverance, enterprise, a spirit of personal striving, and audacious originality that does not rigidly adhere to tradition.
In a word, individualism has two aspects: the pursuit of individual interests and, at the same time, the bearing of one's own responsibility. Individualism is rooted in Western individual culture. After ages of historical change, individual culture has infiltrated all fields of Western society and become its main ideology. The birth of Puritanism is attributable to the Reformation in Europe, which broke with the old rigid Christianity and liberated the individual to a large degree. Puritanism inherited the achievement of the Reformation in two main respects: setting the individual free from the strict discipline of the Roman church, and setting the individual free from its strict ideological control.

Ⅳ Chinese Collectivism

"Collective" was first put forward by Stalin in July 1934, in a conversation with the English writer H. G. Wells. He said, "Collectivism and socialism don't gainsay personal interests, but combine personal interests with group interests…" He added in the conversation, "There is not, and should not be, an irreconcilable opposition between the individual and the collective, between individual interests and collective interests. This opposition should not exist, because collectivism and socialism don't deny individual interests, but combine individual interests with collective interests. Socialism would not leave aside individual interests; only a socialist society can satisfy individual interests to the fullest degree. Moreover, socialism is the only dependable guarantee of individual interests."

Ⅴ Definition and Explanation of Chinese Collectivism

Collectivism is a doctrine and spirit holding that the individual belongs to society and that individual interests should be subordinate to collective, racial, class and national interests. Its highest standard is that all opinions and actions accord with the people's collective interests, and when conflict arises between them, individual interests should yield to collective interests.
A member of the proletariat should act with the Marxist outlook as the guide, with collectivism as the principle, and with the realization of socialism and communism as the goal. Chinese culture stems from Confucianism, Taoism, and Buddhism. The value goal that Confucianism pursues is collective interests, and Confucianism plays a crucial role in forming Chinese people's minds. Chinese collectivism holds that the collective is the foundation for the existence of the individual: collective interests are the interests of the individual, collective values are the values of the individual, and, what is more, the collective will is the will of the individual.

Ⅵ Manifestation of Differences between Western Individualism and Chinese Collectivism

The world can be divided in various ways: rich and poor, democracy and despotism, and, most strikingly, individualist-culture nations and collectivist-culture nations. This last difference shapes people's cognition of the world more profoundly than economics does. If you show a fish-bowl picture to an American, the American will usually describe the largest of the fish and what it is doing. If a Chinese is asked to describe the same fish bowl, he or she will usually describe the context around the fish. Such tests have been run many times, and the results imply the same thing: Westerners tend to see the individual, while Chinese and other Asians tend to see the context. When the psychologist Richard Nisbett showed pictures of a chicken, a cow and grass to Westerners and Asians and asked them to pick the two that go together, Westerners tended to group the chicken with the cow (both are animals), while Asians tended to group the cow with the grass (the cow eats the grass). Westerners tend to attend to categories, but Chinese tend to attend to relations. The different value dimensions people hold lead to a series of cultural conflicts in cross-cultural communication, and as the core dimension, individualism-collectivism has the most important influence on the formation of these conflicts.
As far as I am concerned, the manifestation of the differences between Western individualist culture and Chinese collectivist culture can be summarized in four main aspects as follows:

1. Personal privacy

Personal privacy is highly respected and protected by law in individualist-culture countries. Anyone who infringes on another's personal privacy out of profit, curiosity or malice will be punished. For instance, one must telephone before paying a visit, even to friends or relatives, and no one may step into another's house without permission. In contrast, Chinese people do not place much emphasis on personal privacy, because we believe that the individual belongs to society and we need to care about others. For instance, Chinese people often ask the age, marital status, occupation and income of people they meet for the first time, which is seen as a way of showing politeness.

2. Personal freedom and equality

In Western social values, equal opportunity and personal freedom are tightly linked with individualism, without which individualism would be an unrealistic idea. The concept of freedom also shapes Westerners' attitude to life, as can clearly be seen in their communication style: they express their thoughts directly if they are not satisfied with something. In China, by contrast, people are so reserved that we often do not express our true wishes directly, in order to maintain a harmonious atmosphere.

3. Self-reliance

Achieving both financial and emotional independence from parents as early as possible is crucial for individualists. Westerners believe they should take care of themselves, solve their own problems, and "stand on their own two feet". In China, however, a strong collectivist belief shapes our pattern of life: parents want to prepare everything for their children, even after the children have become adults.
As a result, it may take Chinese children more time to achieve self-reliance.

4. Self-expression

Westerners are seen as people brave enough to express themselves, fond of being the center of attention, willing to take risks and curious about everything in the world. Chinese culture, by contrast, advocates modesty, which results in the reserved character of Chinese people and their reluctance to express themselves.

Ⅶ Conclusion

At present, with rapid economic growth, cross-cultural communication between China and Western countries has become ever more necessary and frequent. During the process of communication, however, cultural conflicts occur here and there and seriously affect its development. That is because people with different cultural values (Western culture upholds individualism and Chinese culture advocates collectivism) hold different beliefs. What we can do to lessen cultural barriers is to learn more skills and develop intercultural communication ability.

English Composition: Science's Exploration of Truth


Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. It is a quest for truth that has been pursued by humankind for centuries, and it has led to remarkable advancements in our understanding of the world around us.

The pursuit of scientific truth begins with observation. Scientists observe phenomena in the natural world and use these observations to formulate hypotheses: tentative explanations for why things occur. These hypotheses are then tested through experimentation, a process that involves manipulating variables to see how they affect the outcome of an event.

One of the key principles of scientific exploration is the reliance on empirical evidence. Empirical evidence refers to information that is derived from observation and experimentation. It is the foundation upon which scientific theories are built. Without empirical evidence, a hypothesis remains unproven and cannot be accepted as a valid explanation.

The scientific method also emphasizes the importance of objectivity. Scientists strive to eliminate personal biases and preconceived notions from their research to ensure that their findings are accurate and reliable. This commitment to objectivity is what sets scientific inquiry apart from other forms of knowledge acquisition.

As scientists gather data and analyze their results, they may develop theories that explain the phenomena they are studying. A scientific theory is a well-substantiated explanation of some aspect of the natural world that is based on a body of facts that have been repeatedly confirmed through observation and experimentation. Theories are subject to change as new evidence emerges, but they provide a framework for understanding complex phenomena.

The pursuit of scientific truth is an ongoing process. As new technologies and methodologies become available, scientists are able to ask more sophisticated questions and delve deeper into the mysteries of the universe. This
continuous refinement of our understanding is what drives the advancement of knowledge and fosters innovation.

Moreover, the quest for scientific truth has practical implications for society. Scientific discoveries have led to the development of new technologies, improved medical treatments, and a better understanding of the environment. These advancements have the potential to improve the quality of life for people around the world.

In conclusion, the pursuit of scientific truth is a vital endeavor that has the power to transform our understanding of the world and improve the human condition. By adhering to the principles of observation, empirical evidence, and objectivity, scientists are able to uncover the mysteries of the universe and contribute to the collective knowledge of humankind.

Capital Structure and Firm Performance [Translated Paper]


Translated Paper

Capital Structure and Firm Performance
Material Source: Board of Governors of the Federal Reserve System
Author: Allen N. Berger

Agency costs represent important problems in corporate governance in both financial and nonfinancial industries. The separation of ownership and control in a professionally managed firm may result in managers exerting insufficient work effort, indulging in perquisites, choosing inputs or outputs that suit their own preferences, or otherwise failing to maximize firm value. In effect, the agency costs of outside ownership equal the lost value from professional managers maximizing their own utility, rather than the value of the firm.

Theory suggests that the choice of capital structure may help mitigate these agency costs. Under the agency costs hypothesis, high leverage or a low equity/asset ratio reduces the agency costs of outside equity and increases firm value by constraining or encouraging managers to act more in the interests of shareholders. Since the seminal paper by Jensen and Meckling (1976), a vast literature on such agency-theoretic explanations of capital structure has developed (see Harris and Raviv 1991 and Myers 2001 for reviews). Greater financial leverage may affect managers and reduce agency costs through the threat of liquidation, which causes personal losses to managers of salaries, reputation, perquisites, etc. (e.g., Grossman and Hart 1982, Williams 1987), and through pressure to generate cash flow to pay interest expenses (e.g., Jensen 1986).
Higher leverage can mitigate conflicts between shareholders and managers concerning the choice of investment (e.g., Myers 1977), the amount of risk to undertake (e.g., Jensen and Meckling 1976, Williams 1987), the conditions under which the firm is liquidated (e.g., Harris and Raviv 1990), and dividend policy (e.g., Stulz 1990). A testable prediction of this class of models is that increasing the leverage ratio should result in lower agency costs of outside equity and improved firm performance, all else held equal. However, when leverage becomes relatively high, further increases generate significant agency costs of outside debt – including higher expected costs of bankruptcy or financial distress – arising from conflicts between bondholders and shareholders.[1] Because it is difficult to distinguish empirically between the two sources of agency costs, we follow the literature and allow the relationship between total agency costs and leverage to be non-monotonic.

Despite the importance of this theory, there is at best mixed empirical evidence in the extant literature (see Harris and Raviv 1991, Titman 2000, and Myers 2001 for reviews). Tests of the agency costs hypothesis typically regress measures of firm performance on the equity capital ratio or other indicator of leverage plus some control variables. At least three problems appear in the prior studies that we address in our application.

First, the measures of firm performance are usually ratios fashioned from financial statements or stock market prices, such as industry-adjusted operating margins or stock market returns. These measures do not net out the effects of differences in exogenous market factors that affect firm value, but are beyond management's control and therefore cannot reflect agency costs. Thus, the tests may be confounded by factors that are unrelated to agency costs.
As well, these studies generally do not set a separate benchmark for each firm's performance that would be realized if agency costs were minimized.

We address the measurement problem by using profit efficiency as our indicator of firm performance. The link between productive efficiency and agency costs was first suggested by Stigler (1976), and profit efficiency represents a refinement of the efficiency concept developed since that time.[2] Profit efficiency evaluates how close a firm is to earning the profit that a best-practice firm would earn facing the same exogenous conditions. This has the benefit of controlling for factors outside the control of management that are not part of agency costs. In contrast, comparisons of standard financial ratios, stock market returns, and similar measures typically do not control for these exogenous factors. Even when the measures used in the literature are industry adjusted, they may not account for important differences across firms within an industry – such as local market conditions – as we are able to do with profit efficiency. In addition, the performance of a best-practice firm under the same exogenous conditions is a reasonable benchmark for how the firm would be expected to perform if agency costs were minimized.

Second, the prior research generally does not take into account the possibility of reverse causation from performance to capital structure. If firm performance affects the choice of capital structure, then failure to take this reverse causality into account may result in simultaneous-equations bias. That is, regressions of firm performance on a measure of leverage may confound the effects of capital structure on performance with the effects of performance on capital structure.

We address this problem by allowing for reverse causality from performance to capital structure.
We discuss below two hypotheses for why firm performance may affect the choice of capital structure, the efficiency-risk hypothesis and the franchise-value hypothesis. We construct a two-equation structural model and estimate it using two-stage least squares (2SLS). An equation specifying profit efficiency as a function of the firm's equity capital ratio and other variables is used to test the agency costs hypothesis, and an equation specifying the equity capital ratio as a function of the firm's profit efficiency and other variables is used to test the net effects of the efficiency-risk and franchise-value hypotheses. Both equations are econometrically identified through exclusion restrictions that are consistent with the theories.

Third, some of the prior studies did not take ownership structure into account. Under virtually any theory of agency costs, ownership structure is important, since it is the separation of ownership and control that creates agency costs (e.g., Barnea, Haugen, and Senbet 1985). Greater insider shares may reduce agency costs, although the effect may be reversed at very high levels of insider holdings (e.g., Morck, Shleifer, and Vishny 1988). As well, outside block ownership or institutional holdings tend to mitigate agency costs by creating a relatively efficient monitor of the managers (e.g., Shleifer and Vishny 1986). Exclusion of the ownership variables may bias the test results because the ownership variables may be correlated with the dependent variable in the agency cost equation (performance) and with the key exogenous variable (leverage) through the reverse causality hypotheses noted above.

To address this third problem, we include ownership structure variables in the agency cost equation explaining profit efficiency.
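The identification logic of the two-equation setup can be illustrated with a minimal two-stage least squares sketch. This is a hypothetical toy example with simulated data and invented coefficients, not the authors' actual specification; it only shows why instrumenting the endogenous regressor matters when causality runs both ways:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Hypothetical instruments, one per equation (the exclusion restrictions):
z_cap = rng.normal(size=n)   # shifts the capital ratio but not efficiency directly
z_eff = rng.normal(size=n)   # shifts efficiency but not the capital ratio directly
u = rng.normal(size=n)       # common shock -> simultaneity / endogeneity

# Toy data-generating process: the true effect of capital on efficiency is -0.6
capital = z_cap + u + 0.5 * rng.normal(size=n)
efficiency = -0.6 * capital + 0.8 * z_eff - u + 0.5 * rng.normal(size=n)

def fit(y, x):
    """OLS of y on x with an intercept; returns (coefficients, fitted values)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta, X @ beta

beta_ols = fit(efficiency, capital)[0][1]      # biased: capital correlates with u
_, capital_hat = fit(capital, z_cap)           # first stage: project on instrument
beta_2sls = fit(efficiency, capital_hat)[0][1] # second stage: use fitted values

print(f"true effect: -0.60  OLS: {beta_ols:.2f}  2SLS: {beta_2sls:.2f}")
```

The OLS slope absorbs the common shock and overstates the negative effect, while the 2SLS estimate recovers something close to the true -0.6, which is the point of the exclusion restrictions described above.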
We include insider ownership, outside block holdings, and institutional holdings.

Our application to data from the banking industry is advantageous because of the abundance of quality data available on firms in this industry. In particular, we have detailed financial data for a large number of firms producing comparable products with similar technologies, and information on market prices and other exogenous conditions in the local markets in which they operate. In addition, some studies in this literature find evidence of the link between the efficiency of firms and variables that are recognized to affect agency costs, including leverage and ownership structure (see Berger and Mester 1997 for a review).

Although banking is a regulated industry, banks are subject to the same type of agency costs and other influences on behavior as other industries. The banks in the sample are subject to essentially equal regulatory constraints, and we focus on differences across banks, not between banks and other firms. Most banks are well above the regulatory capital minimums, and our results are based primarily on differences at the margin, rather than the effects of regulation. Our test of the agency costs hypothesis using data from one industry may be built upon to test a number of corporate finance hypotheses using information on virtually any industry.

We test the agency costs hypothesis of corporate finance, under which high leverage reduces the agency costs of outside equity and increases firm value by constraining or encouraging managers to act more in the interests of shareholders. Our use of profit efficiency as an indicator of firm performance to measure agency costs, our specification of a two-equation structural model that takes into account reverse causality from firm performance to capital structure, and our inclusion of measures of ownership structure address problems in the extant empirical literature that may help explain why prior empirical results have been mixed.
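The profit-efficiency benchmark described above — scoring each firm against the best-practice performer facing the same exogenous conditions — can be sketched with a toy computation. The banks, markets, and profit figures below are invented purely for illustration:

```python
# Hypothetical banks grouped by comparable exogenous conditions (local market);
# each is scored against the best performer facing the same conditions.
firms = {
    "bank_A": {"market": "urban", "profit": 80.0},
    "bank_B": {"market": "urban", "profit": 100.0},  # urban best practice
    "bank_C": {"market": "rural", "profit": 40.0},
    "bank_D": {"market": "rural", "profit": 50.0},   # rural best practice
}

# Find the best-practice profit within each group of comparable conditions.
best = {}
for f in firms.values():
    best[f["market"]] = max(best.get(f["market"], 0.0), f["profit"])

# Profit efficiency: how close each firm comes to its own benchmark.
profit_efficiency = {name: f["profit"] / best[f["market"]]
                     for name, f in firms.items()}
print(profit_efficiency)
```

Note that bank_A and bank_C come out equally efficient despite very different raw profits, because each is compared only with best practice under its own market conditions — the property that a raw profit ratio would miss.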
Our application to the banking industry is advantageous because of the detailed data available on a large number of comparable firms and the exogenous conditions in their local markets. Although banks are regulated, we focus on differences across banks that are driven by corporate governance issues, rather than any differences in regulation, given that all banks are subject to essentially the same regulatory framework and most banks are well above the regulatory capital minimums.

Our findings are consistent with the agency costs hypothesis – higher leverage or a lower equity capital ratio is associated with higher profit efficiency, all else equal. The effect is economically significant as well as statistically significant. An increase in leverage as represented by a 1 percentage point decrease in the equity capital ratio yields a predicted increase in profit efficiency of about 6 percentage points, or a gain of about 10% in actual profits at the sample mean. This result is robust to a number of specification changes, including different measures of performance (standard profit efficiency, alternative profit efficiency, and return on equity), different econometric techniques (two-stage least squares and OLS), different efficiency measurement methods (distribution-free and fixed-effects), different samples (the "ownership sample" of banks with detailed ownership data and the "full sample" of banks), and different sample periods (1990s and 1980s).

However, the data are not consistent with the prediction that the relationship between performance and leverage may be reversed when leverage is very high due to the agency costs of outside debt.

We also find that profit efficiency is responsive to the ownership structure of the firm, consistent with agency theory and our argument that profit efficiency embeds agency costs.
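The reported magnitude can be made concrete with simple arithmetic. The mean-efficiency figure below is an assumption chosen only so that the stated 6-percentage-point and ~10% numbers line up; it is not taken from the paper:

```python
# Reported marginal effect: a 1 percentage point drop in the equity capital
# ratio predicts roughly 6 percentage points higher profit efficiency.
effect_per_pp = 6.0  # pp of profit efficiency per pp of capital ratio

# Hypothetical sample mean: if mean profit efficiency were ~60%,
# a 6 pp gain would be a ~10% relative gain, matching the text.
mean_profit_efficiency = 60.0  # percent (assumed for illustration)

delta_capital_ratio = -1.0  # change in equity capital ratio, in pp
gain_pp = -delta_capital_ratio * effect_per_pp          # absolute gain, pp
relative_gain = gain_pp / mean_profit_efficiency * 100  # relative gain, %
print(f"predicted gain: {gain_pp:.1f} pp (~{relative_gain:.0f}% of the mean)")
```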
The data suggest that large institutional holders have favorable monitoring effects that reduce agency costs, although large individual investors do not. As well, the data are consistent with a non-monotonic relationship between performance and insider ownership, similar to findings in the literature.

With respect to the reverse causality from efficiency to capital structure, we offer two competing hypotheses with opposite predictions, and we interpret our tests as determining which hypothesis empirically dominates the other. Under the efficiency-risk hypothesis, the expected high earnings from greater profit efficiency substitute for equity capital in protecting the firm from the expected costs of bankruptcy or financial distress, whereas under the franchise-value hypothesis, firms try to protect the expected income stream from high profit efficiency by holding additional equity capital. Neither hypothesis dominates the other for the ownership sample, but the substitution effect of the efficiency-risk hypothesis dominates for the full sample, suggesting a difference in behavior for the small banks that comprise most of the full sample.

The approach developed in this paper can be built upon to test the agency costs hypothesis or other corporate finance hypotheses using data from virtually any industry. Future research could extend the analysis to cover other dimensions of capital structure. Agency theory suggests complex relationships between agency costs and different types of securities. We have analyzed only one dimension of capital structure, the equity capital ratio. Future research could consider other dimensions, such as the use of subordinated notes and debentures, or other individual debt or equity instruments.

Translation: Capital Structure and Firm Performance
Material Source: Board of Governors of the Federal Reserve System
Author: Allen N. Berger
In both financial and nonfinancial industries, agency costs represent important problems in corporate governance.

Popper Reduces Scientific Discovery to Progressing from Error to Truth (English Essays)


Main text: Popper reduces scientific discovery to progressing from error to truth. The full text consists of three sample essays for readers' reference.

Essay 1

Popper's Falsification and the Path of Science

When I first learned about Sir Karl Popper's views on the philosophy of science, I have to admit I was a bit perplexed. Popper argued that the way science advances is by scientists continuously putting forth bold theories and then trying their hardest to falsify or refute those theories through stringent testing. If the theories withstand serious attempts at falsification, they are provisionally retained. But if they are falsified by observable evidence, they must be rejected or revised.

This seemed backwards to me at first. Isn't the goal of science to ultimately arrive at profound truths about the universe through empirical investigation? Why would Popper claim that science doesn't deal in ultimate truths at all, but merely erects theoretical structures that have so far withstood our efforts to knock them down? Asserting that science progresses "from error to error" seems like an awfully pessimistic view.

However, the more I studied Popper's critical rationalism, the more I came to see the wisdom and importance of his ideas. Popper was reacting against the rigid empiricism and verificationism of the logical positivists who demanded that scientific knowledge be proven with certainty through pure observation and inductive reasoning. Popper rightly pointed out that this is an unrealistic and unattainable standard.
No matter how many observations seem to confirm a theory, it can never be proven with total certainty because there is always the possibility of a future observation that contradicts and falsifies it.

Instead, Popper argued that scientific theories can only ever be provisionally retained as being the best explanations we have so far until contradictory evidence emerges. This forces scientists to hold even their most cherished theories tentatively and be willing to modify or abandon them if the observable evidence demands it. As Popper put it, "The old scientific ideal of episteme – of absolutely certain, demonstrable knowledge – has proved to be an idol. The demand for scientific objectivity makes it inevitable that every scientific statement must remain tentative for ever."

What I really grew to appreciate about Popper's philosophy is that it encourages a mindset of constant critical scrutiny, skepticism of dogma, and willingness to change one's views in the face of new evidence. This is the essence of the true scientific temperament – to never cling stubbornly to ideas just because we want them to be true, but to always follow the path of reason and observable reality wherever it leads.

Popper used the vivid metaphor of scientists as daring plank-builders who devise bold theoretical planks to cross the ocean of the unknown. They build their plank as far out as they dare, secured only by the flimsiest anchor of tested knowledge, constantly extending it outwards to explore new domains. But they must always be ready to demolish or modify their plank if it doesn't hold up to rigorous testing.

To Popper, science advances by this constant process of conjectures and refutations. We put forth daring conjectures or hypotheses that attempt to explain aspects of reality. We then expose these conjectures to the most strenuous attempts at refutation that we can devise through precise observation and experimentation.
Those theories that do get falsified are discarded as errors, while the survivors are retained – but only tentatively until contradicted. In this way, science progresses not by proving absolute truths, but by discarding error after error in favor of better approximations of the truth.

The history of science is replete with examples of dominant theories that were later overturned or modified through bold conjectures and rigorous refutation. The geocentric model of the universe reigned for centuries until it was falsified by the heliocentric model of Copernicus, Galileo and Kepler. Newton's classical physics stood as the paramount theory until it was superseded in certain domains by Einstein's theories of relativity. The idea of the immutable gene was eventually overturned by our understanding of genetic mutation, horizontal gene transfer, and epigenetic expression.

Each new scientific revolution involved daring scientists putting forth bold conjectures that contradicted and ultimately falsified the old paradigms through painstaking empirical scrutiny. As Popper said, "Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve."

So in this light, science doesn't progress linearly towards final truths, but in fitful bursts, zigs and zags as old errors are discarded for newer and more empirically adequate conjectures. Popper likened it to a Socratic discussion where we continually uncover problems and revise our theories through the provocation of new arguments and careful refutation.

There is undoubtedly a sense of provisionality and humility in Popper's view – we can never attain perfect, certain truth, only contingent approximations subject to revision. But I've come to see this as a strength rather than a weakness of science. It means our scientific knowledge remains flexible, open to change, and deeply rooted in observable reality rather than rigid doctrine.
As Popper said, "The autonomy of science, guaranteed by incessant criticism and forever undergoing risk to survive, provides the firmest security system for science."

So after studying Popper at length, I've come to embrace his perspective of science as boldly conjecturing solutions and then striving indefatigably to criticize and refute those solutions. By eliminating errors one by one, and revising our theories to account for contradictory evidence, science advances step by step towards fuller and more empirically robust explanations of the observable world. We may never attain ultimate truth, but we can progressively shed error after error in its pursuit.

Essay 2

Scientific Discovery: Popper's View of Progressing from Error to Truth

As students of science, we are taught from an early age that the scientific method is the path to uncovering objective truths about the natural world. We dutifully memorize the steps – make an observation, form a hypothesis, design an experiment to test the hypothesis, analyze the results, and draw a conclusion. If our results support the hypothesis, we consider it to be a valid theory that explains the phenomenon we investigated.

However, the renowned philosopher of science Karl Popper held a radically different view of how scientific knowledge advances. In his seminal work "The Logic of Scientific Discovery," Popper argued that the classical scientific method is fundamentally flawed because it is impossible to prove a theory is true through observations or experiments, no matter how much data we collect supporting it. Instead, Popper proposed that science progresses by continually attempting to falsify or disprove accepted theories through rigorous testing.

Popper's core premise is that no number of confirming observations or experiments can establish a scientific theory as true with absolute certainty. This is because it is always possible that a future observation or test could arise that contradicts the theory.
For example, physicists once considered Newtonian mechanics to be an unassailable truth based on centuries of observations that aligned with its principles. However, this theory was later shown to be incomplete and only an approximation through experiments that revealed the bizarre nature of physics at the quantum scale.

Instead of naively seeking to prove theories true, Popper advocated for scientists to approach theories as permanent sources of potential error or falsehoods that must be ruthlessly scrutinized. A theory can only be considered scientifically valuable if it is inherently falsifiable – meaning it generates testable predictions or premises that could reveal the theory to be false if contradictory evidence emerges.

Through this process of "surviving" strenuous attempts at falsification, Popper believed that theories could provisionally be accepted as closer approximations of the truth, but never proven to be perfect or complete representations of reality. Each failed attempt to falsify strengthens a theory's credibility and explanatory power, but it is always subject to being revised or discarded if future observations reveal flaws.

For example, Einstein's theory of relativity made very precise, quantifiable predictions about phenomena like the bending of light by gravitational fields. When astronomers observed stars positioned precisely where Einstein's equations predicted during a solar eclipse, it was considered a falsification of Newton's established laws. However, rather than taking this as final proof, scientists have continued rigorously testing relativity for over a century through experiments probing the theory's limits in more extreme scenarios. The theory remains unchallenged, but open to being potentially superseded.

From Popper's perspective, science self-corrects and progresses through a Darwinian competition amongst theories to survive increasingly stringent tests.
The fittest theories live on, being further refined by new evidence, while flawed or limited theories are culled from the body of accepted scientific knowledge.

By adopting this critical mindset of potential falsification, Popper felt science could avoid falling into dogmatism and blind allegiance to potentially flawed axioms or doctrines, as he believed had occurred in fields like Freudian psychoanalysis and Marxist economic theory. Instead, an enduring culture of scrutiny and openness to revising even our most fundamental beliefs in light of new evidence is vital for expanding the frontiers of human knowledge.

Popper used the example of Einstein's revolutionary theory of relativity displacing long-held notions of absolute space and time to illustrate how truly transformative scientific breakthroughs often originate from admitting the flaws in existing paradigms. He argued that if Einstein had merely sought confirmations of Newton's teachings, he would never have conceived such a radically different perspective.

Of course, not all students may find Popper's philosophy of science intuitive or appealing. Critics argue that his emphasis on seeking falsifications rather than verifications is an unnecessary constraint that could actually impede scientific progress. If we are overly preoccupied with finding reasons why theories might be wrong, we may fail to thoroughly explore and expand upon their useful applications and predictive power.

Additionally, some contend that Popper's falsification principle sets an unrealistic standard, as it is effectively impossible to definitively rule out any theory with 100% certainty through a finite set of observations or experiments. There will always be some possibility that future evidence could revive a theory previously considered falsified.

Nonetheless, Popper's overarching emphasis on maintaining a critical, skeptical attitude and willingness to challenge even our most deeply held assumptions resonates with many scholars.
His philosophy reminds us that no scientific theory, no matter how comprehensive or well-established, should be blindly accepted as infallible truth. There must always be space for new evidence and ideas to emerge that could revolutionize our understanding, or even expose folly in long-accepted tenets.

As students, we would be wise to embrace Popper's humble perspective that all scientific knowledge is inherently provisional, incomplete, and open to revision through a process of continual error-correction. Each theory we learn represents the culmination of accumulated scrutiny withstanding arduous attempts at falsification by generations of inquisitive minds. However, these theories should not be dogmas etched in stone, but subjected to the same critical evaluation that allowed them to displace previous flawed models.

Ultimately, Popper viewed science not as a linear pursuit of proving universal truths, but as an evolutionary process of discarding errors and developing ever-closer approximations of how the natural world operates. By treating even our most compelling theories as potential sources of error to scrutinize, we open the doors for superior explanations to emerge and our collective understanding to progress. Science advances not through perfection, but by paradoxically admitting the permanence of imperfection in our theories, and seeking to identify and correct those flaws.

Essay 3

From Error to Truth: Karl Popper's Revolutionary View on Scientific Discovery

As students of science, we are often taught that the scientific method is a logical and systematic process of formulating hypotheses, conducting experiments, and using empirical evidence to accept or reject those hypotheses. However, the renowned philosopher Karl Popper challenged this conventional view, proposing a radically different perspective on how scientific discoveries are made.
In his seminal work, "The Logic of Scientific Discovery," Popper argued that scientific progress is not a linear accumulation of knowledge but rather a continuous process of trial and error, where theories are constantly subjected to rigorous testing and potential falsification.

At the heart of Popper's philosophy lies the principle of falsifiability. According to Popper, a theory or hypothesis is scientific only if it is formulated in such a way that it can be empirically tested and potentially proven false. This criterion distinguishes science from pseudoscience, which often relies on unfalsifiable claims or explanations that are immune to refutation. Popper believed that the true essence of science lies not in the process of verifying theories but in the earnest attempt to falsify them through rigorous experimentation and observation.

Popper's revolutionary idea challenged the widely accepted inductivist approach, which held that scientific knowledge is built upon repeated observations and the gradual accumulation of evidence supporting a theory. Instead, he proposed a deductive approach, where scientists start with bold conjectures or hypotheses and then subject them to the most stringent tests possible in an attempt to find flaws or counterexamples. If a theory withstands these tests, it is provisionally accepted, but it is always open to further scrutiny and potential falsification by new evidence.

One of the key implications of Popper's philosophy is that scientific progress is driven by a continuous cycle of proposing theories, subjecting them to critical tests, and refining or replacing them with better explanations when they are falsified.
This process is often referred to as the "trial and error" method, where scientists learn from their mistakes and use them as stepping stones to advance their understanding of the world.

Popper's ideas have had a profound impact on the philosophy of science and have influenced generations of scientists across various disciplines. His emphasis on falsifiability has encouraged researchers to formulate precise and testable hypotheses, rather than vague or unfalsifiable claims. It has also fostered a culture of critical thinking and skepticism, where theories are constantly challenged and scrutinized, rather than accepted dogmatically.

Moreover, Popper's philosophy has shed light on the inherent fallibility of scientific knowledge. Unlike the traditional view of science as a steady accumulation of proven facts, Popper recognized that all scientific theories are provisional and subject to revision or replacement in light of new evidence or better explanations. This acknowledgment of the provisional nature of scientific knowledge has encouraged humility and open-mindedness among scientists, as well as a willingness to adapt and embrace new paradigms when warranted.

Critics of Popper's philosophy have argued that the strict application of falsifiability can lead to the premature rejection of promising theories or the inability to falsify certain theories due to practical limitations. Additionally, some have pointed out that the process of theory formulation and testing is not as clear-cut as Popper suggested, and that various psychological, social, and historical factors can influence the development of scientific knowledge.

Despite these criticisms, Popper's ideas remain influential and have inspired a rich tradition of critical rationalism in science.
His emphasis on the fallibility of human knowledge and the importance of subjecting theories to rigorous testing has contributed to the self-correcting nature of science and its ability to overcome dogmatism and stagnation.

As students of science, we can learn valuable lessons from Popper's philosophy. First, we must embrace a spirit of critical inquiry and be willing to challenge our own preconceptions and biases. Second, we should strive to formulate precise and falsifiable hypotheses, recognizing that the true progress of science lies in the potential for our theories to be refuted and replaced by better explanations. Third, we should cultivate a sense of humility and acknowledge that our current understanding of the world is always provisional and subject to revision in the face of new evidence or insights.

By embracing Popper's revolutionary perspective, we can appreciate the dynamic and self-correcting nature of science, where discoveries emerge not from the gradual accumulation of facts but from the continuous cycle of proposing bold conjectures, subjecting them to rigorous testing, and learning from our errors. It is through this process of trial and error that we can advance our understanding of the world and move ever closer to the elusive goal of truth.

Urban Sociology Notes (Complete Condensed Edition)


Preface

I. What is urban sociology?
Urban sociology is a sub-discipline of sociology that examines the nature of city life and urban social issues, how they are interrelated, and how a sociological approach helps us understand both the roots of these urban "problems" and the consequences for individuals, communities and societies. You will learn the historical experiences, theoretical explanations and solutions devised concerning today's urban problems. The ability to critically assess current and future urban policies in comparative perspective is essential in our increasingly interdependent, global urban world.

II. Sociological perspectives on urbanism
1. The City as Social Organization
These thinkers emphasized the functions cities perform and their types of organization.
• Max Weber (1864–1920)
The city performs economic, legal, and protective functions. We can use formal organization, power and authority to analyse urban governments and formal structures.
• Émile Durkheim (1858–1917)
The division of labor created a mutual interdependence among various segments of the population, so that organic solidarity holds people together. Mechanical solidarity is the basis of social organization in traditional societies.
• Henry Maine (1822–1888) (Textbook p. 5)
Social agreement or contract; ascribed status & achieved status (Ralph Linton, 1893–1953)
2. The City as Evil
• Oswald Spengler (1880–1936)
Loss of the nature-based "soul"
• Georg Simmel (1858–1918)
The city as an agent of social and psychological change; urban life is full of inconsistencies
3. The City as a Way of Life: Urbanism
Louis Wirth (1897–1952)
a. Urbanism was a function of population density, size and heterogeneity.
b.
A term used by Louis Wirth to denote the distinctive characteristics of urban social life, such as its impersonality. Urbanism as a characteristic mode of life may be approached empirically from three interrelated perspectives: (1) as a physical structure comprising a population base, a technology, and an ecological order; (2) as a system of social organization involving a characteristic social structure, a series of social institutions, and a typical pattern of social relationships; and (3) as a set of attitudes and ideas, and a constellation of personalities engaging in typical forms of collective behavior and subject to characteristic mechanisms of social control.

Chapter 1

I. Urbanization is, to a certain extent, synonymous with urban development. Urbanization: the concentration of humanity into cities. Urbanization is a population process through which a growing percentage of people shift their residences from rural to urban areas.

II. City-state
An independent political unit consisting of a city and the surrounding countryside. A city-state, also called a city-nation, is a community of citizens that evolved from primitive communes under certain historical conditions.

Foreign Literature Translation: Innovation Performance Evaluation


Original text: Innovation Performance Measurement
Peter Schentler, Frank Lindner, and Ronald Gleich

In order to be able to retain or increase the market share of a company despite high competitive pressure or saturated markets, innovations have become increasingly important. 89% of the respondents of a recent study answered that their company needs major innovations in order to meet financial goals. Studies also show that companies with a high innovation rate are more successful than their competitors. This applies not only to the production of goods, but to the creation of services as well.

Even though innovations are very important for a company's current and future success, a high percentage of innovation projects fail. Hence, many companies invest huge amounts of resources in innovation projects which do not pay off in the end. In order to decrease the risks of innovation projects and to minimize the squandering of resources, it is necessary to engage only in innovation projects which have the potential to lead to success, and to ensure an efficient execution of the innovation projects. Considering the fact that innovations are characterized by high levels of risk and uncertainty[8] and the difficulty of planning their success, companies should establish an innovation controlling aimed at increasing the level of successful innovations. By using innovation controlling, it should be possible to allocate resources to the most promising projects and to stop projects which do not fit corporate strategy or market needs, or which are unfeasible or unprofitable.

For planning and steering innovation activities, measurement systems are applied. These measurement systems often focus on financial targets, or on particular (either strategic or operational) levels of innovation, or mix up different levels.
But it is necessary to cover all fields of innovation and of the innovation process (idea generation, research and development, market entry) as well as to find appropriate solutions for different levels of innovation. In addition, other facets of innovation management, such as cooperation with suppliers, competitors or customers, must be considered.

Due to these problems, existing measurement systems do not fit the requirements of companies. In order to establish a successful innovation controlling covering the discrepancies of existing systems, a concept covering all levels of innovation performance has to be developed. A performance measurement system can serve as a basis for such a controlling concept.

Therefore, the goal of this paper is to conceptualize a performance measurement system for innovation. To achieve this, the following procedure is used:
• In Chap. 2, the state of the art of performance measurement is put into focus.
• In Chap. 3, the application of performance measurement in the field of innovation management is introduced.
• In Chap. 4, the different levels of performance measurement are shown.
An overview of the results and the demand for further research is given in Chap. 5.

2 Performance Measurement

Performance measurement systems were developed because science and practice have come to the conclusion that traditional, financially oriented measurement systems provide limited use for the sustainable management and controlling of a company. The rising criticism covers aspects like the disregard of non-financial parameters, the missing alignment to corporate strategy, the backward-looking view, the short-term perspective, the insufficient customer orientation and misleading reference points for incentives. Several new concepts have been developed, which are subsumed under the term 'performance measurement'.
The best-known and most prominent performance measurement system, as studies show, is the Balanced Scorecard.

Performance measurement fosters two issues: target setting tailored to fit beneficiaries and performance levels, and the operationalization of strategy and its translation into quantified goals. This involves the setup and use of several KPIs of various dimensions (e.g. costs, time, quality, ability to innovate, customer satisfaction and flexibility), which serve as a basis for the evaluation of the effectiveness and efficiency of the performance and the performance potentials of different objects, so-called performance levels (e.g. organizational units, processes and employees). Performance measurement should not only be used for the evaluation and measurement of past actions, but mainly focus on future-oriented management and controlling activities.

Although a lot of papers have been written about this topic, a definition of 'performance measurement' is still discussed in the literature. There is still no common meaning, nor is there general agreement on its characteristics. However, it can be derived that performance measurement systems should comply with the following requirements:
• Provide past- and future-oriented management control information
• Reflect the demands of both internal and external stakeholders
• Provide a basis on corporate and divisional levels
• Contain financial KPIs which can be extended by non-financial parameters that influence the long-term financial performance capabilities of a company
• Contain not only quantitative (hard facts) but also qualitative (soft facts) information
• Provide strategic and operational KPIs
• Support continuous improvement.
When applying this comprehensive understanding of performance measurement, the question whether it is more than just another measurement concept can be answered with a yes.
But studies show that, in practice, performance measurement systems often do not meet the requirements formulated in theory. Ten determinants, which are also featured in a comprehensive empirical study, clearly show the shortcomings that often exist in the practical implementation of performance measurement. In the past, applied controlling concepts were in most cases not structured along the lines of progressive performance measurement (see Fig. 1, left column). Currently, controlling systems in many companies are already equipped to meet these new requirements (central column). In the future, the development and implementation of progressive performance measurement will be needed in order to tackle the increasing and multidimensional requirements of markets and customers on the management and controlling structures of a company (right column).

Examples of a progressive performance measurement system can only be found in practice in isolated cases. It can be assumed, and has been proven empirically for these cases, that progressive performance measurement leads to higher profitability in comparison to other companies in the same sector with less developed performance measurement systems.

At the end of this chapter it has to be noted that performance measurement and management control concepts in the English-speaking world, as well as controlling concepts in German-speaking areas, have many interdependencies. The underlying aim behind these concepts is to steer or influence the behavior of members of an organization, especially managers, in such a way as to increase the likelihood of achieving goals. Performance measurement, which forms an interface between planning, control, and information systems in the same way as controlling systems do, does not only enhance controlling in terms of time and target groups, but also by means of the format information is delivered in (qualitative information instead of quantitative information) and by means of non-financial indicators.
Nevertheless, performance measurement should still be regarded as an element (subsystem) of the controlling system with a special focus on supporting strategy implementation.

3 Performance Measurement for Innovation

A performance measurement system for a company is very complex. To reduce the complexity, a differentiation via subsystems, such as performance measurement systems for different departments or for main and cross-divisional corporate functions, is necessary. Based on the insight that innovation management has to be both effective and efficient and that it demands particular attention besides other, more routine, activities, it can be assumed that innovation is one of these subsystems and that an innovation performance measurement should be established.

However, the question arises why innovation is so important that a particular performance measurement system is necessary. To clarify this, innovations need to be defined. They can be characterized by the following attributes:
• Strategic relevance
• Uncertainty of outcome
• Fundamental investments
• Complexity, cross-functional tasks
• Knowledge- and collaboration-intensive processes
• Involving internal and external stakeholders
• Difficult to plan because of their novelty.
It becomes evident that, on the one hand, innovations have a great importance for the medium- and long-term success of companies. Therefore, a company has to ensure that innovations are managed effectively. But, on the other hand, innovations are insecure, uncertain, and involve a lot of different internal and external stakeholders. Therefore, their success is difficult to predict. This leads to a dilemma: the more innovations a company pursues and the more fundamental those innovations are, the more important planning and controlling of these innovations becomes. But the higher the number of parallel innovation projects and the more radical their scope, the more difficult planning and controlling are.
Performance measurement for innovation should help to cope with this situation. To develop a performance measurement system, Neely et al. suggest the following procedure:
1. Decide what should be measured.
2. Decide how it is going to be measured.
3. Collect the appropriate data.
4. Eliminate conflicts in the measurement system.
Points 1 and 2 are discussed in this paper. Points 3 and 4 are not included in the following explanations, because these steps are company specific.

To be able to conceptualize a performance measurement system for innovation and to decide what needs to be measured, a common understanding of innovation management is necessary. Innovation management is the conception and implementation of a company's innovation system. It covers both R&D and non-R&D innovations, which can be products, services or processes, or be concerned with the applied business model. Based on existing approaches to conceptualizing innovation management ability, the following dimensions can be seen as parts of innovation management:
• Innovation strategy (and portfolio)
• Innovation culture
• Innovation structure
• Innovation competences and learning
Consequently, innovation management capability and performance (not equal to innovation performance) represents a multi-dimensional framework. Thus, measuring the performance of innovation management needs to picture the different dimensions holistically.

The measurement of performance on all three levels allows a detailed understanding of innovation activities and results as well as of strategy implementation. It is of great significance to link the different levels and aspects to each other. Starting top down, the innovation strategy needs to be considered in the innovation culture, innovation competences/learning and innovation structure, as well as, via the different innovation fields, in the innovation portfolio.
The strategic decisions made on the first level need to be translated into specific goals and activities as input for the other dimensions and levels. The goals of the multi-project landscape need to be split up into different projects. Thinking bottom up, the status reports of single projects are aggregated as an input for the portfolio management on the second performance level, and the portfolios themselves on the overall level.

The definition of an innovation strategy represents a crucial element of a holistic innovation management system. The innovation strategy is derived from the company strategy and should thus be aligned with the latter. During the strategy process, innovation fields are determined. These are topics/fields in which the company wants to foster innovations. One innovation field can comprise one or a multitude of innovation projects. According to Cooper et al., the innovation strategy has to embrace different kinds of decisions:
• Selection of target markets, products and technologies to invest into
• Allocation of resources corresponding to each field of innovation
• Preselection of specific ideas and projects within the innovation fields
• Ensuring a balanced innovation portfolio which fits the identified targets, available resources, and time horizons.

The innovation culture is the sum of the innovation-related attitudes, experiences, beliefs and values of the employees in an organization. Innovation culture has a coordinating function, drives innovation activities and represents the environment in which they take place. A company can only be innovative if the overall culture in the company allows and supports this. "Companies that know how to innovate don't necessarily throw money into R&D. Instead they cultivate a new style of corporate behavior that's comfortable with new ideas, change, risk, and even failure." In order to measure the performance of the cultural dimension, the measures developed by Amabile et al. can be applied.
They suggest criteria in the following dimensions:
• Encouragement of creativity (organizational encouragement, supervisory encouragement, work group encouragement), e.g. readiness to take risks, fairness of idea evaluation, recognition and rewarding practice for creativity
• Freedom and autonomy as a prerequisite for innovative work
• Resource adequacy and its effect on motivation
• Pressure, between fostering efficiency and inhibiting creativity
• Organizational aspects that impede creativity.

The innovation structure contains the innovation-related organizational aspects of a company. The organizational structure represents the backbone of innovation processes and of innovation projects. It links structured activities with roles and responsibilities. Several authors suggest measures for innovation processes covering cost, time and quality dimensions as well as profit and customer satisfaction. However, not only process structure but also organizational structure should be considered in this dimension. This includes the appropriateness of roles and responsibilities and of formal structures to execute innovation processes (e.g. decision boards, innovation teams and innovation project offices).

Innovation competences and learning represent the basis of innovation activities. Innovation derives from the combination of previously unconnected knowledge; thus both the ability and the performance in developing knowledge and building up competences form a crucial part of an innovation management system. Sammerl distinguishes between internal and external learning of an organization:
• Internal learning refers to the creation of new knowledge within the company and is based on existing internal resources and people.
• External learning refers to the integration of knowledge from outside the company, e.g. from partners, competitors, research institutes or customers.
This demands certain learning processes and structures to achieve permeability of knowledge from the company's environment. Measuring organizational learning ability and performance leads to a number of methodological problems due to the intangible nature of knowledge and learning. Therefore, approaches to measure learning ability and performance should focus on the following factors:
• Learning behavior of the members of the organization
• Management commitment to learning and knowledge management
• Openness and experimentation
• The exchange of knowledge
• Social networks used for knowledge transfer
• Systematic knowledge management.

4 Design of a Performance Measurement System

Even if the 'soft' aspects that have been mentioned are difficult to plan and steer, they have to be considered in a holistic innovation performance measurement system. Therefore, a company needs to find measures which enable it to control culture, competences and structure. Suggestions for measures were given during the description of the different dimensions of innovation management above.

Among the different performance measurement approaches (see Chap. 2), the balanced scorecard appears most suitable for the measurement of innovation performance on company level. In order to consider the holistic approach of innovation management, the following dimensions of the innovation scorecard, covering the different parts of the innovation management system, are recommended: portfolio, culture, structure (organization, processes), competences/learning and financials.

It has to be noted that companies should, despite all balanced scorecard euphoria, be careful with the isolated introduction of the balanced scorecard concept, as it is heavily focused on strategic performance measurement and many of the components mentioned in Chaps. 2 and 3 are not per se taken into consideration. A balanced scorecard certainly does, as a rule, form the basis for a comprehensive application of performance management.
But there are more levels and aspects which have to be addressed. Only then can those ideas, concepts, and strategies be brought into (operative) action right down to the last performance level.

Source: Innovation and International Corporate Growth. 2010: 299-317.
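The five scorecard dimensions recommended in the chapter above (portfolio, culture, structure, competences/learning, financials) can be sketched as a small data structure. This is a minimal illustration, not an implementation from the chapter; the KPI names, targets, and the simple target-attainment aggregation are assumptions made for the example:

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    target: float
    actual: float

    @property
    def attainment(self) -> float:
        # Degree of target attainment (actual relative to target)
        return self.actual / self.target

@dataclass
class InnovationScorecard:
    # The five dimensions recommended for an innovation scorecard
    dimensions: dict = field(default_factory=lambda: {
        "portfolio": [], "culture": [], "structure": [],
        "competences_learning": [], "financials": [],
    })

    def add(self, dimension: str, kpi: KPI) -> None:
        self.dimensions[dimension].append(kpi)

    def dimension_score(self, dimension: str) -> float:
        # Unweighted mean attainment across the dimension's KPIs
        kpis = self.dimensions[dimension]
        return sum(k.attainment for k in kpis) / len(kpis)

sc = InnovationScorecard()
sc.add("portfolio", KPI("share of revenue from products under 3 years old", 0.30, 0.24))
sc.add("culture", KPI("ideas submitted per employee per year", 2.0, 2.5))
print(round(sc.dimension_score("portfolio"), 2))  # 0.8
```

A real innovation scorecard would weight KPIs and mix quantitative and qualitative measures; the point here is only the grouping of KPIs under the recommended dimensions.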

Positive Accounting Theory

Positive Accounting Theory: A Ten Year Perspective
Ross L. Watts and Jerold L. Zimmerman
THE ACCOUNTING REVIEW
ABSTRACT

This paper reviews and critiques the positive accounting literature following publication of Watts and Zimmerman (1978, 1979).
I. Evolution and State of Positive Accounting Theory

1.1 Evolution




An important reason that the information perspective failed to generate hypotheses explaining and predicting accounting choice is that, in the finance theory underlying the empirical studies, accounting choice per se could not affect firm value. To predict and explain accounting choice, accounting researchers had to introduce information and/or transactions costs. The initial empirical studies in accounting choice used positive agency costs of debt and compensation contracts, and positive information and lobbying costs in the political process, to generate value effects for, and hence hypotheses about, accounting choice. Finance researchers had introduced costs of debt that increase with the debt/equity ratio to explain how optimal capital structures could vary across industries.

A9 Employment protection legislation, adjustment costs


Employment protection legislation, adjustment costs and cross-country differences in cost behavior

Abstract: Central to the economic theory of sticky costs is the proposition that managers consider adjustment costs when changing resource levels. We test this proposition using employment protection legislation (EPL) provisions in different countries as a proxy for labor adjustment costs. Using a large sample of firms in 19 OECD countries during 1990–2008, we find that the degree of cost stickiness at the firm level varies with the strictness of the country-level EPL provisions. This finding supports the theory that cost stickiness reflects the deliberate resource commitment decisions of managers in the presence of adjustment costs.


Translated article: Capital Structure and Firm Performance


Capital Structure and Firm Performance

1. Introduction

Agency costs represent important problems in corporate governance in both financial and nonfinancial industries. The separation of ownership and control in a professionally managed firm may result in managers exerting insufficient work effort, indulging in perquisites, choosing inputs or outputs that suit their own preferences, or otherwise failing to maximize firm value. In effect, the agency costs of outside ownership equal the lost value from professional managers maximizing their own utility, rather than the value of the firm.

Theory suggests that the choice of capital structure may help mitigate these agency costs. Under the agency costs hypothesis, high leverage or a low equity/asset ratio reduces the agency costs of outside equity and increases firm value by constraining or encouraging managers to act more in the interests of shareholders. Since the seminal paper by Jensen and Meckling (1976), a vast literature on such agency-theoretic explanations of capital structure has developed (see Harris and Raviv 1991 and Myers 2001 for reviews). Greater financial leverage may affect managers and reduce agency costs through the threat of liquidation, which causes personal losses to managers of salaries, reputation, perquisites, etc. (e.g., Grossman and Hart 1982, Williams 1987), and through pressure to generate cash flow to pay interest expenses (e.g., Jensen 1986). Higher leverage can mitigate conflicts between shareholders and managers concerning the choice of investment (e.g., Myers 1977), the amount of risk to undertake (e.g., Jensen and Meckling 1976, Williams 1987), the conditions under which the firm is liquidated (e.g., Harris and Raviv 1990), and dividend policy (e.g., Stulz 1990).

A testable prediction of this class of models is that increasing the leverage ratio should result in lower agency costs of outside equity and improved firm performance, all else held equal.
However, when leverage becomes relatively high, further increases generate significant agency costs of outside debt, including higher expected costs of bankruptcy or financial distress, arising from conflicts between bondholders and shareholders.1 Because it is difficult to distinguish empirically between the two sources of agency costs, we follow the literature and allow the relationship between total agency costs and leverage to be nonmonotonic.

Despite the importance of this theory, there is at best mixed empirical evidence in the extant literature (see Harris and Raviv 1991, Titman 2000, and Myers 2001 for reviews). Tests of the agency costs hypothesis typically regress measures of firm performance on the equity capital ratio or other indicator of leverage plus some control variables. At least three problems appear in the prior studies that we address in our application.

1 In the case of the banking industry studied here, there are also regulatory costs associated with very high leverage.

First, the measures of firm performance are usually ratios fashioned from financial statements or stock market prices, such as industry-adjusted operating margins or stock market returns. These measures do not net out the effects of differences in exogenous market factors that affect firm value, but are beyond management's control and therefore cannot reflect agency costs. Thus, the tests may be confounded by factors that are unrelated to agency costs.
As well, these studies generally do not set a separate benchmark for each firm's performance that would be realized if agency costs were minimized.

We address the measurement problem by using profit efficiency as our indicator of firm performance. The link between productive efficiency and agency costs was first suggested by Stigler (1976), and profit efficiency represents a refinement of the efficiency concept developed since that time.2 Profit efficiency evaluates how close a firm is to earning the profit that a best-practice firm would earn facing the same exogenous conditions. This has the benefit of controlling for factors outside the control of management that are not part of agency costs. In contrast, comparisons of standard financial ratios, stock market returns, and similar measures typically do not control for these exogenous factors. Even when the measures used in the literature are industry adjusted, they may not account for important differences across firms within an industry, such as local market conditions, as we are able to do with profit efficiency. In addition, the performance of a best-practice firm under the same exogenous conditions is a reasonable benchmark for how the firm would be expected to perform if agency costs were minimized.

Second, the prior research generally does not take into account the possibility of reverse causation from performance to capital structure. If firm performance affects the choice of capital structure, then failure to take this reverse causality into account may result in simultaneous-equations bias. That is, regressions of firm performance on a measure of leverage may confound the effects of capital structure on performance with the effects of performance on capital structure. We address this problem by allowing for reverse causality from performance to capital structure.
We discuss below two hypotheses for why firm performance may affect the choice of capital structure: the efficiency-risk hypothesis and the franchise-value hypothesis. We construct a two-equation structural model and estimate it using two-stage least squares (2SLS). An equation specifying profit efficiency as a function of the firm's equity capital ratio and other variables is used to test the agency costs hypothesis, and an equation specifying the equity capital ratio as a function of the firm's profit efficiency and other variables is used to test the net effects of the efficiency-risk and franchise-value hypotheses. Both equations are econometrically identified through exclusion restrictions that are consistent with the theories.

2 Stigler's argument was part of a broader exchange over whether productive efficiency (or X-efficiency) primarily reflects difficulties in reconciling the preferences of multiple optimizing agents, what is today called agency costs, versus "true" inefficiency, or failure to optimize (e.g., Stigler 1976, Leibenstein 1978).

Third, some, but not all, of the prior studies did not take ownership structure into account. Under virtually any theory of agency costs, ownership structure is important, since it is the separation of ownership and control that creates agency costs (e.g., Barnea, Haugen, and Senbet 1985). Greater insider shares may reduce agency costs, although the effect may be reversed at very high levels of insider holdings (e.g., Morck, Shleifer, and Vishny 1988). As well, outside block ownership or institutional holdings tend to mitigate agency costs by creating a relatively efficient monitor of the managers (e.g., Shleifer and Vishny 1986).
Exclusion of the ownership variables may bias the test results because the ownership variables may be correlated with the dependent variable in the agency cost equation (performance) and with the key exogenous variable (leverage) through the reverse causality hypotheses noted above.

To address this third problem, we include ownership structure variables in the agency cost equation explaining profit efficiency. We include insider ownership, outside block holdings, and institutional holdings.

Our application to data from the banking industry is advantageous because of the abundance of quality data available on firms in this industry. In particular, we have detailed financial data for a large number of firms producing comparable products with similar technologies, and information on market prices and other exogenous conditions in the local markets in which they operate. In addition, some studies in this literature find evidence of the link between the efficiency of firms and variables that are recognized to affect agency costs, including leverage and ownership structure (see Berger and Mester 1997 for a review).

Although banking is a regulated industry, banks are subject to the same type of agency costs and other influences on behavior as other industries. The banks in the sample are subject to essentially equal regulatory constraints, and we focus on differences across banks, not between banks and other firms. Most banks are well above the regulatory capital minimums, and our results are based primarily on differences at the margin.

2. Theories of reverse causality from performance to capital structure

As noted, prior research on agency costs generally does not take into account the possibility of reverse causation from performance to capital structure, which may result in simultaneous-equations bias. We offer two hypotheses of reverse causation based on violations of the Modigliani-Miller perfect-markets assumption.
It is assumed that various market imperfections (e.g., taxes, bankruptcy costs, asymmetric information) result in a balance between those favoring more versus less equity capital, and that differences in profit efficiency move the optimal equity capital ratio marginally up or down.

Under the efficiency-risk hypothesis, more efficient firms choose lower equity ratios than other firms, all else equal, because higher efficiency reduces the expected costs of bankruptcy and financial distress. Under this hypothesis, higher profit efficiency generates a higher expected return for a given capital structure, and the higher efficiency substitutes to some degree for equity capital in protecting the firm against future crises. This is a joint hypothesis that (i) profit efficiency is strongly positively associated with expected returns, and (ii) the higher expected returns from high efficiency are substituted for equity capital to manage risks.

The evidence is consistent with the first part of the hypothesis, i.e., that profit efficiency is strongly positively associated with expected returns in banking. Profit efficiency has been found to be significantly positively correlated with returns on equity and returns on assets (e.g., Berger and Mester 1997), and other evidence suggests that profit efficiency is relatively stable over time (e.g., DeYoung 1997), so that a finding of high current profit efficiency tends to yield high future expected returns.

The second part of the hypothesis, that higher expected returns for more efficient banks are substituted for equity capital, follows from a standard Altman z-score analysis of firm insolvency (Altman 1968). High expected returns and a high equity capital ratio can each serve as a buffer against portfolio risks to reduce the probabilities of incurring the costs of financial distress or bankruptcy, so firms with high expected returns owing to high profit efficiency can hold lower equity ratios.
The z-score is the number of standard deviations below the expected return that the actual return can go before equity is depleted and the firm is insolvent, z_i = (μ_i + ECAP_i)/σ_i, where μ_i and σ_i are the mean and standard deviation, respectively, of the rate of return on assets, and ECAP_i is the ratio of equity to assets. Based on the first part of the efficiency-risk hypothesis, firms with higher efficiency will have higher μ_i. Based on the second part of the hypothesis, a higher μ_i allows the firm to have a lower ECAP_i for a given z-score, so that more efficient firms may choose lower equity capital ratios.

[Footnote fragment: ...ratios for those that were fully owned by a single owner-manager. This may be an improvement in the analysis of agency costs for small firms, but it does not address our main issues of controlling for differences in exogenous conditions and in setting up individualized firm benchmarks for performance.]

Source: Raposo, Clara C. Capital Structure and Firm Performance. Journal of Finance. Blackwell Publishing. 2005, (6): 2701-2727.
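The substitution the excerpt describes, higher expected returns standing in for equity capital at a constant z-score, can be checked with a few lines of arithmetic. A minimal sketch; the figures are illustrative assumptions, not data from the paper:

```python
def z_score(mu: float, ecap: float, sigma: float) -> float:
    # z_i = (mu_i + ECAP_i) / sigma_i: how many standard deviations the
    # return on assets can fall below its mean before equity is depleted
    return (mu + ecap) / sigma

def ecap_for_target_z(z: float, mu: float, sigma: float) -> float:
    # Invert the z-score for the equity ratio: ECAP = z * sigma - mu
    return z * sigma - mu

# Illustrative figures: a bank with expected ROA of 1%, equity/assets of 8%
# and ROA volatility of 1% has an insolvency buffer of about 9 standard deviations
z = z_score(0.01, 0.08, 0.01)
# A more efficient bank with expected ROA of 2% can hold the same buffer
# with a lower equity ratio:
print(round(ecap_for_target_z(z, 0.02, 0.01), 4))  # 0.07
```

This is exactly the efficiency-risk logic: the extra percentage point of expected return replaces a percentage point of equity in the buffer, so the more efficient firm can choose 7% equity/assets instead of 8%.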

Behavioral Finance Literature 15: Leaning for the Tape: Evidence of Gaming Behaviour in Equity Mutual Funds


THE JOURNAL OF FINANCE • VOL. LVII, NO. 2 • APRIL 2002

Leaning for the Tape: Evidence of Gaming Behavior in Equity Mutual Funds

MARK M. CARHART, RON KANIEL, DAVID K. MUSTO, and ADAM V. REED*

ABSTRACT

We present evidence that fund managers inflate quarter-end portfolio prices with last-minute purchases of stocks already held. The magnitude of price inflation ranges from 0.5 percent per year for large-cap funds to well over 2 percent for small-cap funds. We find that the cross section of inflation matches the cross section of incentives from the flow/performance relation, that a surge of trading in the quarter's last minutes coincides with a surge in equity prices, and that the inflation is greatest for the stocks held by funds with the most incentive to inflate, controlling for the stocks' size and performance.

Quarter-end and especially year-end equity mutual fund prices are abnormally high. We present strong evidence that some mutual fund managers mark up their holdings at quarter end through aggressive trading of stocks they already hold. Funds with the greatest ability and most incentive to improve their performance exhibit the largest turn-of-quarter effect. Intra-daily data show a surge of transactions and transaction prices in the quarter's last few minutes, and fund-holdings data show a larger effect in the funds with the most incentive to mark up. Considering that open-end equity funds intermediate $3.46 trillion (year-end 1999),1 this turn-of-quarter inflation of their prices is a significant opportunity for potential sellers, and a significant hazard for everybody else.

In general, open-end domestic equity mutual funds calculate their net asset values per share (NAVs) from the closing transaction prices of their holdings.

* Carhart is from Goldman Sachs Asset Management, Kaniel is from the University of Texas, Musto is from the University of Pennsylvania, and Reed is from the University of North Carolina.
The authors thank Marshall Blume; Mercer Bullard; Susan Christoffersen; Dan Deli; Diane Del Guercio; Roger Edelen; Chris Geczy; Bruce Grundy; Don Keim; Alan Lee; Andrew Metrick; Rob Stambaugh; Laura Starks; Paula Tkac; Kent Womack; Jason Zweig; and seminar participants at the Securities Exchange Commission, Iowa, Texas, and the Wharton School; participants in the Western Finance Association meeting in Sun Valley, Idaho; the Academic/Practitioners Conference on Mutual Funds at the Investment Company Institute; and René Stulz and an anonymous referee for helpful comments and suggestions. Financial and other support from the Rodney L. White Center for Financial Research and the Wharton Financial Institutions Center is gratefully acknowledged. The views expressed are those of the authors alone, and do not necessarily reflect the views of Goldman Sachs Asset Management, Wharton, or UT.

1 Investment Company Institute, Mutual Fund Fact Book, 2000 Edition, p. 73. This figure excludes international funds.

While there are obvious benefits from pricing off the most recent arms-length transactions, there are potential concerns as well. One of these, explored by Chalmers, Edelen, and Kadlec (2000), Boudoukh et al. (2000), and others, is the age of thinly traded stocks' last trades, which allows speculators to profit off longer-term shareholders. The concern we explore here is the influence of last-minute trading on last-trade prices, which allows fund managers to move performance between periods with last-minute trading in stocks they already hold, a practice alternately known as "painting the tape," "marking up," or "portfolio pumping." Market regulators regard this practice as illegal. See Sugawara (2000).

If managers mark up to move performance to one period from the next, the result is abnormally high NAVs at period ends. Because of the significance of quarterly and annual performance figures, the ends of calendar quarters, particularly the fourth, are logical targets. We first establish that quarter-end
distortion of NAVs is economically and statistically significant, then study fund returns, portfolio holdings, stock returns, and stock trades to determine whether the marking-up tactic is responsible.

We first establish the abnormal-NAV pattern around quarter ends. Equity fund returns, net of the S&P 500, are abnormally high on the last day of the quarter, especially the fourth, and abnormally low the next day. This effect appears in both our database of daily fund returns and in the Lipper daily fund indices. Magnitudes range from around 50 basis points per year for large-cap funds to well over 200 basis points for small-cap funds. There is little or no effect at month-ends that are not quarter-ends.

We then focus in on the cause of the abnormal returns with a sequence of tests on the cross section of fund and stock returns. We confirm the link between the quarter-end rise and the next-day decline by showing that larger increases precede larger decreases in the cross section, which is not the case for fund returns on other days.

Next, to establish whether mutual fund managers are actively involved, we check if funds with relatively more incentive to mark up do in fact show more marking up. We test two hypotheses. First, funds just below the S&P 500 for the year mark up to beat the index (Zweig (1997)), which we call "benchmark-beating." Second, funds with the best performance mark up to improve their year-end ranking and to profit from the convexity of the flow/performance relation (Ippolito (1992), Sirri and Tufano (1998)) and managerial incentive pay. We denote this as the "leaning-for-the-tape" hypothesis. Despite its intuitive appeal, we reject the benchmark-beating hypothesis.
Evidence of Gaming Behavior in Equity Mutual Funds

As Degeorge, Patel, and Zeckhauser (1999) observe, manipulation of a statistic to beat a benchmark should distort the empirical distribution of the statistic around the benchmark. In the empirical distribution of funds' calendar-year returns, there is no distortion around the S&P return, such as Degeorge, Patel, and Zeckhauser find in corporate-earnings numbers around analysts' expectations, and no distortion around zero return.

However, we find significant evidence supporting the leaning-for-the-tape hypothesis. We find that the year's best-performing funds have the largest abnormal year-end return reversals, and the quarter's best-performing funds have the largest abnormal quarter-end return reversals. Intraday data isolate much of the pattern in a small window of trading time around the quarter-end day's close. Finally, we find that the stocks in the disclosed portfolios of the best-performing funds, controlling for capitalization and recent return, show significantly more price inflation at year-end than do other stocks. We conclude that marking up by mutual funds explains some, if not all, of the price inflation.

The rest of the paper is in five sections. Section I covers the relevant literature on equity and equity-fund returns, and Section II tests for NAV inflation at period-ends. Section III presents evidence from the cross section of fund returns. Section IV tests for marking up on transactions data, and Section V summarizes and concludes.

I. Background and Literature

Two literatures relate to regularities in equity-fund returns: the extensive literature on equity-return seasonality, and the more recent literature on equity-fund-return seasonality, which is qualitatively different in both causes and implications. We cover each briefly, then describe the main hypotheses of this paper in the context of the literature on equity-fund agency issues, particularly those relating to the effect of fund performance on net cash flows.

A. Literature on Equity Return Seasonality

The finance literature has uncovered and analyzed many peculiarities in equity returns in the days around the year-end. Most attention has focused on small-cap issues. Relative to big-cap stocks, small-cap stocks shift significantly upward on each of the five trading days starting with the last of the year (Keim (1983), Roll (1983)) with a persistence across years that defies risk-based explanations. Explanations include tax-loss selling and window dressing. Tax-loss selling implies that retail investors' demand for stocks with poor past performance shifts up after the year-end tax deadline (e.g., Roll (1983), and see Ritter (1988) for evidence that sale proceeds are "parked" for a while). Similarly, window dressing implies that institutional demand for prior poor performers shifts up after year-end portfolio disclosures (e.g., Haugen and Lakonishok (1988), Musto (1997)). Neither explains why the shift starts a day before the year-end.2

2 U.S. mutual funds (with a few exceptions) use trade date + 1 positions in calculating their daily NAV. Therefore, any changes in position that occur through trading on the last day of the year are not reflected in that day's NAV, but rather in the next day's NAV. However, U.S. GAAP requires that semiannual mutual fund reports reflect trade date positions, so these trades would be observed in the financial statements of funds with calendar fiscal years.

Lining up daily index returns from 1963 to 1981 around month ends, Ariel (1987) isolates all of equities' positive average returns in the nine trading days starting with the last of the month. Various explanations are considered and discarded. Sias and Starks (1997) find that greater institutional ownership is associated with relatively better returns in the last four days of the year, and relatively worse in the first four days of the year, and conclude that individual-investor tax-loss selling explains their finding. However, the average returns are virtually
the same over these two periods, so they conclude it is actually the unusually poor performance of low-institutional-ownership stocks at the end of the year, and their good performance at the beginning of the year, that drives their results.

At the intraday frequency, Harris (1989) shows that transaction prices systematically rise at the close, and that this "day-end" anomaly is largest at month-ends (the study did not consider quarter- or year-ends separately) and when the last transaction is very near to the close in time. Harris also finds that the effect is stronger for low-priced firms and that buyers more frequently initiate day-end transactions.

The literature also documents price shifts directly traceable to institutional money management. Harris and Gurel (1986) show that prices on new constituents of the S&P 500 index abnormally increase more than three percent upon announcement, all of which is eliminated within two weeks. Lynch and Mendenhall (1997), studying a period when S&P additions and deletions were announced several trading days in advance, show transaction prices to be temporarily low on deletion days and high on addition days, and Reed (2000) confirms this for Russell 2000 additions and deletions. This effect is understood to be caused by the rebalancing trades of index managers.

B. Literature on Equity-fund Return Seasonality

It might seem redundant to measure the seasonality of equity-fund returns, because we might expect to see only the previously discussed equity-return patterns. But the return on an equity fund is fundamentally different from the return on an equity, and the significance of this difference is only recently being addressed in the literature.

An equity return represents the difference between the prices of two arm's-length transactions. It tells us what an investor would have earned if he bought at the initial price and sold at the later price. We cannot know how much, if any, another investor could have transacted at these prices, as they are specific to the
size and direction, and possibly other circumstances, of those two trades. In many cases, we abstract from transaction times, which is a minor concern for heavily traded stocks but not for the sparsely traded ones. An alternative is to look instead at bid and ask prices, but these, too, are only relevant for trades of a specific size.

An equity-fund return represents the difference between two NAV calculations, where each NAV is calculated from the closing prices of the fund's holdings on their respective primary exchanges. In contrast to an equity price, the NAV is the actual transaction price used for purchases and redemptions of fund shares after the close that day. However, it is unlikely that an investor could purchase or sell all of the fund's equity positions at the closing prices used to calculate NAV. So NAVs directly represent the experience of hypothetical investors, without the guesswork and error, but they can depart from the "equilibrium" value of fund shares whenever equities' closing prices depart from their equilibrium values. When this departure is predictable, investors have a trading rule whose profits derive from the funds' other shareholders.

Some recent studies illustrate the predictability caused by nonsynchronous trading. Nonsynchroneity is most extreme in funds holding non-U.S. equities that price these holdings using closing trades on their home exchange, yet allow fund purchases and redemptions up to the close of U.S.
trading (Goetzmann, Ivkovic, and Rouwenhorst (2000)). But even purely domestic funds allow arbitrage to the extent they hold equities whose last trades tend to precede the market close (e.g., Boudoukh et al. (2000), Chalmers et al. (2000), Greene and Hodges (2000)). These profit opportunities are sporadic and require good estimates of the magnitude of market moves between non-U.S. and U.S. market closes for international funds, or shortly before the U.S. close for funds holding illiquid stocks.

In the popular press, Zweig (1997) demonstrates year-end seasonality in equity funds, and offers an explanation. From 1985 to 1995, the average equity fund outperformed the S&P 500 by 53 bp (bp = basis points, 1/100 of one percent) on the year's last trading day, and underperformed by 37 bp on the next year's first trading day. Small-cap funds shifted more: 103 bp above the S&P, then 60 bp below. This does not match the price shifts of small-cap equity indices, which generally beat the market on both days in those years. The explanation offered is that some fund managers cause the pattern by manipulating year-end valuations to improve their fund's return.

In SEC terminology, "marking the close" is "the practice of attempting to influence the closing price of a stock by executing purchase or sale orders at or near the close of the market" (see Kocherhans (1995)). Zweig (1997) proposes that the fund managers just short of the S&P 500 on the year's penultimate trading day try to pass it by marking the last day's close with buys.
At the least, they could simply increase the probability their holdings close at the ask, but with more aggressive purchases, they could push up both the bid and the ask. Either way, this "marking up" would result in inflated NAVs and thereby inflated returns for holding periods ending on that date, and correspondingly deflated returns over periods beginning then.

C. Two Models of the Marking-up Strategy

We consider two models, not mutually exclusive, of the marking-up strategy. The first is the scenario just described in which managers mark up to beat the S&P, which we label the "benchmark-beating" model. The idea that funds mark up to beat the S&P 500 has intuitive appeal; a fund's success at beating the S&P 500 is a popular topic in the financial press and in prospectuses and annual reports, so it would seem that investors would reward it with new investment. Interestingly, however, they do not. A recent comparative study of the flow/performance relation in the mutual fund and pension fund industries, Del Guercio and Tkac (2000), finds new investment increases with performance in both segments, but beating the S&P 500 brings significant new investment only to pension funds, not to mutual funds. So if a reward mechanism encourages marking up to beat the S&P 500, it is probably something other than expectations of new investment. For example, managers' bonus plans might reward outperforming the S&P 500.

The second model is suggested by the convex relation between past performance and subsequent net fund flows. As Ippolito (1992) and Sirri and Tufano (1998) show, net flows are much more sensitive to the difference between two high returns than between two lower returns, so that fund managers get more benefit from rank improvements if they are near the top of the distribution. This further implies that on the last day of a reference period, fund managers get more benefit from moving performance to that period from the next if their period-to-date performances are near the top of the
distribution. They are in the high-slope region of the relation for this period and not particularly likely to be there for the next (Hendricks, Patel, and Zeckhauser (1993), Brown and Goetzmann (1995), Carhart (1997)), which increases their incentive to move performance from the next reference period. Marking up at period end moves performance from the next period. So in the second model, which we label the "leaning-for-the-tape" model (picture runners at the finish line), funds mark up at the end of a reference period to improve a superior performance.

The primary reference period would intuitively be the calendar year. Returns over calendar years figure disproportionately in the analysis of fund performance in the press, in mutual fund ratings and databases, and in the academic literature. And managers' bonus plans are typically described as calendar-year based. Calendar quarters would be a secondary target, given their prominence in the press (e.g., the Wall Street Journal's quarterly pull-out section), in shareholder reports (e.g., the quarterly mailings to pension-plan participants), and elsewhere.

Since the two models isolate marking-up activity in different sets of funds—average performers (i.e., near the S&P) in one case, top performers in the other—it might seem they are easy to distinguish in the data. But the concentration of mutual fund holdings in certain equities serves to blur this distinction. If funds "herd" to certain equities, as is suggested by Lakonishok, Shleifer, and Vishny (1992), Grinblatt, Titman, and Wermers (1995), and Karceski (1999), and a few determined fund managers mark up some of these securities, then other funds will benefit from the marking effect of these managers. Those responsible may not even manage mutual funds; calendar-year and quarter returns are also important in other institutional-investor categories, such as pension and hedge funds. Pension and hedge fund managers may herd with mutual funds when picking stocks and then
mark the close. So even if only the benchmark-beating hypothesis is correct, funds in general would show at least some of the same return seasonality. And the same applies if only the top-performing hedge funds are marking up. So it is not enough to show that the NAVs of some funds go up then down; we also have to test against these alternative passive scenarios.

II. NAV Inflation at Quarter-ends

In this section, we show that equity funds are overpriced at the close of calendar quarters, in the sense that investors who sell at the quarter-end NAV earn abnormally high returns, and those who buy earn abnormally low returns. This is apparent both in the Lipper mutual fund indices and in our own database of individual equity funds. The Lipper time series are useful in that they are popular references regarding fund performance (e.g., daily in the Wall Street Journal); they extend through July 7, 2000; and they sort funds along dimensions of interest. The results from our own database extend back further and also permit us to refine our tests on the cross section of the funds and of funds' holdings. Since the time period over which this return accrues is only one day, our results are insensitive to our model of equity market equilibrium.

A. Tests Using Lipper Mutual Fund Indices

Lipper produces many equity-fund indices. We use the nine "style" indices that constitute a three-by-three sort of the equities concentrated in {Small-cap, Mid-cap, Large-cap} by {Value, Core, Growth}, and are available daily from July 13, 1992, through the present. These indices allow us to relate our results on NAV inflation to the well-documented (e.g., Fama and French (1992)) variation of average equity returns with size and book-to-market value. From the return on the Lipper style indices, we subtract the return on the Lipper index of S&P 500 funds3 to obtain excess returns.

If NAVs are inflated at quarter- and year-ends, we should observe abnormally high returns on the last day of each quarter and year, and abnormally low returns on the
first day. Let R_i,t denote the daily excess return of style index i on day t, for t from July 14, 1992 (the first return we can calculate), to July 7, 2000 (the last date when we downloaded Lipper data), and run the following OLS indicator-variable regression:

R_i,t = b_i,0 + b_i,1 YEND_t + b_i,2 YBEG_t + b_i,3 QEND_t + b_i,4 QBEG_t + b_i,5 MEND_t + b_i,6 MBEG_t + e_i,t    (1)

where YEND_t takes the value of one when t is the last day of a year, and zero otherwise. Similarly, QEND_t and MEND_t are one on the last day of a calendar quarter that is not a year end, and the last day of a month that is not a quarter end, respectively. The variables YBEG_t, QBEG_t, and MBEG_t are defined analogously, except that they are for the first day of the period. We present the fitted coefficients from these nine regressions in Panels A (YEND and YBEG), B (QEND and QBEG), and C (MEND and MBEG) of Table I.

3 The Lipper index values, and therefore returns, are based on the total returns of their constituent funds, where total returns are calculated by reinvesting dividends on their ex-dates.
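Regression (1) is plain OLS on mutually exclusive 1/0 period-end indicators. A minimal sketch of the specification on synthetic daily returns (the data, calendar, and effect sizes here are hypothetical stand-ins, not the Lipper series):

```python
# Sketch of the indicator-variable regression (1) on simulated daily excess
# returns (basis points), with a stylized 21-trading-day month.
import numpy as np

rng = np.random.default_rng(0)
days_per_month, months = 21, 96          # ~8 years of daily data
T = days_per_month * months
day_in_month = np.arange(T) % days_per_month
month_idx = np.arange(T) // days_per_month

is_last = day_in_month == days_per_month - 1   # last trading day of a month
is_q = month_idx % 3 == 2                      # month closes a calendar quarter
is_y = month_idx % 12 == 11                    # month closes a year

# Mutually exclusive end-of-period dummies, as defined in the text
YEND = (is_last & is_y).astype(float)
QEND = (is_last & is_q & ~is_y).astype(float)
MEND = (is_last & ~is_q).astype(float)
# The first day of a period immediately follows the corresponding last day
YBEG = np.zeros(T); YBEG[1:] = YEND[:-1]
QBEG = np.zeros(T); QBEG[1:] = QEND[:-1]
MBEG = np.zeros(T); MBEG[1:] = MEND[:-1]

# Hypothetical true effects, loosely in the spirit of the small-cap rows
b_true = np.array([0.0, 150.0, -80.0, 70.0, -50.0, 20.0, -5.0])
X = np.column_stack([np.ones(T), YEND, YBEG, QEND, QBEG, MEND, MBEG])
r = X @ b_true + rng.normal(0.0, 30.0, T)      # noisy daily excess returns

b_hat, *_ = np.linalg.lstsq(X, r, rcond=None)  # OLS fit of regression (1)
print(np.round(b_hat, 1))
```

With the planted effects, the fitted coefficients land near the true values, illustrating how the dummy coefficients read directly as average last-day and first-day abnormal returns.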
The results indicate a strong two-day return reversal pattern across month-end, quarter-end, and year-end periods, especially for small-cap and growth funds. The results are strongest for quarter- and year-ends: Of the 36 coefficients in Panels A and B, all but one are in the predicted direction, and all but four are statistically significant at the 10 percent level. In addition, the magnitudes decrease strongly in market capitalization, and increase moderately from value to growth. Finally, the evidence in Panel C around non-quarter-end month ends is weak. There is a small and generally statistically significant increase on the last day of the month, but almost nothing the next day.

To test whether the reversal pattern is significantly more intense at quarter-ends than at other month-ends, we rerun the nine regressions with the variables regrouped so that the second and third coefficients pick the marginal effect of being a quarter-end (including year-end) in addition to being a month-end:

R_i,t = b_i,0 + b_i,1 (YEND_t + QEND_t) + b_i,2 (YBEG_t + QBEG_t) + b_i,3 (YEND_t + QEND_t + MEND_t) + b_i,4 (YBEG_t + QBEG_t + MBEG_t) + e_i,t    (2)

The results, in Panel D, again show widespread significance. All but one are in the right direction, and all but three are statistically significant at the five percent rejection level.

The magnitudes of the abnormal returns are quite large, especially considering they accrue over just a few days in a year. In small-cap growth funds, the quarter-ending positive abnormal returns total 435 basis points per year, and the quarter-beginning negative returns sum to -345 basis points. For our purposes, it is clear that quarter-end prices are inflated in that selling at those prices delivers economic profits, and there is little of this inflation at other month ends. Further, abnormal returns of this size are unlikely to be compensation for risk.

B. Tests Using Daily Individual Mutual Fund Returns

To further analyze period-end abnormal returns, we construct a database of daily returns on 2,829 funds using daily
price, dividend, and dividend reinvestment NAV data from Micropal. Our database consists of diversified open-end equity mutual funds in the aggressive growth, growth and income, and long-term growth categories as defined by Carhart (1997), and it runs from January 2, 1985, to August 29, 1997. There is some survivor bias in the early years of Micropal data.4 We calculate total-return time series (reinvesting dividends on ex-dates) and also create four equal-weighted fund indices: one for each of the fund categories above, and one for all funds in our sample. For each index, we define its excess return as its own return minus that of the S&P 500 index (which does not include dividends; we do not use the Lipper index of S&P 500 funds as it is not available daily for much of this period). We run the indicator-variable regression (1), described above, and report the results in Panel A of Table II.

Table I
Excess Returns, in Basis Points, of the Lipper Mutual Fund Indices around Period-ends

Nine of the Lipper mutual fund total-return indices are a three by three sort of equity funds, {Small-cap, Mid-cap, Large-cap} by {Value, Core, Growth}. There is also an index of S&P 500 funds. Daily index levels begin July 13, 1992, and run through July 7, 2000, a total of 2,019 trading days. For each of the nine indices, we calculate its daily return net of the S&P-fund index and then for Panels A, B, and C we regress this excess return on six 1/0 indicator variables: YEND (last trading day of the year), YBEG (first of the year), QEND (last of a calendar quarter other than the fourth), QBEG (first of a calendar quarter other than the first), MEND (last of a month but not the last of a quarter), and MBEG (first of a month but not the first of a quarter). The coefficients are arranged into panels: YEND/YBEG in Panel A, QEND/QBEG in Panel B, and MEND/MBEG in Panel C. For Panel D, we use only four 1/0 variables: YEND+QEND, YBEG+QBEG, YEND+QEND+MEND, and YBEG+QBEG+MBEG, and we report the coefficients on the first two 1/0 variables analogously to the other panels. Results are in basis points. Each of the 18 time-series regressions has 2,018 observations. For Panels A, B, and C the model is

X_t = b_0 + b_1 YEND_t + b_2 YBEG_t + b_3 QEND_t + b_4 QBEG_t + b_5 MEND_t + b_6 MBEG_t + e_t.

For Panel D, the model is

X_t = b_0 + b_1 (YEND_t + QEND_t) + b_2 (YBEG_t + QBEG_t) + b_3 (YEND_t + QEND_t + MEND_t) + b_4 (YBEG_t + QBEG_t + MBEG_t) + e_t.

            Small-cap       Mid-cap        Large-cap

Panel A: Turn of the Year, YEND/YBEG Coefficients
Value       141**/-30       120**/-34*     25**/-17**
Core        153**/-53**     155**/-73**    30**/-20**
Growth      174**/-96**     157**/-78**    37**/-33**

Panel B: Turn of Calendar Quarters Other than Fourth, QEND/QBEG Coefficients
Value        59**/-33**      31**/-14       4/5
Core         71**/-52**      60**/-55**     8**/-5*
Growth       87**/-83**      69**/-82**    15*/-17**

Panel C: Turn of Months Other than Quarter-ends, MEND/MBEG Coefficients
Value        25**/-9         10**/12        6**/-2
Core         21**/-2         25**/-1        4**/-1
Growth       28**/6          26**/6         4/4

Panel D: Quarters versus Other Months, YEND+QEND/YBEG+QBEG Coefficients
Value        55**/-24*       43**/-31**     4/2
Core         70**/-50**      59**/-59**     9**/-8**
Growth       81**/-92**      65**/-87**    17**/-25**

* Significantly different from zero at 10 percent rejection level.
** Significantly different from zero at 5 percent rejection level.

These results mirror those from Lipper. Fund prices are significantly inflated at quarter ends, especially year ends, and there is little or no inflation at other month ends. While these tests show that the mean return is higher on quarter ends, it is also important to ask if this inflation is widespread across funds. To address this, we estimate regression (1) again, but replace the dependent variable with the percentage of funds that outperformed the S&P on that day. We show these results in Panel B of Table II.
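The Panel B statistic, the share of funds beating the benchmark on a given day, can be illustrated on a synthetic cross section of fund returns (fund count, dates, and effect sizes below are hypothetical):

```python
# Fraction of funds beating the benchmark each day, with a planted quarter-end
# NAV inflation and next-day reversal. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_funds, n_days = 500, 250
bench = rng.normal(0.0, 100.0, n_days)               # benchmark daily return, bp
excess = rng.normal(0.0, 30.0, (n_days, n_funds))    # fund returns net of benchmark

q_end, q_beg = 62, 63                                # a quarter-end day and the next day
excess[q_end] += 40.0                                # inflated quarter-end NAVs...
excess[q_beg] -= 40.0                                # ...deflate the next day's returns

funds = bench[:, None] + excess
beat = (funds > bench[:, None]).mean(axis=1)         # share beating the S&P each day

print(round(float(beat[q_end]), 2), round(float(beat[q_beg]), 2))
```

On ordinary days roughly half the funds beat the benchmark, while the planted 40 bp shift pushes the quarter-end share far above one half and the next-day share far below it, the same qualitative signature as Panel B.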
We find 80 percent of funds beating the S&P on the last day of the fourth quarter, compared to 37 percent beating the S&P the next day. At the turn of other quarters, 62 percent beat the S&P on the last day, and 40 percent the next. In the fund categories, we see the strongest results among aggressive growth funds—91 percent and 34 percent around the year-end, 70 percent and 34 percent around other quarter-ends—and the weakest among growth and income funds. Since growth and income is closest to the value categorization of Lipper, the pattern across fund categories matches that of Table I.

Tables I and II show that equity funds are significantly overvalued at the ends of quarters, especially the fourth, compared to just before and after. We see this in the Lipper indices, in our own database, and in the fraction of funds beating the S&P 500. The inflation cuts across styles, but it is strongest in funds with a small-cap or growth orientation. It is worth noting that this is not a projection of any previously reported regularity in equity returns. The quarter-end results are completely novel, and the year-end results are not indicated in the extensive year-end literature.5 Further, as discussed above, there are no market microstructure issues surrounding mutual fund NAVs like there are with individual stock prices, making the results all the more meaningful.

4 For example, zero of the 463 funds with Micropal data for some of 1985 die in 1985, whereas one of the 493 similarly defined funds in the CRSP monthly mutual fund database with data for some of 1985 dies in 1985. The analogous numbers for 1990 are 29 out of 807 in Micropal and 8 of 700 in CRSP, and for 1995 it is 66 of 2,063 in Micropal and 67 of 1,979 in CRSP.

5 The closest would probably be Table III of Sias and Starks (1997), which addresses a different frequency (annual), a different category of institutions (all of them), and finds very different numbers. Sias and Starks also conclude that most of the turn-of-the-year effect is due to individual investor trading, whereas we find strong evidence of turn-of-the-year effects due to specific institutions (mutual funds) trading.
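The regrouped indicators of regression (2) can also be checked numerically: because every quarter-end day switches on both summed dummies, the coefficient on (YEND+QEND) measures the quarter-end effect in excess of an ordinary month-end. A sketch on synthetic data (all effect sizes hypothetical):

```python
# Numerical check of the regrouping in regression (2): the first summed dummy
# picks up the *marginal* quarter-end effect over a plain month-end.
import numpy as np

rng = np.random.default_rng(1)
days, months = 21, 96
T = days * months
d = np.arange(T) % days
m = np.arange(T) // days
last = d == days - 1                     # last trading day of each month
q = m % 3 == 2                           # months that close a quarter

# Truth: month-ends add 20 bp, quarter-ends a further 60 bp, with drops next day
r = rng.normal(0.0, 25.0, T)
r[last] += 20.0
r[last & q] += 60.0
beg = np.zeros(T, bool); beg[1:] = last[:-1]
begq = np.zeros(T, bool); begq[1:] = (last & q)[:-1]
r[beg] -= 15.0
r[begq] -= 45.0

END_Q = (last & q).astype(float)         # YEND + QEND
BEG_Q = begq.astype(float)               # YBEG + QBEG
END_ALL = last.astype(float)             # YEND + QEND + MEND
BEG_ALL = beg.astype(float)              # YBEG + QBEG + MBEG

X = np.column_stack([np.ones(T), END_Q, BEG_Q, END_ALL, BEG_ALL])
b, *_ = np.linalg.lstsq(X, r, rcond=None)
print(np.round(b, 1))                    # b[1] near +60, b[3] near +20
```

The fit recovers the month-end baseline on the summed all-period dummy and the extra quarter-end inflation on the quarter dummy, which is the comparison Panel D reports.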

Social Work Practice (English Only)


THE CONTEXT OF PRACTICE

THE SOCIAL WORKER

Mastery of social work practice involves the integration of the knowledge and value base of the profession and a set of core interviewing skills with the "personal self" of the social worker. The behavior of the social worker in the social work interview represents an individual social worker's unique expression of this combination of factors. It is difficult to imagine how effective service could be offered in the absence of social workers' competence in using interviewing skills. It is through our interpersonal actions, the words we use, and the attitudes and feelings we convey verbally and nonverbally that we may achieve whatever goals social workers and clients set for their work together. Hence interviewing skills can be seen as the primary tool of practice, and social workers need to know how to use them effectively.

English Paper: Theory Analysis

Diversified development
The limitations and challenges of Theory
Integration of Multiple Perspectives
Theory development often requires integration of multiple perspectives, which can be challenging due to the complexity and diversity of the subject matter
Gradually improving
In the Middle Ages, scholastic philosophers developed theory further by combining Aristotle's thought with Christian theology. During the Renaissance and the Enlightenment, theory began to break free from the constraints of theology and to develop in a more rational and scientific direction.
Research objective: This study aims to conduct an in-depth analysis of the theory and practice of English paper writing, explore effective strategies and techniques for English paper writing, and improve the quality and level of English papers.

Expounding a Phenomenon: Doctoral English Essay Template


Expounding the Phenomenon

Expounding a phenomenon encompasses an exhaustive exploration of its various facets, encompassing its genesis, underlying mechanisms, and far-reaching implications. To accomplish this endeavor effectively, a systematic approach is paramount, guided by a precise outline that ensures both clarity and coherence.

I. Introduction

A compelling introduction serves to captivate the reader's attention and establish the significance of the phenomenon under investigation. It should succinctly define the phenomenon, emphasizing its unique characteristics and relevance to a broader context. Furthermore, the introduction should provide a concise overview of the paper's structure, outlining the key arguments and evidence that will be presented.

II. Theoretical Framework

The theoretical framework section provides a comprehensive analysis of the phenomenon through the lens of established theories and concepts. It draws upon relevant literature to construct a coherent explanation of the phenomenon's origins, mechanisms, and dynamics. This section should demonstrate a thorough understanding of the theoretical underpinnings of the phenomenon and their application to the specific context under study.

III. Empirical Evidence

Empirical evidence serves as the backbone of any scientific investigation. This section presents a comprehensive analysis of data collected through various research methods, such as surveys, experiments, interviews, or observations. The data should be presented in a clear and concise manner, using tables, graphs, or other visual aids as necessary. The analysis should demonstrate a systematic approach to data interpretation, highlighting patterns, trends, and relationships that support the theoretical framework.

IV. Discussion and Implications

The discussion section provides an in-depth interpretation of the findings, linking them back to the theoretical framework and the broader context. It explores the significance of the results, considering their implications for theory, policy, and practice. This section should also address any limitations or weaknesses of the study, as well as suggest directions for future research.

V. Conclusion

The conclusion provides a concise summary of the main arguments and findings of the paper. It reiterates the significance of the phenomenon, emphasizing its implications and the broader contributions it makes to the field of study. The conclusion should leave the reader with a clear understanding of the phenomenon's multifaceted nature and its enduring relevance.

In conclusion, expounding a phenomenon requires a systematic and rigorous approach that encompasses a thorough theoretical framework, empirical evidence, discussion, and a comprehensive conclusion. By adhering to this framework, researchers can effectively unravel the intricacies of complex phenomena, advancing our understanding and paving the way for further exploration and discovery.

Experimental Methods and Interpreting Studies (English)


well as control group.
A single-group experiment, also known as a continuous experiment, is an experimental method that tests hypotheses by comparing the pretest and posttest results of a single experimental object at different times.
(4) According to whether the experimenters and the subjects know about the experimental stimulus, experimental methods can be divided into single-blind and double-blind experiments. In a single-blind experiment, the subjects do not know that they are taking part in an experiment, and the experimental stimulus and test are administered by the experimenters; at present, most experiments are of this kind. In a double-blind experiment, neither the subjects nor the experimenters know that the experiment is being carried out, and a third party administers the experimental stimulus and test.

Where Does the Time Go? (English Essay)


Title: Where Does the Time Go?

Time, an intangible yet immensely significant aspect of our lives, often leaves us pondering its elusive nature. Where does it go? This question has intrigued philosophers, scientists, and ordinary individuals alike for centuries. In this essay, we will delve into the various dimensions of time and explore potential answers to this perennial query.

Firstly, let us consider the scientific perspective on time. According to the theory of relativity proposed by Albert Einstein, time is not an absolute quantity but rather a relative one. It is intertwined with space to form the fabric of spacetime. In this framework, time can appear to pass differently for observers in different reference frames, depending on their relative motion and gravitational fields. This phenomenon, known as time dilation, suggests that time is not a fixed entity but rather a dynamic and malleable one.

Furthermore, from a psychological standpoint, our perception of time is highly subjective. Have you ever noticed how time seems to fly when you're having fun, yet drags on when you're bored or anxious? This phenomenon, known as time perception, highlights the role of our emotions and attention in shaping our experience of time. Moreover, as we age, our perception of time often changes, with days, months, and years seeming to pass more swiftly as we grow older. This subjective aspect of time adds another layer of complexity to the question of where it goes.

Moreover, the relentless march of time is evident in the natural world. Seasons change, tides ebb and flow, and celestial bodies orbit ceaselessly through the cosmos. These cyclical patterns remind us of the inherent rhythm of time and its inexorable progression. Yet, despite our best efforts to measure and quantify it, time remains elusive, slipping through our fingers like grains of sand.

In the realm of philosophy, time has been a subject of contemplation and speculation for millennia. From ancient civilizations to modern thinkers, philosophers have grappled with questions about the nature of time and its relationship to existence. Does time have a beginning and an end, or is it eternal and infinite? Is it linear or cyclical, deterministic or probabilistic? These profound inquiries reflect our innate curiosity about the fundamental nature of reality.

Moreover, the concept of time plays a central role in human culture and society. We divide our lives into past, present, and future, anchoring our identities and aspirations in this temporal framework. Our calendars, clocks, and schedules dictate the rhythm of our daily lives, while cultural rituals and traditions mark the passage of time and commemorate significant events. Yet, amidst the hustle and bustle of modern life, it is all too easy to lose sight of the deeper meaning and significance of time.

In conclusion, the question "Where does the time go?" encompasses a myriad of perspectives, from the scientific and psychological to the philosophical and cultural. Time is a multifaceted phenomenon that defies easy explanation, yet it remains an integral part of the human experience. As we navigate the complexities of existence, let us cherish each moment and strive to make the most of the time we have. For in the end, the true value of time lies not in its quantity but in how we choose to spend it.

愿望不变英语作文模版

愿望不变英语作文模版

When writing an essay about unchanging aspirations in English,you can follow a structured template that includes an introduction,body paragraphs,and a conclusion. Here is a detailed outline and template to help you craft your essay:Title:The Unchanging AspirationIntroduction:Hook:Start with a thoughtprovoking quote or a rhetorical question that relates to the concept of unchanging aspirations.Thesis statement:Clearly state the main idea of your essay,which is that certain aspirations remain constant despite the changing circumstances of life.Body Paragraph1:Topic sentence:Introduce the first aspect of unchanging aspirations,such as the pursuit of happiness or fulfillment.Explanation:Elaborate on why this aspiration remains constant and how it influences peoples decisions and actions.Example:Provide a reallife example or a hypothetical scenario that illustrates the point.Body Paragraph2:Topic sentence:Discuss the second aspect,which could be the desire for personal growth or selfimprovement.Explanation:Explain how this aspiration is fundamental to human nature and how it drives individuals to learn and adapt.Example:Cite an example from history or literature that demonstrates the enduring nature of this aspiration.Body Paragraph3:Topic sentence:Explore the theme of unchanging aspirations in the context of societal or cultural values.Explanation:Discuss how certain aspirations are deeply rooted in cultural beliefs and are passed down through generations.Example:Use a cultural or historical event to show how these aspirations have shaped societies and cultures.Body Paragraph4:Topic sentence:Address the challenges and obstacles that may arise in the pursuit of unchanging aspirations.Explanation:Discuss how external factors such as societal pressures,economic conditions,or personal setbacks can affect the pursuit of these aspirations.Solution:Offer strategies or approaches that individuals can use to overcome these challenges and stay true to their 
aspirations.Conclusion:Restate thesis:Reiterate the main idea of your essay,emphasizing the enduring nature of certain aspirations.Summary:Briefly summarize the key points discussed in the body paragraphs. Closing thought:End with a powerful statement or a call to action that encourages readers to reflect on their own aspirations and the reasons behind their persistence.Example Introduction:The human spirit is driven by a set of aspirations that seem to transcend time and circumstance.From the pursuit of happiness to the quest for personal growth,these unchanging aspirations form the bedrock of our existence and guide us through lifes journey.Example Body Paragraph:One such aspiration is the relentless pursuit of happiness.It is a universal goal that resonates with every individual,regardless of their background or situation.Whether its through the pursuit of a fulfilling career,the establishment of meaningful relationships,or the enjoyment of lifes simple pleasures,the quest for happiness remains a constant in our lives.For instance,the story of insert reallife example or hypothetical scenario exemplifies how the pursuit of happiness can lead to significant life changes and personal growth.Example Conclusion:In conclusion,the unchanging aspirations that define the human experience are not merely abstract concepts they are the very forces that propel us forward.Despite the inevitable challenges and obstacles we encounter,it is these aspirations that give our lives meaning and purpose.Let us embrace our unchanging aspirations,for they are the compass that guides us through the complexities of life and towards a future filled with hope and fulfillment.Remember to use a variety of sentence structures and vocabulary to make your essay engaging and to avoid repetition.Additionally,ensure that your essay is wellorganized and that each paragraph flows logically into the next.。

In the Proceedings of the Seventh International Conference on Machine Learning, Austin, TX, 1990.

Explanations of Empirically Derived Reactive Plans

Diana F. Gordon (gordon@)
John J. Grefenstette (gref@)
Navy Center for Applied Research in Artificial Intelligence
Naval Research Laboratory, Code 5514
Washington, D.C. 20375-5000

Abstract

Given an adequate simulation model of the task environment and a payoff function that measures the quality of partially successful plans, competition-based heuristics such as genetic algorithms can develop high performance reactive rules for interesting sequential decision tasks. We have previously described an implemented system, called SAMUEL, for learning reactive plans and have shown that the system can successfully learn rules for a laboratory scale tactical problem. In this paper, we describe a method for deriving explanations to justify the success of such empirically derived rule sets. The method consists of inferring plausible subgoals and then explaining how the reactive rules trigger a sequence of actions (i.e., a strategy) to satisfy the subgoals.

1 Introduction

This report is part of an on-going study concerning learning reactive plans for sequential decision tasks given a simulation of the task environment. In particular, we have been investigating techniques that allow a learning system to actively explore alternative behaviors in simulation, and to construct high performance rules from this experience using competition-based methods. Our current research focuses on learning reactive rules for a variety of tactical scenarios. Learning tactical rules is especially difficult if the environment is only partially modeled, contains other independent agents, or permits only limited sensing of important state variables. Such features reduce the utility of traditional projective problem solving (Mitchell, 1983; Minton et al., 1989) and favor the use of reactive control rules that respond to current information and suggest useful actions (Agre and Chapman, 1987; Schoppers, 1987). We have been investigating the usefulness of genetic algorithms and other competition-based heuristics (Grefenstette, 1988) to learn high performance reactive rules in the absence of a strong domain theory. The approach has been implemented in a system called SAMUEL (Grefenstette, 1989). One of the important differences between SAMUEL and many other genetic learning systems is that SAMUEL learns rules expressed in a high level rule language. The use of a symbolic rule language is intended to facilitate the incorporation of more powerful learning methods into the system where appropriate. In this paper, we investigate the use of explanation-based learning methods to explain the success of the empirically learned plans found by the genetic learning system, and to suggest possible improvements.

SAMUEL consists of three major components: a problem specific module, a performance module, and a learning module. The problem specific module consists of the task environment simulation, or world model, and its interfaces. The performance module consists of a competition-based production system that performs matching, conflict resolution and credit assignment. The learning module uses a genetic algorithm to develop high performance reactive plans, each plan expressed as a set of condition-action rules. Each plan is evaluated by testing its performance in controlling the world model through the performance module. Genetic operators, such as crossover and mutation, produce plausible new plans from high performance precursors.

Experiments have shown that SAMUEL learns highly effective reactive plans for laboratory scale tactical problems (Grefenstette, 1989). However, even though the individual rules of a plan can be interpreted, the strategy underlying the plan is often not apparent. We are currently expanding our focus to include the derivation of explanations of SAMUEL's reactive rules. These explanations are expected to clarify the system's performance to system users as well as to generate new reactive rules for SAMUEL. In this paper, we first discuss a simulated environment to which SAMUEL has been successfully applied. The remainder of the paper is devoted to describing our research on the topic of generating explanations of reactive plans.

This work is part of an on-going study of genetic algorithms for learning tactical plans. The current system is detailed in (Grefenstette, Ramsey & Schultz, 1990). An analysis of the credit assignment methods in SAMUEL appears in (Grefenstette, 1988). A study of the effects of sensor noise on SAMUEL appears in (Schultz, Ramsey & Grefenstette, 1990).

2 The Evasive Maneuvers Problem

We have tested SAMUEL initially in the context of a particular task called Evasive Maneuvers (EM), inspired in part by (Erickson and Zytkow, 1988). In the EM simulation, there are two objects of interest, a plane and a missile, which maneuver in a two-dimensional world. The object is to control the turning rate of the plane to avoid being hit by the approaching missile. The missile tracks the motion of the plane and steers toward the plane's anticipated position. The initial speed of the missile is greater than that of the plane, but the missile loses speed as it maneuvers. If the missile speed drops below some threshold, it loses maneuverability and drops out of the sky. It is assumed that the plane is more maneuverable than the missile, that is, the plane has a smaller turning radius.

There exist six sensors that provide information about the current tactical state:

1) last-turn: the current turning rate of the plane. This sensor can assume nine values, ranging from -180 degrees to 180 degrees in 45 degree increments.
2) time: a clock that indicates time since detection of the missile. Assumes integer values between 0 and 19.
3) range: the missile's current distance from the plane. Assumes values from 0 to 1500 in increments of 100.
4) bearing: the direction from the plane to the missile. Assumes integer values from 1 to 12. The bearing is expressed in "clock terminology", in which 12 o'clock denotes dead ahead of the plane, and 6 o'clock denotes directly behind the plane.
5) heading: the missile's direction relative to the plane. Assumes values from 0 to 350 in increments of 10 degrees. A heading of 0 indicates that the missile is aimed directly at the plane's current position, whereas a heading of 180 means the missile is aimed directly away from the plane.
6) speed: the missile's current speed measured relative to the ground. Assumes values from 0 to 1000 in increments of 50.

In addition to the sensors, there is one control variable, namely, the plane's turning-rate. Turning-rate has nine possible values, between -180 and 180 degrees in 45 degree increments. The learning objective is to develop a set of decision rules that map current sensor readings into actions that successfully evade the missile whenever possible. The rule condition contains sensor ranges (which may be cyclic), and the action specifies a setting for the control variable. An example of an actual decision rule learned by SAMUEL is the following:

    RULE 16: IF (and (last-turn [-135, 135]) (time [2, 12])
                     (range [0, 700]) (bearing [2, 6])
                     (heading [0, 30]) (speed [100, 950]))
             THEN (turn 90)
             STRENGTH 949

The EM process is divided into episodes that begin with the missile approaching the plane from a randomly chosen direction and that end when either the plane is hit or the missile velocity falls below a given threshold. The critic module provides numeric feedback at the end of each episode that measures the extent to which the missile has been successfully evaded. In the case of unsuccessful evasion, partial credit is given reflecting the plane's survival time (see (Grefenstette et al., 1990)). Each decision rule is assigned a numeric strength that serves as a prediction of the rule's utility. The system uses incremental credit assignment methods (Grefenstette, 1988) to update the rule strengths based on feedback from the critic received at the end of the episode. Experiments have shown that SAMUEL can learn high-performance rule sets (plans) for this task (Grefenstette, 1989).

As can be seen from the above example, while the rules are individually understandable, the underlying strategy behind the rules is not usually clear from inspection. On the other hand, a person who watches a display of the EM task under the control of the learned rules can usually describe the strategy being followed in conceptual terms, for example:

    Get the missile directly behind the plane, let it get fairly close,
    then make a hard left turn.

Once such a description has been obtained, qualitative reasoning can be applied to explain and justify the strategy. It is expected that explanation-based methods will help to explicate the higher-level strategies being learned, making the results of the empirical learning more easily accepted by human operators and, ultimately, expediting the learning process itself. The remainder of the paper offers initial steps in this direction.

3 Explaining Empirically Derived Rules

Our approach to applying explanation-based techniques to reactive plans can be divided into four phases:

(1) inferring plausible subgoals;
(2) confirming subgoal satisfaction;
(3) creating explanations for reactive plans; and
(4) deriving new rules.

The following sections elaborate our approach to each of the first three phases.

3.1 Inferring Plausible Subgoals

Prior to deriving explanations that SAMUEL's actions are intended to satisfy particular subgoals, the system first attempts to derive plausible subgoals, such as "increase range to missile" or "increase missile deceleration", from a trace of the behavior of the system under the control of the learned rules. A trace covering the actions occurring over a single episode is examined. Traces consist of snapshots of sensor readings followed by the decision rule that has fired. Each snapshot is associated with a time, or state. An example of a trace is shown in Figure 1, where "lturn", "brng", and "hdng" are abbreviations for last turn, bearing, and heading. The action is the turn taken by the plane at this time. In order to simplify the trace shown here, the decision rules do not appear.

    ___________________________________________
    lturn  time  range  brng  hdng  speed  action
    [snapshot rows for times 0 through 12; the numeric
    entries are not legible in this copy]
    ___________________________________________
    Fig. 1. Example execution trace.

A domain theory has been developed for automating subgoal derivation. This part of the domain theory consists of plausible subgoal derivation (PSD) rules such as the following:

    PSD1: IF range(m) > RANGE1
          THEN PLAUSIBLE-SUBGOAL (INCREASING deceleration(m))

    PSD2: IF range(m) < RANGE2
          THEN PLAUSIBLE-SUBGOAL (INCREASING range(m))

where RANGE1 and RANGE2 are user-definable parameters and m represents the missile. The trace is examined to find the first time at which a PSD rule precondition, such as "range(m) > RANGE1", holds. The algorithm for finding plausible subgoals is the following:

PSD ALGORITHM: Find the set of all time intervals in the execution trace of an episode for which the sensor values satisfy the PSD rule condition during that interval. This set, called the trigger set, consists of situations that would plausibly trigger the implementation of a strategy to satisfy the subgoal specified in the PSD rule.

In the example trace above, if RANGE1 were set to 900, then there is one time interval (of length one unit) that satisfies the condition for PSD1. This interval is [0,0] and, therefore, the trigger set is simply {[0,0]}. Since PSD1 is satisfied, its subgoal, namely, "(INCREASING deceleration(m))", is proposed as a candidate subgoal.

Once a plausible subgoal is found, the next task is to determine whether the subgoal has been satisfied. Satisfaction is determined by applying the confirmation procedure described in the next section for time intervals in the trigger set until either the set of intervals is exhausted or the subgoal has been confirmed.

3.2 Confirming Subgoal Satisfaction

Subgoal satisfaction is determined by once again scanning the execution trace. Scanning begins at the time in the trace following a time interval from the trigger set. Subgoal confirmation requires an additional domain theory. In this case, SAMUEL's decision rule language is extended to capture further information from the trace. For example, the system extracts from the trace information about the change in sensor values over time. The speed or range of the missile, for instance, may increase from one state to the next. By scanning the trace over multiple states, the system derives acceleration and range increase information for confirming subgoal satisfaction.

The confirmation of subgoal satisfaction begins when a time interval is chosen from the trigger set. In the current implementation, the user defines a window over which the subgoal satisfaction check is executed. The window begins at a user-defined time that is after the trigger set time interval. Continuing with the example above, suppose the system must confirm that the increasing missile deceleration goal has been achieved over the time window that extends from time 1 to time 3. Then the change in missile speed over this interval is checked to be certain that missile deceleration is increasing.
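This trigger-and-confirm scan can be sketched in a few lines. The sketch below is a minimal illustration, not the system's implementation; the trace representation and the numeric values are hypothetical, chosen only so that PSD1's trigger interval is [0,0] and deceleration increases over the window [1,3] as in the running example.

```python
# Sketch of the PSD trigger scan (Section 3.1) and the windowed
# confirmation check (Section 3.2). Hypothetical data structures.

def trigger_set(trace, condition):
    """Return maximal time intervals (start, end) whose snapshots satisfy condition."""
    intervals, start = [], None
    for t, snap in enumerate(trace):
        if condition(snap):
            start = t if start is None else start
        elif start is not None:
            intervals.append((start, t - 1))
            start = None
    if start is not None:
        intervals.append((start, len(trace) - 1))
    return intervals

def confirm_increasing(trace, key, window):
    """Confirm that the quantity named by key strictly increases over window (t0, t1)."""
    t0, t1 = window
    values = [trace[t][key] for t in range(t0, t1 + 1)]
    return all(a < b for a, b in zip(values, values[1:]))

# Hypothetical four-state trace: range(m) exceeds RANGE1 = 900 only at time 0,
# and missile deceleration rises over the user-defined window [1, 3].
RANGE1 = 900
trace = [{"range": 1000, "decel": 0},
         {"range": 600,  "decel": 50},
         {"range": 500,  "decel": 100},
         {"range": 400,  "decel": 150}]
print(trigger_set(trace, lambda s: s["range"] > RANGE1))  # [(0, 0)]
print(confirm_increasing(trace, "decel", (1, 3)))         # True
```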
The deceleration is increasing from 100 to 150 over this time interval. Therefore, subgoal satisfaction has been confirmed.

Once subgoals have been derived and confirmed, explanations may be generated to justify the observed behavior. The next section describes the process of explanation generation.

3.3 Creating Explanations

After deriving plausible subgoals and confirming that they are satisfied, explanations may be formed which prove that sequences of SAMUEL's decision rules satisfy the subgoals. Explaining failure to satisfy subgoals is presented as future work. Creating justifications for successful subgoal satisfaction requires the development of a domain theory that captures important results of particular actions. We are adapting Forbus's Qualitative Process Theory (Forbus, 1984) for the interpretation of the empirically derived rules similarly to the way this theory is adapted in (Gervasio, 1989). Qualitative Process Theory (QP Theory) expresses common sense notions about qualitative relationships between objects. We are currently using QP Theory to define processes relevant to EM. A process is defined in (Forbus, 1984) as something that acts through time to change the parameters of objects in a situation. Example processes are fluid and heat flow, boiling, and motion. We define an EM process below. The individuals are the objects on which the process acts. The quantity conditions are inequalities regarding the quantities of individuals that can be predicted solely within dynamics. Preconditions are conditions that must hold during the process but which need not be predictable using dynamics. Relations are statements that are true during the process. A process is active whenever its preconditions and quantity conditions hold. The Q+/Q- relations define qualitative proportionalities. (Q+ X, Y) means that parameter X is directly proportional to parameter Y. (Q- X, Y) means that X and Y are inversely proportional.
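Such qualitative proportionalities can be encoded directly as signed influences, with chained influences composing by multiplying signs. The encoding below is a hypothetical illustration, not part of the paper's system.

```python
# Hypothetical encoding of QP Theory proportionalities: (Q+ X, Y) means X
# rises when Y rises; (Q- X, Y) means X moves opposite to Y. The sign of
# change in X is the product of the relation's sign and Y's sign of change.

Q_PLUS, Q_MINUS = +1, -1

def propagate(sign_of_y, relation_sign):
    """Sign of change in X, given the sign of change in Y and the Q relation."""
    return sign_of_y * relation_sign

# With (Q+ deceleration(m), turning-rate(m)): if turning-rate(m) rises (+1),
# deceleration(m) rises:
print(propagate(+1, Q_PLUS))   # 1
# With (Q- turning-rate(m), range(m)): if range(m) falls (-1),
# turning-rate(m) rises:
print(propagate(-1, Q_MINUS))  # 1
```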
    process missile-evasion (p, m)
      Individuals:
        p, a plane
        m, a missile
      Quantity Conditions:
        speed(p) > 0
        speed(m) > 0
      Preconditions:
        range(m) > 0
      Relations:
        (Q+ deceleration(m), turning-rate(m))
        (Q+ turning-rate(m), turning-rate(p))
        (Q- speed(p), turning-rate(p))
        (Q- turning-rate(m), range(m))

The above process description is incomplete and is not entirely accurate. Since we do not intend to engineer a complete and perfect domain theory, our system will eventually possess a capability to diagnose errors in its domain theory.

Once a partial domain theory exists, it is possible to create plausible explanations of the events that occurred during an EM episode. Explanations are derived by creating proofs using the process relations similarly to (Gervasio, 1989). The proof begins with an observable but noncontrollable subgoal and terminates when a change in a controllable parameter has been found that is believed to have caused subgoal satisfaction. The body of the proof consists of QP Theory relational rules, such as those presented above. For example, the following proof explains how the increasing turning rate of the plane eventually causes the missile deceleration to increase.

    (EXPLANATION (INCREASING deceleration(m))
      ((Q+ deceleration(m) turning-rate(m))
       (Q+ turning-rate(m) turning-rate(p))
       (INCREASING turning-rate(p))))

The above proof has terminated with a statement that the plane turning rate is increasing. (The plane turning rate is currently the only controllable parameter.) The increasing turning rate is hypothesized as having initiated a strategy to achieve subgoal satisfaction. The system next verifies (by examining the execution trace) that this behavior has, in fact, occurred. For the above example, this would consist of a check to be certain that the plane turning rate is increasing during the time period that begins during the trigger set time interval and ends at some user-specified time following this interval. In the example trace above, the condition that the turning rate must be increasing would be satisfied if the plane's actions were examined from time 0 to time 1.

The selection of times for checking both subgoal satisfaction and triggering behaviors is currently done by the user. These are important parameters, yet they are difficult to choose. We next describe our plans for future work. These plans include automating the choice of these parameters, as well as other parts of the system.

4 Future Work

There are a few important directions that we plan to pursue. The first direction consists of ordering explanations according to their degree of plausibility. The second direction consists of using the explanations to generate new decision rules for SAMUEL. Third, we plan to automate the generation of system parameters and rules. The fourth future direction consists of diagnosing failures. Finally, we would like to increase the complexity of the EM problem.

Currently, we are running experiments to determine the differences in the degree of plausibility of various explanations. This is being done by generating explanations from multiple episode traces. From our experiences with explanation generation, we have observed that some explanations/subgoals are considered plausible more frequently than others. We plan to use this information about frequency to order the PSD rules in a manner that reflects the plausibility of explanations, e.g., more plausible subgoals are tried first.

The second direction for future research consists of generating new decision rules from the explanations. If a subgoal is satisfied, and an explanation is generated for subgoal satisfaction, then the system can generalize the explanation (perhaps using the explanation-based learning methods of (Mitchell, Keller & Kedar-Cabelli, 1986)) and then use the generalized explanation to generate new decision rules.
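The explanation proofs of Section 3.3, on which such rule generation would build, amount to a backward chain through the Q+/Q- relations from the confirmed subgoal to a controllable parameter. A hypothetical sketch (the relation triples follow the paper's EM process; the search code and parameter names are illustrative, not the system's implementation):

```python
# Sketch of explanation construction: chain Q+/Q- relations from a subgoal
# parameter (here decel_m, for deceleration(m)) until a controllable
# parameter is reached. (Q+ x, y) means x is directly proportional to y.

RELATIONS = [("Q+", "decel_m", "turnrate_m"),
             ("Q+", "turnrate_m", "turnrate_p"),
             ("Q-", "speed_p", "turnrate_p"),
             ("Q-", "turnrate_m", "range_m")]
CONTROLLABLE = {"turnrate_p"}  # the plane turning rate is the only control

def explain(param, seen=()):
    """Return a chain of (sign, x, y) links from param to a controllable
    parameter, or None if no such chain exists."""
    if param in CONTROLLABLE:
        return []
    for sign, x, y in RELATIONS:
        if x == param and y not in seen:
            tail = explain(y, seen + (param,))
            if tail is not None:
                return [(sign, x, y)] + tail
    return None

# Reproduces the paper's proof for (INCREASING deceleration(m)):
print(explain("decel_m"))
# [('Q+', 'decel_m', 'turnrate_m'), ('Q+', 'turnrate_m', 'turnrate_p')]
```

The chain terminates at turnrate_p, mirroring the paper's proof ending with (INCREASING turning-rate(p)); a rule generator could then specialize this chain into new condition-action rules.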
Given a successful explanation, SAMUEL's performance can benefit from the creation of new decision rules that are expected to achieve the same results as the rules from which the explanation is formed. The process of generating decision rules from generalized explanations is one of rule specialization. We are currently considering using ideas from MARVIN (Sammut and Banerji, 1986) for designing the rule specialization process. Once new decision rules have been created, they can be fed back into SAMUEL's performance module to augment the existing rule sets. These modified rule sets may then be empirically evaluated using the EM simulator.

The third direction planned for our research is the automation of certain portions of the system that are currently provided by the user. For example, system parameters, such as the user-input window size for subgoal confirmation, might be empirically determined. Furthermore, the domain theory might also be derived empirically. For instance, the Q+/- relationships in the domain theory for explanations could be extracted from the execution traces.

Although we have been able to generate explanations for successful subgoal satisfaction, a ripe area for future research is the addition of the ability to handle failures. If the system derives an explanation that the reactive rules are intended to achieve a particular subgoal, but the trace does not verify that the subgoal has been satisfied, then there exist four possible cases:

(1) The chosen explanation is incorrect, but the domain theory is not faulty
(2) The plausible subgoal that is inferred is not actually the subgoal that the system is trying to achieve
(3) The reactive rules are intended to achieve a subgoal, but the system has encountered some unexpected interference
(4) The domain theory is incorrect or incomplete

Although the generation of alternative explanations would be a relatively simple solution for the first case, the other cases would require more sophisticated error diagnosis.

A final direction for future research is to increase the complexity of the EM problem. For example, the only controllable parameter currently implemented is the plane turning rate. More controllable parameters might be added. Furthermore, the problem difficulty would be greatly increased if the number of missiles were increased. Ultimately, we would like SAMUEL to be able to handle realistic problems.

5 Summary

Progress in generating and using explanations of reactive plans for SAMUEL is expected to provide an important step toward reducing the burden placed on the system's empirical learning mechanisms. The eventual goal of our research is to use these explanations to create high performance reactive plans.

References

Agre, P. and Chapman, D. (1987). Pengi: An implementation of a theory of activity. Proceedings of the Sixth National Conference on Artificial Intelligence.

Erickson, M. and Zytkow, J. (1988). Utilizing experience for improving the tactical manager. Proceedings of the Fifth International Conference on Machine Learning. Ann Arbor, MI.

Forbus, K. (1984). Qualitative process theory. Artificial Intelligence, 24(1-3). North-Holland Publishing Company, Amsterdam, The Netherlands.

Gervasio, M. and DeJong, G. (1989). Explanation-based learning of reactive operators. Proceedings of the Sixth International Workshop on Machine Learning. Ithaca, NY. Morgan Kaufmann Publishers, Inc.

Grefenstette, J. (1988). Credit assignment in rule discovery systems based on genetic algorithms. Machine Learning, 3(2/3). Kluwer Academic Publishers, Hingham, MA.

Grefenstette, J. (1989). A system for learning control strategies with genetic algorithms. Proceedings of the Third International Conference on Genetic Algorithms. Fairfax, VA: Morgan Kaufmann.

Grefenstette, J., Ramsey, C. and Schultz, A. (1990). Learning sequential decision rules using simulation models and competition. To appear in Machine Learning Journal. Kluwer Academic Publishers, Hingham, MA.

Minton, S., Carbonell, J., Knoblock, C., Kuokka, D., Etzioni, O., and Gil, Y. (1989). Explanation-based learning: A problem-solving perspective. Carnegie-Mellon University Technical Report Number CMU-CS-89-103.

Mitchell, T. (1983). Learning by experimentation: Acquiring and refining problem-solving heuristics. In R. Michalski, J. Carbonell, and T. Mitchell (Eds.), Machine Learning: An Artificial Intelligence Approach (Vol. 1). Tioga Publishing Co., Palo Alto, CA.

Mitchell, T., Keller, R. and Kedar-Cabelli, S. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1(1). Kluwer Academic Publishers, Hingham, MA.

Sammut, C. and Banerji, R. (1986). Learning concepts by asking questions. In R. Michalski, J. Carbonell, and T. Mitchell (Eds.), Machine Learning: An Artificial Intelligence Approach (Vol. 2). Morgan Kaufmann Publishers, Los Altos, CA.

Schoppers, M. (1987). Universal plans for reactive robots in unpredictable environments. Proceedings of the Tenth International Joint Conference on Artificial Intelligence.

Schultz, A., Ramsey, C. and Grefenstette, J. (1990). Simulation-assisted learning by competition: Effects of noise differences between training model and target environment. In Proceedings of the Seventh International Machine Learning Conference. Austin, TX: Morgan Kaufmann.

相关文档
最新文档