Chinese-English Literature Translation: Adapted for a Predictive Wear Calculation Model
Fluid Mechanics: Chinese-English Parallel Foreign Literature Translation
Chinese-English parallel foreign translation (the document contains the English original and the Chinese translation).

14 Selection of Materials Exposed to High Flow Velocities
Degradation of materials at high flow velocities, and component failures due to fatigue, corrosion, abrasion, cavitation and erosion, time and again cause costly problems for pump operators. In most cases this could be avoided by careful selection of the material properties. Wrong material selection usually has one of two causes: (1) the corrosive properties of the liquid pumped were not clearly specified (or were unknown), or (2) the cheapest material was used for cost reasons (competitive pressure).
The severity of fatigue, abrasion, cavitation attack and erosion-corrosion of pump components increases exponentially with the flow velocity, but the application limits of the various materials are not easy to determine; the numerical sketch after the list of criteria below illustrates this velocity dependence.
They depend on the flow velocity as well as on the corrosiveness of the pumped medium and on the concentration of entrained solid particles, if any. In addition, the alternating stresses induced by pressure pulsations and rotor/stator interaction forces (RSI) can hardly be quantified. This is why the thicknesses of blades, shrouds and vanes are usually selected from experience and engineering judgment. The present discussion of materials focuses on the interaction between flow phenomena and material behavior. To this end, some background information on corrosion and on frequently used materials is considered necessary; a comprehensive guide to material selection, however, is clearly beyond the scope of this text. In this chapter, methods are developed to foster a systematic and consistent approach to selecting materials and to analyzing material problems.
Four criteria are relevant for selecting materials exposed to high flow velocities:
1. Fatigue strength (usually in a corrosive environment), because high velocities in the pump go along with high pressure pulsations, rotor/stator interaction forces and alternating stresses.
2. Corrosion induced by high velocities, in particular erosion-corrosion.
3. Cavitation erosion, which has been discussed extensively in an earlier chapter.
4. Abrasion, i.e. metal loss caused by solid particles entrained in the fluid.
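As a rough numerical illustration of the velocity dependence noted above, the following sketch assumes a simple power law for the metal-loss rate; the exponent is a placeholder chosen for illustration, not a value taken from this text.

```python
# Illustrative only: relative metal loss as a power law of flow velocity.
# The exponent is an assumed placeholder, not a value from this chapter.

def relative_metal_loss(w: float, w_ref: float, exponent: float = 3.0) -> float:
    """Metal loss at velocity w relative to the loss at reference velocity w_ref,
    assuming loss ~ velocity**exponent."""
    return (w / w_ref) ** exponent

# If metal loss grows roughly with the third power of velocity (assumed),
# raising the flow velocity from 30 m/s to 45 m/s multiplies the loss by about 3.4:
print(relative_metal_loss(45.0, 30.0))
```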
Abrasion and cavitation are primarily mechanical wear mechanisms, which can at times be reinforced by corrosion. In contrast, corrosion is a chemical reaction between the metal, the pumped medium, oxygen and chemical agents. This reaction is always present, even if it is scarcely perceptible. Finally, the impeller tip speed can be limited by hydraulic forces or by vibration and noise.
14.1 Fatigue fractures of impellers and diffusers
Fatigue fractures of impeller blades, shrouds or diffuser vanes can be avoided by applying the state of the art; they are rarely observed. With highly loaded pumps, this type of damage is still encountered at times when basic design rules are disregarded or insufficient care is applied in production. The main reasons for vane or shroud fractures include:
• Too small a distance (gap B, or the ratio d3* = d3/d2) between the impeller blades and the diffuser vanes (Table 10.2).
Intellectual Property Thesis: Chinese-English Parallel Foreign Translation Literature
Chinese-English Parallel Foreign Translation Literature

1 Translation of the foreign reference: Well-Known Trademarks, Dilution and Anti-Dilution

I. Overview of well-known trademarks
A well-known trademark is a trademark that has been used for a long time, enjoys a high reputation in the market, is familiar to the relevant public and has been recognized as well known through certain procedures. Since the Paris Convention first introduced the concept of well-known trademarks, special legislative protection for them has become a worldwide trend. The Paris Convention provides that marks recognized as well known in a member state receive two protections: first, registration by others is refused or cancelled; second, use by others of identical or similar signs is prohibited. TRIPS goes further and provides: (1) the special protection of the Paris Convention is extended to service marks; (2) the scope of protection is extended to prohibit the use of signs identical or similar to a well-known trademark even on goods or services that are not similar; and (3) a simple principle is laid down for determining whether a trademark is well known. In national legislative practice, the standards for recognizing well-known trademarks vary; recognition is usually based on factors such as public awareness of the mark in the relevant field, the sales volume and territorial scope of the goods bearing the mark, and national interests. From the perspective of the international treaties protecting well-known trademarks, the recognition of well-known trademarks and their protection are closely linked.

II. Modes of protecting well-known trademarks
There are two main models of trademark protection: relative protection and absolute protection. The former prohibits others from registering or using a trademark identical or similar to a well-known trademark in the same or a similar industry, i.e. on the same or similar goods, while permitting use on dissimilar goods; this is the approach of the Paris Convention. The latter prohibits such registration and use in any industry, including industries whose goods differ from those of the well-known mark; this expanded, absolute protection is the approach taken by the TRIPS Agreement.

In a simple economic form, the goods designated by a trademark are of a single kind, and the link between the mark and the specific goods is close. As valuable well-known trademarks come to be used on more and more different types of goods, whose properties may be totally different, the association between the mark and a particular group of goods weakens, and the mark becomes relatively detached from particular producers. If cross-category protection is not given to well-known trademarks and others are allowed to register them, then even where the goods differ obviously, the public will still establish a link between the new goods and the reputable well-known mark, believing that the new goods may come from the owner of the well-known trademark, or that some legal, organizational or business association exists between them, and consumers will thus be misled into purchasing.
With today's rapid development of commerce, relative protection no longer adequately protects the interests of the public or of the owners of well-known trademarks. In view of this, to prevent the reputation, distinctiveness and advertising value of well-known trademarks from being damaged by improper use, many countries have adopted absolute protection, prohibiting the use of identical or similar trademarks on any goods whatsoever. Article 16(3) of the TRIPS Agreement provides that the Paris Convention (1967) applies, in principle, to goods or services that are not similar to those for which a well-known trademark is registered, provided that use of the trademark on those goods or services would suggest a connection with the well-known trademark and the interests of its owner are likely to be damaged.

III. Dilution of well-known trademarks
There are two main theories of trademark protection: confusion theory and dilution theory. Traditional trademark protection centres on the distinguishing function of a mark, and its theoretical basis is confusion theory. In short, the law ensures that a trademark can identify, confirm and distinguish different goods or services, avoiding confusion, deception and mistake; it grants the first user or registrant an exclusive right and prohibits any unauthorized use of identical or similar marks likely to confuse consumers. Clearly, under the traditional concept, stopping the "likelihood of confusion" is the core of trademark protection.

With socio-economic development and the continuous growth of commercialization, the enormous commercial value embodied in well-known trademarks has attracted increasing attention. Compared with ordinary marks, a well-known trademark carries significance beyond trademark rights in general: it further symbolizes product quality and credit, and contains a more valuable business asset, goodwill. The owner of a well-known trademark uses the mark's excellent reputation to lead purchasing power, rather than merely using the mark to distinguish different products and producers. When a mark goes beyond the function of avoiding confusion, the confusion factor obviously cannot cover everything, and other factors become as important or more important. Thus, on the basis of confusion theory, dilution theory developed.

Trademark dilution is one theory of trademark infringement. Under the U.S. Federal Trademark Dilution Act, "dilution" means the lessening of the capacity of a famous mark to identify and distinguish its goods or services, regardless of the presence or absence of competition between the owner of the famous mark and other parties, or of the likelihood of confusion, mistake or deception. In China, some scholars hold that dilution refers to conduct that gradually weakens the ability of consumers or the public to associate the trademark with a specific commercial source.
The main idea of dilution theory is that, because many market operators wish to exploit the well-known trademarks of others, special protection should prevent others from using the unique identification of a well-known mark. In 1927, Frank Schechter put forward the first theory of trademark dilution in the Harvard Law Review. He argued that a trademark owner should be able to prohibit the use of the mark not only on competing goods but also on non-competing goods. He pointed out that the real role of a trademark is not to distinguish the operators of goods, but to mark the difference between goods to the satisfaction of consumers and so promote continued purchases. In terms of this basic function, using the mark on non-competing goods weakens and dilutes its capacity to distinguish. The more distinctive or unique a mark is, the deeper the impression it leaves on the public, and the more the use of it by others on non-competing goods or services should be restricted.

Later, Thomas E. Smith, chairman of the intellectual property section of the American Bar Association, further elaborated and developed the theory. He said: "If the courts allow or tolerate 'Rolls-Royce' restaurants, 'Rolls-Royce' cafeterias, 'Rolls-Royce' trousers and 'Rolls-Royce' candy, then within ten years the owner of the 'Rolls-Royce' trademark will no longer have a world-famous trademark." According to Schechter's theory, dilution occurs because a non-owner exploits the good image of a well-known trademark in the public mind by using it on non-competing goods, thereby gradually weakening or reducing the value of the well-known trademark, that is, the reputation it carries. The more distinctive or unique the mark, the deeper the impression in the public mind, and the more protection is needed to prevent the link between the well-known mark and its specific goods from being weakened or destroyed.

In practice, trademark dilution takes a wide range of forms, for example:
1. Using another's well-known trademark as a trademark, though not on the same or similar goods or services; for example, using the household-appliance trademark "Siemens" as the trademark of furniture of one's own production.
2. Using another's well-known trademark as part of one's enterprise name, such as using the "Haier" trademark as the name of a restaurant.
3. Registering another's well-known trademark as a domain name, for example registering the watch trademark "OMEGA" as one's own domain name.
4. Using another's well-known trademark as the decoration of goods.
5. Using another's well-known trademark as the generic name of goods or services; for example, interpreting "Kodak" as "film, a photographic material used with a camera", or saying "film, also known as Kodak, ...". Such interpretation also dilutes the mark. If the owner of "Kodak" ignored this, after a period of time people would take Kodak to mean film and film to mean Kodak. In this way "Kodak" would become the generic name of film-related goods, and its distinctiveness and identifiability as a trademark would vanish.
Jeep, Aspirin and Freon, well known to the public and registered abroad, degenerated into generic names of similar goods because of improper use, management and protection, and thereby lost their identifying function as trademarks.

Before the Federal Trademark Dilution Act came into force, the U.S. Court of Appeals for the Second Circuit, in cases from 1994 to 1996, identified the following acts as trademark dilution: (1) blurring, i.e. the unauthorized use of a mark by others on dissimilar goods, reducing or weakening the mark's capacity to sell goods and its value; (2) tarnishment, i.e. describing a mark's goods in a way that involves inferior quality or negative, demeaning associations, likely to have a negative effect on another's mark; and (3) disparagement, i.e. improperly altering a mark or describing it in a derogatory way.

Most Chinese scholars hold that dilution of well-known trademarks takes two main forms: blurring and tarnishment. Blurring means using, without authorization, a mark identical or similar to a well-known trademark on different categories of goods, thereby weakening the link between the mark and its specific goods. Tarnishment means using, without authorization, an identical or similar mark on different categories of goods in a way that belittles the good reputation of the well-known trademark. Some scholars hold that dilution refers to three kinds of damage to a well-known trademark: first, demonizing the mark in some way; second, dimming (blurring) the mark in some way; third, indirectly distorting the mark, in the eyes of consumers, into a generic name of the goods.

In general, the forms of dilution can be summarized as follows:

1. Blurring (weakening)
Blurring is the typical form of dilution, also called dimming: others use a mark of some renown on dissimilar goods or services, weakening the link between the mark and the goods or services it originally identified, weakening its distinctiveness and identifiability, and so damaging the goodwill the mark carries. The damage blurring does to a mark's distinctiveness and identifiability is serious: it can dilute the mark's capacity to identify and its distinctiveness, or even make them disappear completely, dealing a devastating blow to the reputation the mark carries.

First, blurring weakens and lowers the mark's identifiability. Whenever anyone, without authorization, uses a mark of some renown on dissimilar goods or services, its identifiability is reduced. When consumers see the mark, what first comes to mind may no longer be the original goods or services alone, or even the original goods or services at all, but the diluter's goods or services. There is no doubt that this is a heavy blow to the mark's identifiability.

Second, blurring weakens and lowers the mark's distinctiveness. Distinctiveness is what markedly distinguishes a trademark from other commercial signs. A well-known trademark should itself be highly distinctive, quickly separating itself from other signs.
However, when a diluter uses an identical or similar mark on different goods or services, the marked difference between the trademark and other commercial signs is greatly reduced, to the detriment of its distinctiveness. Of course, whether blurring harms the mark's distinctiveness or its identifiability, it ultimately affects the goodwill the mark carries: because a trademark is the carrier of goodwill, any major damage to the mark finally shows up as damage to the goodwill it bears.

2. Tarnishment
Tarnishment means using a mark of some renown on one's own goods or services in a way that belittles or defaces the good reputation of the trademark. Tarnishing another's trademark is a distorting, damaging use: it not only reduces the value of the mark but defaces that value. Since tarnishment damages the reputation of a mark, including it among acts of dilution is the generally accepted view. Moreover, within the field of trademark dilution, tarnishment is even more dangerous than blurring, and its consequences are more serious.

3. Degradation
Degradation occurs when, through improper use, a trademark evolves into the generic name of the goods and loses its identifying function. It is the most serious kind of dilution. Once degradation happens, the mark completely loses its identifiability and no longer has any distinguishing function, serving only as the common name of the goods.

IV. Anti-dilution protection
Based on this understanding of dilution, and with the weakening of well-known trademarks becoming serious, countries have gradually legislated anti-dilution protection for well-known trademarks. The specific models are these:

1. Enacting a special anti-dilution statute for well-known trademarks
The United States is the typical representative of this model. In 1995, to prevent dilution from lowering "the unique image of the trademark that alone represents it in the public eye" and to protect "the advertising value of the trademark", the U.S. Congress passed the nationally uniform Federal Trademark Dilution Act, providing unified and effective federal anti-dilution protection for all well-known trademarks.

Anti-dilution adds a new basis for trademark litigation, different from the traditional basis of infringement. The criterion for trademark infringement is the likelihood of confusion, deception or misleading, while the criterion for dilution is the unauthorized use of another's well-known trademark in a way that reduces the mark's capacity to indicate its goods and services to the public uniquely and particularly. Clearly, the basis of the U.S. anti-dilution law is the possibility of damage to business reputation and of weakening the mark's distinctiveness, for which it provides relief. Moreover, the anti-dilution statute does not require a competitive relationship or the existence of likely confusion, which makes it easier for trademark owners to bring suit.

2. Protection through anti-unfair-competition law
Some countries apply anti-unfair-competition law to protect famous trademarks from being diluted.
Greece's Anti-Unfair Competition Law, for instance, provides in Article 1 that the use of well-known trademarks on different goods in order to take advantage of their reputation, diluting their distinctiveness, is prohibited. Some countries' anti-unfair-competition laws do not explicitly prohibit dilution, but dilution claims are brought as unfair-competition actions.

3. Protection within trademark law, under or alongside well-known-mark protection
Most civil-law countries take this approach. Article L.713-5 in Book VII of the 1991 French Intellectual Property Code provides that the use of a well-known mark on goods or services that are not similar, where it causes loss to the trademark owner or constitutes improper use of the mark, gives rise to civil liability. Article 14 of Germany's 1995 Law on the Protection of Trademarks and Other Signs likewise provides that, without the consent of the proprietor, third parties shall be prohibited in the course of trade from using any sign identical or similar to a protected trademark, even on goods or services not similar to those for which the mark is protected.

4. Anti-dilution protection applied through judicial precedent
Some countries have no explicit anti-dilution legislation, but in judicial practice they generally apply the civil-law rules on compensation for infringement to protect the interests of well-known trademark owners, extending anti-dilution protection through precedent.

Chinese law on the protection of well-known trademarks does not use the word "dilution", but in substance the relevant provisions give anti-dilution protection. The 2001 amendment of the Trademark Law increased the protection of well-known trademarks; most importantly, it introduced cross-category protection of registered well-known trademarks. Article 13 provides: "Where a trademark for which registration is applied on goods that are not the same or similar is a copy, imitation or translation of another's well-known trademark already registered in China, misleads the public, and may damage the interests of the registrant of the well-known mark, it shall not be registered and shall be prohibited from use."

It needs to be pointed out, however, that this provision does not mean that Chinese law already provides effective anti-dilution protection for well-known trademarks. The Trademark Law prohibits only the use, on dissimilar goods, of trademarks identical or similar to a well-known mark as trademarks; but dilution takes many forms, such as using a well-known mark as an enterprise name or a domain name. Such acts equally detract from the identifying capacity of the well-known mark and damage the interests of its registrant, and they are not yet regulated by the law.

It must also be pointed out that attention should be paid to the following points concerning dilution:

1. Dilution is directed specifically at registered well-known trademarks. One main purpose of a diluter is free-riding, using the reputation of a well-known trademark to sell his own products; ordinary trademarks do not carry this value.
Limiting dilution to well-known trademarks effectively protects the rights of their owners without unduly restricting others' freedom to choose signs; it is the right point of balance in resolving conflicting rights. The Trademark Law divides well-known trademarks into registered and unregistered ones and gives them different protection. Anti-dilution protection applies only to registered well-known trademarks; for well-known marks not registered in China, registration and use are prohibited only on the same or similar goods. This reflects the Trademark Law's principle of protecting registered trademarks.

2. Dilution is the use, on different categories of goods, of a sign identical or similar to a well-known trademark. The use of an identical or similar sign on the same or similar goods is handled as ordinary trademark infringement. Judging dilution likewise requires judging whether the sign used is identical or similar to the well-known trademark.

3. Not every use, on dissimilar goods, of a sign identical or similar to a well-known trademark is dilution. Before a trademark became well known, identical or similar marks may already have been in use on other categories of goods; such pre-existing marks do not constitute dilution of the later well-known trademark.

4. Establishing dilution does not require considering the actor's subjective state of mind. Whether the actor acted in good or bad faith, intentionally or negligently, dilution is established regardless; but the subjective state affects the manner and scope of liability. Generally speaking, an intentional diluter bears heavier liability, in particular heavier damages; a negligent one bears less; one without fault merely bears the liability to stop the infringement.

5. Because anti-dilution protects the link between a well-known trademark and specific goods or services, long and wide use of the mark on a variety of goods inevitably dilutes the link between the mark and a particular producer, and the mark's unique attraction to consumers is greatly reduced. Dilution should therefore not be conditioned on confusion as to the source of goods; after all, not every dilution confuses consumers. For example, if a street shop calls itself "Rolls-Royce Fruit Shop", no one will think the shop is connected with the famous Rolls-Royce trademark or its producer. Yet such acts cannot be allowed: a large number of similar acts would dilute the link between the Rolls-Royce mark and its products and undermine the mark's uniqueness; if things continued that way, on hearing "Rolls-Royce" people might think not only of automobiles but also of food, clothing, appliances and so on. Making confusion a condition of dilution would leave some diluting acts unregulated and unsuppressed, and well-known trademarks poorly protected. Therefore, any act that detracts from the identifying capacity and uniqueness of a well-known trademark should be identified as dilution.
Common Grinding Machines: Foreign Literature Translation (Chinese-English)
Grinding is a crucial precision machining method: it offers high machining accuracy and can process a wide range of materials. It is suitable for almost all kinds of material processing and can achieve very high dimensional and shape accuracy, even reaching the achievable limit. The grinding device itself is simple and does not require complex equipment.

2. Types of Grinding Machines
Grinding machines are mainly used for precision grinding of workpiece planes, inner and outer cylindrical surfaces, inner tapered surfaces, spheres, thread surfaces and other profiles. There are several types of grinding machines, including disc-type grinding machines, shaft-type grinding machines, automatic grinding machines and special grinding machines.

3. Disc-type Grinding Machine
The disc-type grinding machine is a type of grinding machine that uses a grinding disc to grind the workpiece.
Scientific Literature: Chinese-English Parallel Translation
Sensing Human Activity: GPS Tracking
Stefan van der Spek 1,*, Jeroen van Schaick 1, Peter de Bois 1,2 and Remco de Haan 1

Abstract: The enhancement of GPS technology enables the use of GPS devices not only as navigation and orientation tools, but also as instruments used to capture travelled routes: as sensors that measure activity on a city scale or the regional scale. TU Delft developed a process and database architecture for collecting data on pedestrian movement in three European city centres, Norwich, Rouen and Koblenz, and in another experiment for collecting activity data of 13 families in Almere (The Netherlands) for one week. The question posed in this paper is: what is the value of GPS as 'sensor technology' measuring activities of people? The conclusion is that GPS offers a widely usable instrument to collect invaluable spatial-temporal data on different scales and in different settings, adding new layers of knowledge to urban studies; but the use of GPS technology and the deployment of GPS devices still offer significant challenges for future research.
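As a minimal illustration of using GPS fixes as 'sensor' data (a sketch under assumed inputs, not code from the paper), the route length of a recorded track can be computed from consecutive latitude/longitude fixes with the haversine formula:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (degrees, WGS84)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def route_length_m(track):
    """Total length of a track given as a list of (lat, lon) fixes."""
    return sum(haversine_m(*a, *b) for a, b in zip(track, track[1:]))

# Hypothetical pedestrian fixes (roughly near Delft, illustration only):
track = [(52.011, 4.358), (52.012, 4.360), (52.014, 4.361)]
print(round(route_length_m(track)), "m")
```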
Intelligent Transportation Systems: Chinese-English Parallel Foreign Translation Literature
(The document contains the English original and the Chinese translation.)

Original: Traffic Assignment Forecast Model Research in ITS

Introduction
The intelligent transportation system (ITS) develops rapidly along with sustainable urban development, digital city construction and the growth of transportation. One of the main functions of an ITS is to improve the transportation environment and alleviate traffic jams. The most effective way to achieve this aim is to forecast the traffic volume of the local network and its important nodes exactly, using GIS path-analysis functions and related mathematical methods; this leads to better planning of the traffic network. Traffic assignment forecasting is an important phase of traffic volume forecasting: it assigns the forecasted traffic to every route in the traffic sector. If the traffic volume of a certain road is too big, which would bring on a traffic jam, planners must consider building new roads or improving existing roads to relieve the congestion. This study presents an improved traffic assignment forecast model, MPCC, based on an analysis of the advantages and disadvantages of the classic traffic assignment forecast models, and tests the validity of the improved model in practice.

1 Analysis of classic models
1.1 Shortcut traffic assignment
Shortcut traffic assignment is a static traffic assignment method. The impact of traffic load on travel is not considered, and the traffic impedance (travel time) is a constant. The traffic volume of every origin-destination pair is assigned to the shortcut between the origin and the destination, while the traffic volume of the other roads in the sector is null. This method has the advantage of simple calculation; however, the uneven distribution of traffic volume is its obvious shortcoming. Using this method, the assigned traffic volume is concentrated on the shortcut, which is obviously not realistic. Nevertheless, shortcut traffic assignment is the basis of all the other traffic assignment methods.

1.2 Multi-ways probability assignment
In reality, travelers always want to choose the shortcut to the destination, which is called the shortcut factor; however, given the complexity of the traffic network, the path chosen may not necessarily be the shortcut, which is called the random factor. Although every traveler hopes to follow the shortcut, some in fact do not choose it. The shorter a path is, the greater its probability of being chosen; the longer it is, the smaller that probability. Therefore, multi-ways probability assignment is guided by the LOGIT model:

p_i = \frac{\exp(-\theta F_i)}{\sum_{j=1}^{n} \exp(-\theta F_j)}   (1)

where p_i is the chosen probability of path section i; F_i is the travel time of path section i; and \theta is the transport decision parameter, calculated by the following principle: first calculate p_i with different \theta (from 0 to 1), then find the \theta that makes p_i closest to the actual p_i.

Both the shortcut factor and the random factor are considered in multi-ways probability assignment, so the assignment result is more reasonable; but the relationship between traffic impedance on the one hand and traffic load and road capacity on the other is not considered, which makes the assignment result imprecise in a more crowded traffic network.
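To make the LOGIT split of Eq. (1) concrete, here is a minimal sketch (hypothetical travel times; not code from the paper):

```python
import math

def logit_probabilities(travel_times, theta):
    """Eq. (1): p_i = exp(-theta * F_i) / sum_j exp(-theta * F_j)."""
    weights = [math.exp(-theta * f) for f in travel_times]
    total = sum(weights)
    return [w / total for w in weights]

# Three candidate paths with travel times in minutes (hypothetical):
print(logit_probabilities([10.0, 12.0, 15.0], theta=0.5))
# The shortest path receives the largest share, but the longer paths
# keep nonzero shares, reflecting the random factor.
```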
We attempt to improve the accuracy by integrating the several elements above in one model, MPCC.

2 Multi-ways probability and capacity constraint model
2.1 Rational path aggregate
To make the improved model more reasonable in application, the concept of the rational path aggregate has been proposed. The rational path aggregate, which is the foundation of the MPCC model, constrains the calculation scope. It refers to the aggregate of paths between the start and end of the traffic sector, defined by inner nodes ascertained by the following rules: the distance between the next inner node and the start cannot be shorter than the distance between the current node and the start; at the same time, the distance between the next inner node and the end cannot be longer than the distance between the current node and the end. The multi-ways probability assignment model is used only within the rational path aggregate to assign the forecast traffic volume, and this greatly enhances the applicability of the model.

2.2 Model assumptions
1) Traffic impedance is not a constant; it is decided by the vehicle characteristics and the current traffic situation.
2) The traffic impedance that travelers estimate is random and imprecise.
3) Every traveler chooses a path from his respective rational path aggregate.
Based on the assumptions above, the MPCC model can assign the traffic volume in the sector of each origin-destination pair.

2.3 Calculation of path traffic impedance
Travelers understand path traffic impedance differently, but generally the travel cost, which is mainly made up of forecast travel time, travel length and forecast travel outlay, is considered the traffic impedance. Eq. (2) displays this relationship:

C_a = \alpha T_a + \beta L_a + \gamma F_a   (2)

where C_a is the traffic impedance of path section a; T_a is its forecast travel time; L_a is its travel length; F_a is its forecast travel outlay; and \alpha, \beta, \gamma are the weights of the three elements that determine the traffic impedance. For a certain path section, \alpha, \beta and \gamma differ for different vehicles; the weighted averages of \alpha, \beta and \gamma for each path section can be obtained from the statistical percentage of each type of vehicle on that section.

2.4 Chosen probability in MPCC
Travelers always want to follow the best path (the shortcut in a broad sense), but because of the random factor they can only choose the path with the smallest traffic impedance as they themselves estimate it. This is the key point of MPCC. According to the random utility theory of economics, if traffic impedance is considered negative utility, the chosen probability p_{rs} of the origin-destination pair (r, s) should follow the LOGIT model:

p_{rs} = \frac{\exp(-b C_{rs})}{\sum_{j=1}^{n} \exp(-b C_j)}   (3)

where p_{rs} is the chosen probability of path section (r, s); C_{rs} is the traffic impedance of path section (r, s); C_j is the traffic impedance of each path section in the forecast traffic sector; and b reflects the travelers' cognition of the traffic impedance of paths in the sector, inversely proportional to its deviation. If b \to \infty, the deviation of the understanding of traffic impedance approaches 0; in this case, all travelers follow the path with the smallest traffic impedance, which equals the result of shortcut traffic assignment. Conversely, if b \to 0, the travelers' understanding error approaches infinity; in this case, the paths travelers choose are scattered. An objection is that b has dimension in Eq. (3); because the deviation of b would have to be known beforehand, its value is difficult to determine. Therefore, Eq. (3) is improved as follows:

p_{rs} = \frac{\exp(-b C_{rs}/\bar{C}_{OD})}{\sum_{j=1}^{n} \exp(-b C_j/\bar{C}_{OD})}, \qquad \bar{C}_{OD} = \frac{1}{n}\sum_{j=1}^{n} C_j   (4)

where \bar{C}_{OD} is the average traffic impedance of all the assigned paths, and b, now dimensionless, relates only to the rational path aggregate rather than to the traffic impedance. According to actual observation, b is an experience value generally between 3.00 and 4.00; for the more crowded city internal roads, b is normally between 3.00 and 3.50.
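A minimal sketch of Eqs. (2) and (4) together (the weights and impedances are hypothetical; b = 3.2 is taken from the stated 3.00-4.00 experience range):

```python
import math

def impedance(t, l, f, alpha, beta, gamma):
    """Eq. (2): C_a = alpha*T_a + beta*L_a + gamma*F_a."""
    return alpha * t + beta * l + gamma * f

def mpcc_probabilities(costs, b=3.2):
    """Eq. (4): LOGIT with impedances normalised by their mean, so b is dimensionless."""
    c_mean = sum(costs) / len(costs)
    weights = [math.exp(-b * c / c_mean) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# A rational path aggregate of three paths (all numbers hypothetical):
costs = [impedance(10.0, 5.0, 2.0, 0.6, 0.3, 0.1),
         impedance(12.0, 5.5, 1.8, 0.6, 0.3, 0.1),
         impedance(15.0, 6.0, 1.5, 0.6, 0.3, 0.1)]
print(mpcc_probabilities(costs))  # choice shares within the aggregate
```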
2.5 Flow of MPCC
The MPCC model combines the idea of multi-ways probability assignment with iterative capacity-constrained traffic assignment.
First, the geometric information of the road network and the OD traffic volumes are obtained from the related data, and the rational path aggregate is determined with the method explained in Sect. 2.1.
Second, the traffic impedance of each path section is calculated with Eq. (2), as expatiated in Sect. 2.3.
Third, on the basis of the traffic impedance of each path section, the forecast traffic volume of every path section is calculated with the improved LOGIT model (Eq. (4)) of Sect. 2.4, which is the key point of MPCC.
Fourth, the process above yields the chosen probability and forecast traffic volume of each path section, but it is not the end: the traffic impedance must be recalculated for the new traffic volume situation. As shown in the flowchart of MPCC (Fig. 1), because the relationship between traffic impedance and traffic load is considered, the traffic impedance and the forecast assigned traffic volume of every path are continually amended. Using the relationship model between average speed and traffic volume, the travel time and thus the traffic impedance of a certain path section can be calculated under different traffic volume situations. For roads of different technical levels, the relationships between average speed and traffic volume are as follows:
1) Highway: V = 179.49 N_A^{-0.1082}   (5)
2) Level 1 roads: V = 155.84 N_A^{-0.11433}   (6)
3) Level 2 roads: V = 112.57 - 0.91 N_A^{0.66}   (7)
4) Level 3 roads: V = 99.1 - 0.23 N_A^{1.3}   (8)
5) Level 4 roads: V = 70.5 N_A^{-0.0988}   (9)
where V is the average speed of the path section and N_A is the traffic volume of the path section.
Finally, the traffic volumes of the path sections are repeatedly reassigned with the method of the previous step, which is the idea of iterative capacity-constrained assignment, until the traffic volume of every path section is stable.
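A minimal sketch of the iterative flow of Sect. 2.5 for a single OD pair, using the highway relation in the form of Eq. (5) as reconstructed above and a damped update so the volumes settle (all inputs hypothetical):

```python
import math

def speed(volume):
    """Speed-volume relation in the form of Eq. (5); illustrative only."""
    return 179.49 * max(volume, 1.0) ** -0.1082

def assign(od_volume, lengths_km, b=3.2, iterations=20):
    """Iteratively reassign the OD volume until the path volumes stabilise."""
    volumes = [od_volume / len(lengths_km)] * len(lengths_km)  # even start
    for _ in range(iterations):
        # travel time in minutes as a simple stand-in for impedance
        times = [60.0 * l / speed(v) for l, v in zip(lengths_km, volumes)]
        c_mean = sum(times) / len(times)
        weights = [math.exp(-b * t / c_mean) for t in times]
        shares = [w / sum(weights) for w in weights]
        # damped update: mix the old volumes with the new assignment
        volumes = [0.5 * v + 0.5 * od_volume * s for v, s in zip(volumes, shares)]
    return volumes

print(assign(3000.0, [10.0, 12.0, 14.0]))  # stabilised path volumes
```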
建筑三维模型分析中英文资料对照外文翻译文献
This document compares Chinese and English materials on 3D building model analysis and provides the corresponding translated foreign literature.
The comparison is as follows:
1. Chinese material: 3D building model analysis is based on 3D modeling technology; by analyzing and evaluating the building model, it helps designers assess and improve the feasibility and performance of a design scheme. Such models can be used to predict building performance in areas such as energy efficiency, structural strength and lighting.
2. English material:
- Reference 1: Title: "A Review of Three-Dimensional Model Analysis in Architecture". Author: John Smith. Source: International Journal of Architectural Analysis. Abstract: This paper reviews the research progress of 3D model analysis in the architectural field. By analyzing the existing literature, it summarizes the applications, methods and techniques of 3D model analysis in architectural design, and also discusses current challenges and future research directions.
- Reference 2: Title: "Performance Analysis of Building Models Using Three-Dimensional Simulation". Author: Jane Doe. Source: Journal of Building Performance. Abstract: This paper introduces methods for analyzing the performance of building models using 3D simulation technology. By simulating the behavior of buildings under different environmental conditions, it provides assessments of performance such as energy efficiency, lighting and airflow, and also discusses how to use these analysis results to optimize building design.
3. Translated foreign literature:
- Reference 1, "A Review of Three-Dimensional Model Analysis in Architecture", translated abstract: This paper reviews the research progress of 3D model analysis in the architectural field. By analyzing the existing literature, it summarizes the applications, methods and techniques of 3D model analysis in architectural design, and also discusses current challenges and future research directions.
Engineering Management: Foreign Literature Translation (Chinese-English)
xxxxxx University undergraduate graduation design foreign literature translation

Project Cost Control: the Way it Works
School (Department): xxxxxxxxxxxx  Major: xxxxxxxx  Student name: xxxxx  Student number: xxxxxxxxxx  Supervisor: xxxxxx  Reviewing teacher:  Completion date:

Project Cost Control: the Way it Works
On a recent consulting assignment, we realized that there is still some lack of understanding of how a complete project cost control system is set up and applied.
So we decided to describe how it works. In theory, project cost control is not very hard to follow. First, a set of reference baselines is established. Then, as the work proceeds, you monitor the work, analyze the findings, forecast the end results and compare them against the reference baselines. If the forecast end results are not satisfactory, you make the necessary adjustments to the work in progress and repeat at suitable intervals. If the end results really do not conform to the baseline plan, you may have to change the plan; more likely, there will be (or will have been) scope changes that alter the reference baselines, which means the baseline plan must be changed every time that happens.
In practice, however, project cost control is much more difficult, as evidenced by the number of projects that fail to control cost. As we shall see, it also takes a lot of work, so we might as well enable it from the very beginning. So, let us follow project cost control through the whole project life cycle. At the same time, we will take this opportunity to point out the proper place of several important documents. These include the business case, the request for (capital) appropriation (for execution), work packages and the work breakdown structure, the project charter (or brief), the project budget or cost plan, earned value and the cost baseline. All of these help to improve an organization's ability to control project costs effectively.
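Since the article leans on earned value and the cost baseline, the standard earned-value indicators (textbook formulas, not taken from this article) show how monitoring, forecasting and comparing against the baseline fit together:

```python
def earned_value_metrics(bac, ev, ac, pv):
    """Standard earned-value indicators for comparing progress against the baselines.
    bac: budget at completion, ev: earned value, ac: actual cost, pv: planned value."""
    cpi = ev / ac   # cost performance index
    spi = ev / pv   # schedule performance index
    return {
        "CV": ev - ac,     # cost variance
        "SV": ev - pv,     # schedule variance
        "CPI": cpi,
        "SPI": spi,
        "EAC": bac / cpi,  # forecast of the end result, assuming current efficiency
    }

# Hypothetical status: budget 1000, earned 400, actual cost 500, planned 450.
print(earned_value_metrics(bac=1000.0, ev=400.0, ac=500.0, pv=450.0))
# CPI < 1 signals a cost overrun trend; EAC is the forecast final result
# to compare against the baseline plan, as described above.
```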
The business case and the application for (execution) funding
It is important to note that project cost control is most effective when the responsible managers have a good understanding of how the project should unfold through the project life cycle. This means that they exercise their responsibilities between the key decision points of the major phases. They must also recognize the importance of project risk management, at least identifying and planning to forestall the most obvious potential risk events.

In the project's concept phase:
• Every project starts with someone identifying an opportunity or need. That person is usually someone of significance and influence and, if the project proceeds, often becomes the project's sponsor.
Chinese-English Literature Translation: PID Controller (University Graduation Design Thesis)
Attachment 1: Original text

PID controller
Zuo Xin and Sun Jinming
(Research Institute of Automation, University of Petroleum, Beijing 102249, China)
Received April 2, 2005

Abstract: Performance assessment of a proportional-integral-derivative (PID) controller is conducted using the PID-achievable minimum variance as a benchmark. When the process model is unknown, we can estimate the PID-achievable minimum variance and the corresponding parameters from routine closed-loop operation data. Simulation results show that the process output variance is reduced by retuning the controller parameters.
Key words: performance assessment, PID control, minimum variance

A proportional-integral-derivative controller (PID controller) is a generic control-loop feedback mechanism widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly.

The PID controller calculation (algorithm) involves three separate parameters: the proportional, the integral and the derivative values. The proportional value determines the reaction to the current error, the integral determines the reaction based on the sum of recent errors, and the derivative determines the reaction to the rate at which the error has been changing. The weighted sum of these three actions is used to adjust the process via a control element such as the position of a control valve or the power supply of a heating element.

By "tuning" the three constants in the PID controller algorithm, the PID can provide control action designed for specific process requirements. The response of the controller can be described in terms of the responsiveness of the controller to an error, the degree to which the controller overshoots the setpoint and the degree of system oscillation. Note that the use of the PID algorithm for control does not guarantee optimal control of the system or system stability.

Some applications may require using only one or two modes to provide the appropriate system control. This is achieved by setting the gain of the undesired control outputs to zero. A PID controller will be called a PI, PD, P or I controller in the absence of the respective control actions. PI controllers are particularly common, since derivative action is very sensitive to measurement noise, and the absence of an integral value may prevent the system from reaching its target value.

Note: Due to the diversity of the field of control theory and application, many naming conventions for the relevant variables are in common use.

1. Control loop basics
A familiar example of a control loop is the action taken to keep one's shower water at the ideal temperature, which typically involves the mixing of two process streams, cold and hot water. The person feels the water to estimate its temperature. Based on this measurement, they perform a control action: use the cold water tap to adjust the process. The person would repeat this input-output control loop, adjusting the hot water flow until the process temperature stabilized at the desired value.

Feeling the water temperature is taking a measurement of the process value or process variable (PV). The desired temperature is called the setpoint (SP). The output from the controller and input to the process (the tap position) is called the manipulated variable (MV).
The difference between the measurement and the setpoint is the error (e): too hot or too cold, and by how much.

As a controller, one decides roughly how much to change the tap position (MV) after one determines the temperature (PV), and therefore the error. This first estimate is the equivalent of the proportional action of a PID controller. The integral action of a PID controller can be thought of as gradually adjusting the temperature when it is almost right. Derivative action can be thought of as noticing the water temperature is getting hotter or colder, and how fast, and taking that into account when deciding how to adjust the tap.

Making a change that is too large when the error is small is equivalent to a high-gain controller and will lead to overshoot. If the controller were to repeatedly make changes that were too large and repeatedly overshoot the target, this control loop would be termed unstable, and the output would oscillate around the setpoint in either a constant, growing or decaying sinusoid. A human would not do this, because we are adaptive controllers, learning from the process history; but PID controllers do not have the ability to learn and must be set up correctly. Selecting the correct gains for effective control is known as tuning the controller.

If a controller starts from a stable state at zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that impact on the process, and hence on the PV. Variables that impact on the process other than the MV are known as disturbances, and generally controllers are used to reject disturbances and/or implement setpoint changes. Changes in feed water temperature constitute a disturbance to the shower process.

In theory, a controller can be used to control any process that has a measurable output (PV), a known ideal value for that output (SP) and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, flow rate, chemical composition, speed and practically every other variable for which a measurement exists. Automobile cruise control is an example of a process that utilizes automated control. Due to their long history, simplicity, well-grounded theory and simple setup and maintenance requirements, PID controllers are the controllers of choice for many of these applications.

2. PID controller theory
Note: This section describes the ideal parallel or non-interacting form of the PID controller. For other forms, please see the section "Alternative notation and PID forms".

The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). Hence:

MV(t) = P_{out} + I_{out} + D_{out}

where P_{out}, I_{out} and D_{out} are the contributions to the output from the PID controller from each of the three terms, as defined below.

2.1 Proportional term
The proportional term makes a change to the output that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant K_p, called the proportional gain. The proportional term is given by:

P_{out} = K_p \, e(t)

where P_{out} is the proportional output; K_p is the proportional gain, a tuning parameter; e is the error, e = SP - PV; and t is the time or instantaneous time (the present).

(Figure: change of response for varying K_p.)

A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning).
In contrast, a small gain results in a small output response to a large input error, and a less responsive (or sensitive) controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. In the absence of disturbances, pure proportional control will not settle at its target value, but will retain a steady-state error that is a function of the proportional gain and the process gain. Despite the steady-state offset, both tuning theory and industrial practice indicate that it is the proportional term that should contribute the bulk of the output change.

2.2 Integral term
The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. Summing the instantaneous error over time (integrating the error) gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain and added to the controller output. The magnitude of the contribution of the integral term to the overall control action is determined by the integral gain, K_i. The integral term is given by:

I_{out} = K_i \int_0^t e(\tau) \, d\tau

where I_{out} is the integral output; K_i is the integral gain, a tuning parameter; e is the error, e = SP - PV; and \tau is the time in the past contributing to the integral response.

The integral term (when added to the proportional term) accelerates the movement of the process towards the setpoint and eliminates the residual steady-state error that occurs with a proportional-only controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value (cross over the setpoint and then create a deviation in the other direction). For further notes regarding integral gain tuning and controller stability, see the section on loop tuning.

2.3 Derivative term
The rate of change of the process error is calculated by determining the slope of the error over time (i.e. its first derivative with respect to time) and multiplying this rate of change by the derivative gain K_d. The derivative term is given by:

D_{out} = K_d \, \frac{de(t)}{dt}

where D_{out} is the derivative output; K_d is the derivative gain, a tuning parameter; e is the error, e = SP - PV; and t is the time. The derivative term slows the rate of change of the controller output, and this effect is most noticeable close to the controller setpoint. Hence, derivative control is used to reduce the magnitude of the overshoot produced by the integral component and to improve the combined controller-process stability. However, differentiation of a signal amplifies noise; thus this term in the controller is highly sensitive to noise in the error term, and can cause a process to become unstable if the noise and the derivative gain are sufficiently large.

2.4 Summary
The outputs of the three terms, the proportional, the integral and the derivative, are summed to calculate the output of the PID controller. Defining u(t) as the controller output, the final form of the PID algorithm is:

u(t) = K_p \, e(t) + K_i \int_0^t e(\tau) \, d\tau + K_d \, \frac{de(t)}{dt}

and the tuning parameters are:
K_p: proportional gain. A larger K_p typically means faster response, since the larger the error, the larger the proportional-term compensation. An excessively large proportional gain will lead to process instability and oscillation.
K_i: integral gain. A larger K_i implies that steady-state errors are eliminated more quickly. The trade-off is larger overshoot: any negative error integrated during the transient response must be integrated away by positive error before we reach steady state.
K_d: derivative gain. A larger K_d decreases overshoot, but slows down the transient response and may lead to instability due to signal-noise amplification in the differentiation of the error.
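The textbook form above translates directly into a discrete-time controller. A minimal sketch (not from this article), sampling with a fixed period dt and driving a toy first-order process for demonstration:

```python
class PID:
    """Minimal discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulate the I term
        derivative = (error - self.prev_error) / self.dt  # slope of the error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order process toward setpoint 1.0 (gains are illustrative):
pid, y = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01), 0.0
for _ in range(2000):
    u = pid.update(1.0, y)
    y += 0.01 * (u - y)  # toy process model, for illustration only
print(round(y, 3))  # settles near 1.0
```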
3. Loop tuning
If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable, i.e. its output diverges, with or without oscillation, limited only by saturation or mechanical breakage. Tuning a control loop is the adjustment of its control parameters (gain/proportional band, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response.

The optimum behaviour on a process change or setpoint change varies depending on the application. Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint. Generally, stability of response (the reverse of instability) is required, and the process must not oscillate for any combination of process conditions and setpoints. Some processes have a degree of non-linearity, so parameters that work well at full-load conditions do not work when the process is starting up from no load. This section describes some traditional manual methods for loop tuning.

There are several methods for tuning a PID loop. The most effective methods generally involve the development of some form of process model, then choosing P, I and D based on the dynamic model parameters. Manual tuning methods can be relatively inefficient. The choice of method will depend largely on whether or not the loop can be taken "offline" for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters.

Choosing a tuning method:
- Manual tuning. Advantages: no math required; online method. Disadvantages: requires experienced personnel.
- Ziegler-Nichols. Advantages: proven method; online method. Disadvantages: process upset, some trial and error, very aggressive tuning.
- Software tools. Advantages: consistent tuning; online or offline method; may include valve and sensor analysis; allows simulation before downloading. Disadvantages: some cost and training involved.
- Cohen-Coon. Advantages: good process models. Disadvantages: some math; offline method; only good for first-order processes.

3.1 Manual tuning
If the system must remain online, one tuning method is to first set the I and D values to zero. Increase P until the output of the loop oscillates; then set P to approximately half of that value for a "quarter amplitude decay" type response. Then increase D until any offset is corrected in sufficient time for the process. However, too much D will cause instability. Finally, increase I, if required, until the loop is acceptably quick to reach its reference after a load disturbance. However, too much I will cause excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an "over-damped" closed-loop system is required, which will require a P setting significantly less than half of the P setting that caused oscillation.

3.2 Ziegler-Nichols method
Another tuning method is formally known as the Ziegler-Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols. As in the method above, the I and D gains are first set to zero. The P gain is increased until it reaches the "critical gain" Kc, at which the output of the loop starts to oscillate. Kc and the oscillation period Pc are then used to set the gains. The classic Ziegler-Nichols rules are:
- P controller: K_p = 0.50 Kc.
- PI controller: K_p = 0.45 Kc, K_i = 1.2 K_p / Pc.
- PID controller: K_p = 0.60 Kc, K_i = 2 K_p / Pc, K_d = K_p Pc / 8.
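The classic rules above are easy to mechanize; a small sketch using the classic Ziegler-Nichols constants:

```python
def ziegler_nichols(kc, pc, kind="PID"):
    """Classic Ziegler-Nichols gains from critical gain Kc and oscillation period Pc."""
    if kind == "P":
        return {"Kp": 0.50 * kc}
    if kind == "PI":
        kp = 0.45 * kc
        return {"Kp": kp, "Ki": 1.2 * kp / pc}
    kp = 0.60 * kc  # full PID
    return {"Kp": kp, "Ki": 2.0 * kp / pc, "Kd": kp * pc / 8.0}

# Example: the loop first oscillates at Kc = 10 with period Pc = 2 s.
print(ziegler_nichols(10.0, 2.0))
```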
3.3 PID tuning software
Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages gather the data, develop process models and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes. Mathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can literally take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values. Other formulas are available to tune the loop according to different performance criteria.

4. Modifications to the PID algorithm
The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form. One common problem resulting from the ideal PID implementations is integral windup. This can be addressed by:
- initializing the controller integral to a desired value;
- disabling the integral function until the PV has entered the controllable region;
- limiting the time period over which the integral error is calculated;
- preventing the integral term from accumulating above or below pre-determined bounds.

Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost, and wear leads to control degradation in the form of either stiction or a deadband in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.

The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous "step" increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change.
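One of the windup remedies listed above, clamping the integral term to pre-determined bounds, can be sketched as follows (illustrative, not from this article):

```python
def pid_step_with_clamp(state, error, kp, ki, kd, dt, i_min=-10.0, i_max=10.0):
    """One PID step with the integral term clamped to bounds to limit windup.
    `state` is an (integral, previous_error) tuple; the bounds are illustrative."""
    integral, prev_error = state
    integral = min(max(integral + error * dt, i_min), i_max)  # anti-windup clamp
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

state = (0.0, 0.0)
u, state = pid_step_with_clamp(state, error=5.0, kp=2.0, ki=1.0, kd=0.0, dt=0.1)
print(u, state)  # the stored integral can never leave [i_min, i_max]
```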
5. Limitations of PID control
While PID controllers are applicable to many control problems, they can perform poorly in some applications. PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or "hunt" about the control setpoint value. The control system performance can be improved by combining the feedback (closed-loop) control of a PID controller with feed-forward (open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be "fed forward" and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller can then be used primarily to respond to whatever difference or "error" remains between the setpoint (SP) and the actual value of the process variable (PV). Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response and stability.

For example, in most motion control systems, in order to accelerate a mechanical load under control, more force or torque is required from the prime mover, motor or actuator. If a velocity-loop PID controller is being used to control the speed of the load and command the force or torque applied by the prime mover, then it is beneficial to take the instantaneous acceleration desired for the load, scale that value appropriately and add it to the output of the PID velocity-loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the prime mover regardless of the feedback value. The PID loop in this situation uses the feedback information to effect any increase or decrease of the combined output in order to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive, stable and reliable control system.

Another problem faced with PID controllers is that they are linear. Thus, the performance of PID controllers in non-linear systems (such as HVAC systems) is variable. Often PID controllers are enhanced through methods such as PID gain scheduling or fuzzy logic. Further practical application issues can arise from the instrumentation connected to the controller. A high enough sampling rate, measurement precision and measurement accuracy are required to achieve adequate control performance.

A problem with the derivative term is that small amounts of measurement or process noise can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise components. However, low-pass filtering and derivative control can cancel each other out, so reducing noise by instrumentation means is a much better choice. Alternatively, the differential band can be turned off in many systems with little loss of control; this is equivalent to using the PID controller as a PI controller.

6. Cascade control
One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. In cascade control, there are two PIDs arranged with one PID controlling the setpoint of another.
6. Cascade control

One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. In cascade control there are two PIDs arranged with one PID controlling the setpoint of another. A PID controller acts as the outer-loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as the inner-loop controller, which reads the output of the outer-loop controller as its setpoint, usually controlling a more rapidly changing parameter such as flow rate or acceleration. It can be shown mathematically that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers.

7. Physical implementation of PID control

In the early history of automatic process control the PID controller was implemented as a mechanical device. These mechanical controllers used a lever, spring and a mass and were often energized by compressed air. Such pneumatic controllers were once the industry standard.

Electronic analog controllers can be made from a solid-state or tube amplifier, a capacitor and a resistor. Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive, the power conditioning of a power supply, or even the movement-detection circuit of a modern seismometer. Nowadays, electronic controllers have largely been replaced by digital controllers implemented with microcontrollers or FPGAs.

Most modern PID controllers in industry are implemented in software in programmable logic controllers (PLCs) or as panel-mounted digital controllers. Software implementations have the advantages that they are relatively cheap and are flexible with respect to the implementation of the PID algorithm.

References
[1] Byung, S. K. (2000) On Performance Assessment of Feedback Control Loops. Austin: The University of Texas at Austin
[2] Desborough, L. and Harris, T. (1992) Performance Assessment Measures for Univariate Feedback Control. The Canadian Journal of Chemical Engineering, 70(12), 1186-1197
[3] Ender, D. B. (1993) Process Control Performance: Not as Good as You Think. Control Engineering, 40(10)
[4] Harris, T. (1993) Performance Assessment Measures for Univariate Feedforward/Feedback Control. The Canadian Journal of Chemical Engineering, 71(8), 1186-1197
[5] Qin, S. J. (1998) Control Performance Monitoring: A Review and Assessment. Computers & Chemical Engineering, (23), 173-186
[6] Sun, Jinming (2004) PID Performance Assessment and Parameters Tuning. Beijing: China University of Petroleum
[7] Xu, Xi; Li, Tao and Bo, Xiaochen (2000) Matlab Toolbox Application--Control Engineering. Beijing: Electronic Industry Press

附件2:外文资料翻译译文

PID控制器
左信 孙金明
(石油大学自动化研究所,北京,102249,中国)
发表于2005.4.2

摘要:以PID控制所能实现的最小方差为基准,对比例-积分-微分(PID)控制器进行了性能评估。
中英文文献翻译—离合器工作原理
附录 How Clutches Work

If you drive a manual transmission car, you may be surprised to find out that it has more than one clutch. And it turns out that folks with automatic transmission cars have clutches, too. In fact, there are clutches in many things you probably see or use every day: many cordless drills have a clutch, chain saws have a centrifugal clutch and even some yo-yos have a clutch.

(Figure: diagram of a car showing the clutch location.)

In this article, you'll learn why you need a clutch, how the clutch in your car works, and find out some interesting, and perhaps surprising, places where clutches can be found.

Clutches are useful in devices that have two rotating shafts. In these devices, one of the shafts is typically driven by a motor or pulley, and the other shaft drives another device. In a drill, for instance, one shaft is driven by a motor and the other drives a drill chuck. The clutch connects the two shafts so that they can either be locked together and spin at the same speed, or be decoupled and spin at different speeds.

In a car, you need a clutch because the engine spins all the time, but the car's wheels do not. In order for a car to stop without killing the engine, the wheels need to be disconnected from the engine somehow. The clutch allows us to smoothly engage a spinning engine to a non-spinning transmission by controlling the slippage between them.

To understand how a clutch works, it helps to know a little bit about friction, which is a measure of how hard it is to slide one object over another. Friction is caused by the peaks and valleys that are part of every surface -- even very smooth surfaces still have microscopic peaks and valleys. The larger these peaks and valleys are, the harder it is to slide the object. You can learn more about friction in How Brakes Work. A clutch works because of friction between a clutch plate and a flywheel. We'll look at how these parts work together in the next section.

Fly Wheels, Clutch Plates and Friction

In a car's clutch, a flywheel connects to the engine, and a clutch plate connects to the transmission. You can see what this looks like in the figure below. When your foot is off the pedal, the springs push the pressure plate against the clutch disc, which in turn presses against the flywheel. This locks the engine to the transmission input shaft, causing them to spin at the same speed.

The amount of force the clutch can hold depends on the friction between the clutch plate and the flywheel, and how much force the spring puts on the pressure plate. The friction force in the clutch works just like the blocks described in the friction section of How Brakes Work, except that the spring presses on the clutch plate instead of weight pressing the block into the ground.
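As a rough, hypothetical illustration of that relationship (the numbers below are invented, not from the article): the torque a dry clutch can transmit is commonly estimated as the number of friction surfaces times the friction coefficient, the clamping force, and the mean friction radius.

```python
# Hypothetical clutch torque capacity estimate: T = n * mu * F * r_mean.
mu = 0.3                        # assumed friction coefficient of the lining
clamp_force = 4000.0            # assumed spring clamping force [N]
r_outer, r_inner = 0.12, 0.08   # assumed friction face radii [m]
n_surfaces = 2                  # a single disc rubs on flywheel and pressure plate

r_mean = (r_outer + r_inner) / 2
torque = n_surfaces * mu * clamp_force * r_mean   # [N*m]
print(f"approximate torque capacity: {torque:.0f} N*m")  # ~240 N*m
```

If the engine tries to deliver more torque than this, the disc slips against the flywheel -- which is exactly the wear mechanism discussed under Common Problems below.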
When the clutch pedal is pressed, a cable or hydraulic piston pushes on the release fork, which presses the throw-out bearing against the middle of the diaphragm spring. As the middle of the diaphragm spring is pushed in, a series of pins near the outside of the spring causes the spring to pull the pressure plate away from the clutch disc (see below). This releases the clutch from the spinning engine.

Common Problems

From the 1950s to the 1970s, you could count on getting between 50,000 and 70,000 miles from your car's clutch. Clutches can now last for more than 80,000 miles if you use them gently and maintain them well. If not cared for, clutches can start to break down at 35,000 miles. Trucks that are consistently overloaded or that frequently tow heavy loads can also have problems with relatively new clutches.

(Photo courtesy Carolina Mustang: a clutch plate.)

The clutch only wears while the clutch disc and the flywheel are spinning at different speeds. When they are locked together, the friction material is held tightly against the flywheel, and they spin in sync. It's only when the clutch disc is slipping against the flywheel that wearing occurs. So, if you are the type of driver who slips the clutch a lot, you'll wear out your clutch a lot faster.

Sometimes the problem is not with slipping, but with sticking. If your clutch won't release properly, it will continue to turn the input shaft. This can cause grinding, or completely prevent your car from going into gear. Some common reasons a clutch may stick are:

• Broken or stretched clutch cable - The cable needs the right amount of tension to push and pull effectively.
• Leaky or defective slave and/or master clutch cylinders - Leaks keep the cylinders from building the necessary amount of pressure.
• Air in the hydraulic line - Air affects the hydraulics by taking up space the fluid needs to build pressure.
• Misadjusted linkage - When your foot hits the pedal, the linkage transmits the wrong amount of force.
• Mismatched clutch components - Not all aftermarket parts work with your clutch.

A "hard" clutch is also a common problem: all clutches require some force to depress fully, but if you have to press hard on the pedal, there may be something wrong. Sticking or binding in the pedal linkage, cable, cross shaft, or pivot ball are common causes. Sometimes a blockage or worn seals in the hydraulic system can also cause a hard clutch.

Another problem associated with clutches is a worn throw-out bearing, sometimes called a clutch release bearing. This bearing applies force to the fingers of the spinning pressure plate to release the clutch. If you hear a rumbling sound when the clutch engages, you might have a problem with the throw-out bearing.

Types of Clutches

There are many other types of clutches in your car and in your garage.

An automatic transmission contains several clutches. These clutches engage and disengage various sets of planetary gears. Each clutch is put into motion using pressurized hydraulic fluid. When the pressure drops, springs cause the clutch to release. Evenly spaced ridges, called splines, line the inside and outside of the clutch to lock into the gears and the clutch housing. You can read more about these clutches in How Automatic Transmissions Work.

An air conditioning compressor in a car has an electromagnetic clutch. This allows the compressor to shut off even while the engine is running. When current flows through a magnetic coil in the clutch, the clutch engages. As soon as the current stops, such as when you turn off your air conditioning, the clutch disengages.

Most cars that have an engine-driven cooling fan have a thermostatically controlled viscous clutch -- the temperature of the fluid actually drives the clutch. This clutch is positioned at the hub of the fan, in the airflow coming through the radiator. This type of clutch is a lot like the viscous coupling sometimes found in all-wheel drive cars. The fluid in the clutch gets thicker as it heats up, causing the fan to spin faster to catch up with the engine rotation. When the car is cold, the fluid in the clutch remains cold and the fan spins slowly, allowing the engine to quickly warm up to its proper operating temperature.

Many cars have limited slip differentials or viscous couplings, both of which use clutches to help increase traction.
When your car turns, one wheel spins faster than the other, which makes the car hard to handle. The slip differential makes up for that with the help of its clutch. When one wheel spins faster than the others, the clutch engages to slow it down and match the other three. Driving over puddles of water or patches of ice can also spin your wheels. You can learn more about differentials and viscous couplings in How Differentials Work.

Gas-powered chain saws and weed eaters have centrifugal clutches, so that the chains or strings can stop spinning without you having to turn off the engine. These clutches work automatically through the use of centrifugal force. The input is connected to the engine crankshaft. The output can drive a chain, belt or shaft. As the rotations per minute increase, weighted arms swing out and force the clutch to engage. Centrifugal clutches are also often found in lawn mowers, go-karts, mopeds and mini-bikes. Even some yo-yos are manufactured with centrifugal clutches.

Clutches are valuable and necessary to a number of applications. For more information on clutches and related topics, check out the links on the following page.

离合器工作原理

如果您驾驶手动变速箱的汽车,您可能会惊讶地发现,它有一个以上的离合器。
数据采集外文文献翻译中英文
数据采集外文文献翻译(含:英文原文及中文译文)

文献出处:Txomin Nieva. DATA ACQUISITION SYSTEMS [J]. Computers in Industry, 2013, 4(2): 215-237.

英文原文

DATA ACQUISITION SYSTEMS

Txomin Nieva

Data acquisition systems, as the name implies, are products and/or processes used to collect information to document or analyze some phenomenon. In the simplest form, a technician logging the temperature of an oven on a piece of paper is performing data acquisition. As technology has progressed, this type of process has been simplified and made more accurate, versatile, and reliable through electronic equipment. Equipment ranges from simple recorders to sophisticated computer systems. Data acquisition products serve as a focal point in a system, tying together a wide variety of products, such as sensors that indicate temperature, flow, level, or pressure. Some common data acquisition terms are shown below.

Data collection technology has made great progress in the past 30 to 40 years. For example, 40 years ago, in a well-known college laboratory, the device used to track the temperature rise of bronze in helium consisted of thermocouples, relays, interrogators, a bundle of paper, and a pencil. Today's university students are likely to process and analyze data automatically on PCs. There are many ways you can choose to collect data. The choice of method depends on many factors, including the complexity of the task, the speed and accuracy you need, the evidence you want, and more. Whether simple or complex, a data acquisition system can operate and play its role.

The old way of using pencil and paper is still feasible for some situations, and it is cheap, easy to obtain, and quick to start. All you need is a digital multimeter (DMM) to capture multiple channels of information and start recording data by hand. Unfortunately, this method is prone to errors, acquires data slowly, and requires too much human analysis. In addition, it can only collect data on a single channel; when you use a multi-channel DMM, the system soon becomes very bulky and clumsy. Accuracy depends on the skill of the writer, and you may need to scale readings yourself. For example, if the DMM is not equipped to handle a temperature sensor, you will have to look up a conversion scale. Given these limitations, it is an acceptable method only if you need to run a quick experiment.

Modern versions of the strip chart recorder allow you to retrieve data from multiple inputs. They provide long-term paper records of data, because the data is in graphic format, and they make it easy to collect data on site. Once a strip chart recorder has been set up, most recorders have enough internal intelligence to operate without an operator or computer. The disadvantages are the lack of flexibility and relatively low precision, often limited to a percentage point, and only small changes register in the pen trace. Recorders play a very good role in long-term, multi-channel monitoring, but beyond that their value is limited. For example, they cannot interact with other devices. Other concerns are the maintenance of pens and paper, the supply of paper, and the storage of data; the most important is the abuse and waste of paper. However, recorders are fairly easy to set up and operate, providing a permanent record of data for quick and easy analysis.

Some benchtop DMMs offer selectable scanning capabilities.
The back of the instrument has a slot to receive a scanner card that can be multiplexed for more inputs, typically 8 to 10 channels. This is inherently limited by the front panel of the instrument. Its flexibility is also limited because it cannot exceed the number of available channels. An external PC usually handles data acquisition and analysis.

PC plug-in cards are single-board measurement systems that use the ISA or PCI bus to occupy an expansion slot in a PC. They often have reading rates of up to 1,000 readings per second. Eight to 16 channels are common, and the collected data is stored directly in the computer and then analyzed. Because the card is essentially a part of the computer, it is easy to set up a test. PC cards are also relatively inexpensive, partly because they rely on the host PC to provide power, mechanical housing, and the user interface.

On the downside, PC plug-in cards often have only 12-bit resolution, so you can't detect small changes in the input signal. In addition, the electronic environment within a PC is often susceptible to noise, high clock rates, and bus noise, and the electronic contacts limit the accuracy of the PC card. These plug-in cards also measure only a limited range of voltages. To measure other input signals, such as voltage, temperature, and resistance, you may need external signal-conditioning devices. Other considerations include complex calibrations and overall system costs, especially if you need to purchase additional signal-conditioning devices or adapters for the card. Take this into account: if your needs fall within the capabilities and limitations of the card, the PC plug-in card provides an attractive method for data collection.

Electronic data loggers are typical stand-alone instruments that, once configured, can measure, record, and display data without the involvement of an operator or computer. They can handle multiple signal inputs, sometimes up to 120 channels. Their accuracy rivals that of the best benchtop DMMs, because they operate with 22-bit resolution, in the 0.004 percent accuracy range. Some electronic data loggers can scale measurements, check results against user-defined limits, and output control signals.

One of the advantages of electronic data loggers is their internal signal conditioning. Most can directly measure several different input signals without additional signal-conditioning devices: one channel can monitor thermocouples, RTDs, and voltages. Thermocouples provide valuable compensation for accurate temperature measurements. The loggers are typically equipped with multi-channel cards. Built-in intelligence helps you set the measurement period and specify the parameters for each channel. Once you have set everything up, the data logger behaves like an unattended device. The data it stores is held in memory, which can retain 500,000 or more readings.

Connecting to a PC makes it easy to transfer data to a computer for further analysis. Most electronic data loggers are designed to be flexible and simple to configure and operate, and most provide options for remote operation via battery packs or other means. Because of the A/D conversion technology used, certain electronic data loggers have a lower reading rate, especially when compared with PC plug-in cards; however, a reading rate of 250 readings per second is relatively rare. Keep in mind that many of the phenomena being measured are physical in nature, such as temperature, pressure, and flow, and generally change slowly. In addition, because of the monitoring accuracy of electronic data loggers, heavy averaging of readings is not necessary, as it often is with PC plug-in cards.
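As a toy illustration of what such a stand-alone logger does internally (entirely hypothetical — read_sensor is a stand-in for real hardware I/O, not a real driver API): sample several channels at a fixed period, apply per-channel scaling, check limits, and append the readings to memory.

```python
import time

def read_sensor(channel):
    # Stand-in for real hardware I/O (hypothetical).
    return 0.0

# Per-channel config: (name, scale m, offset b, low limit, high limit).
channels = {
    0: ("oven_temp_C", 100.0, 0.0, 20.0, 250.0),
    1: ("pressure_kPa", 50.0, 101.3, 90.0, 400.0),
}
log, period_s = [], 0.5

for _ in range(10):                       # 10 scans for the demo
    t = time.time()
    for ch, (name, m, b, lo, hi) in channels.items():
        value = m * read_sensor(ch) + b   # mx + b scaling
        alarm = not (lo <= value <= hi)   # limit checking
        log.append((t, name, value, alarm))
    time.sleep(period_s)
```

A real logger would add thermocouple linearization and cold-junction compensation per channel, but the scan-scale-check-store cycle is the same.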
Front-end data acquisition is often packaged as a module and is typically connected to a PC or controller. Front ends are used in automated tests to collect data and to control and cycle the stimulus signals of other test equipment, sending test signals to the equipment under test. The efficiency of front-end operation is very high, and they can match the speed and accuracy of the best stand-alone instruments. Front-end data acquisition comes in many forms, including VXI versions such as the Agilent E1419A multi-function measurement and control module, as well as proprietary card cages. Although the cost of front-end units has come down, these systems can be very expensive, and unless you need the high level of performance they provide, their price may be prohibitive. On the other hand, they do provide considerable flexibility and measurement capability.

Good, low-cost electronic data loggers have a suitable number of channels (20-60) and scan rates that are relatively low but sufficient for most engineering applications. Some of the key applications include:

• Product characterization
• Thermal profiling of electronic products
• Environmental testing
• Environmental monitoring
• Component characterization
• Battery testing
• Building and computer-capacity monitoring

A new system design

The conceptual model of a universal system can be applied to the analysis phase of a specific system to better understand the problem and to specify more easily the best solution based on the specific requirements of the particular system. The conceptual model of a universal system can also be used as a starting point for designing a specific system. Therefore, using a general-purpose conceptual model saves time and reduces the cost of specific system development. To test this hypothesis, we developed a DAS for railway equipment based on our generic DAS conceptual model. In this section, we summarize the main results and conclusions of this DAS development.

We analyzed the device model package. The result of this analysis is a partial conceptual model of a system consisting of a three-tier device model. We analyzed the equipment item package in the equipment context. Based on this analysis, we derived a three-level item hierarchy in the conceptual model of the system. Equipment items are specialized as individual equipment items.

We analyzed the equipment model monitoring standard package in the equipment context. One of the requirements of the system is the ability to record specific status monitoring reports using a predefined set of data. We analyzed the equipment item monitoring standard package in the equipment context. The requirements of the system are: (i) the ability to record condition monitoring reports and event monitoring reports corresponding to the items, which can be triggered by time-trigger conditions or event-trigger conditions; (ii) the definition of private and public monitoring standards; and (iii) the ability to define custom and predefined train data sets.
Therefore, we introduced "monitoring standards for equipment items", "public standards", "private standards", "equipment monitoring standards", "equipment condition monitoring standards", "equipment item condition monitoring standards" and "equipment item event monitoring standards", respectively. Train item trigger conditions, train item time-trigger conditions and train item event-trigger conditions are specializations of equipment item trigger conditions, equipment item time-trigger conditions and equipment item event-trigger conditions; and train item data sets, train custom data sets and train predefined data sets are specializations of equipment item data sets, custom data sets and predefined data sets.

Finally, we analyzed the observations and monitoring reports in the equipment context. The system's requirement is to record measurements and categorical observations. In addition, status and event monitoring reports can be recorded. Therefore, we introduced the concepts of observation, measurement, categorical observation and monitoring report into the conceptual model of the system.

Our generic DAS conceptual model played an important role in the design of the railway-equipment DAS. We used this model to better organize the data used by system components. The conceptual model also made it easier to design certain components of the system. As a result, our implementation contains a large number of design classes that represent the concepts specified in our generic DAS conceptual model. Through an industrial example, the development of this particular DAS demonstrates the usefulness of a generic system conceptual model for developing a specific system.

中文译文

数据采集系统

Txomin Nieva

数据采集系统,正如名字所暗示的,是一种用来采集信息成文件或分析一些现象的产品或过程。
外文文献翻译——胜任力模型研究
Research on Competency Model: A Literature Review and Empirical Studies

Abstract

Western countries have applied competency models to addressing problems in their administrative and managerial systems since the 1970s, and the findings are positive and promising. However, the competency model was not introduced to China until the 1990s, and it is still unknown and mysterious to many Chinese managers. This paper aims to lift the mysterious veil of the competency model in order to broaden the horizon of Chinese managers and boost China's human resource development as well as management.

Keywords: Competency, Competency Models, Empirical Studies of Competency Models

It has been more than 30 years since the competency model was first applied to human resource management. In western countries, the competency model first displayed its effectiveness in government administration; meanwhile many multinationals and their branch companies applied the competency model to their daily business management, and their businesses were a great success. As the notion of competency gradually came to light and was accepted by people all around the world, more and more enterprises have been trying to build their own competency models with the help of professional consulting firms. As a result, the competency model has gradually become a fashionable phrase in the field of management, and quite a few enterprises have benefited from it. In recent years, the competency model has also become a hot topic in Chinese academia as well as in large, medium and small-sized enterprises alike; many relevant papers and books have been translated and published. However, competency and the competency model are still mysterious to many Chinese scholars, business managers and government administrators.

Purpose and Significance of the Study

The purpose of the study is to make a critical literature review of the competency model, clarify some confusion related to it and explore its application. The following questions are employed to guide this study: What is competency? What is a competency model? What are the theoretical and empirical findings related to the competency model?

The study illustrates how we could take advantage of the competency model in building a harmonious society. On the one hand, the study delineates competency and the competency model in order to clarify confusion related to them, since they are still strange and mysterious to many Chinese managers and administrators; on the other hand, the study enriches Chinese HRD and HRM in the fields of government administration and business management, both theoretically and empirically.

Research Method

The present study utilizes qualitative analysis, induction and deduction. Since this research is in some sense a literature review, qualitative analysis is an indispensable research method; induction and deduction are applied to both theoretical and empirical studies.

In order to enhance the credibility of the present research, only authoritative publications on the competency model are reviewed, including books and papers written by foreign and Chinese scholars and HRD/HRM practitioners. By searching for the keywords "competency", "competency model" and "competency model building" as well as "empirical studies on competency models", books and papers written by well-known foreign scholars such as McClelland D. C., Lyle M. Spencer, Anntoinette D. Lucia, Richard Lepsinger, etc., are available; by the same token, books and papers written by Chinese scholars such as Zhi-gong He, Jianfeng Peng, Shaohua Fang, Nengquan Wu, etc., could be consulted.
All the books and papers were published between the 1950s and 2007. In addition, many data cited in this paper come from empirical studies at home and abroad.

Findings

In this part, a literature review of competency is first carried out; then the competency model as well as its evolution, development and innovation is delineated; finally, empirical studies are reviewed. Empirical studies mainly focus on competency model building and its application to human resource development and management.

Understanding Competency

In 1973, the American scholar David C. McClelland published his paper "Testing for Competence Rather Than Intelligence", which cited a large amount of research findings to illustrate the inappropriateness of assessing personnel qualities by abusing intelligence tests. Dr. McClelland further explained that some factors (personality, intelligence, values, etc.) which people had always taken for granted in determining work performance had not displayed the expected results. As a result, he emphasized that people should ignore those theoretical hypotheses and subjective judgements which had been proved groundless in reality. He declared that people should tap directly those factors and behaviors which could really impact performance (McClelland, 1973). These factors and behaviors were named "competency" by McClelland. The publication of this paper symbolized the debut of competency research. From then on, many scholars became involved in research on competency, and they conceptualized competency from different perspectives, as shown in the following table:

The above ten concepts of competency have a lot in common: ① Competency is motive, trait, value, skill, self-image, social role, knowledge; ② Competency is a combination; ③ Competency should be measurable, observable, instructional, phasic and hierarchical; ④ Competency is a determinant of outstanding performance.

Thus competency is an underlying combination of individual characteristics such as motive, inner drive, quality, attitude, social role, self-image, knowledge and skill; it is causally related to criterion-referenced effective and/or superior performance in a job or situation; and it is measurable, observable and instructional.

Besides, many scholars and consulting firms believe that competency can be explained with the help of three different models:

Iceberg Model. This model treats competency as an iceberg: the part above the water represents behavior, knowledge and skills, which are easy to measure and observe; the part under the water symbolizes underlying qualities such as values, attitudes, social roles, self-image and traits, which are hard to assess; and the deepest part under the water represents the most latent qualities, such as inner drive and social motives, which are the most difficult to observe and measure.

Onion Model. This model treats competency as an onion: the outer layer represents skills and knowledge, which are easy to acquire; the inner layer refers to qualities such as self-image, social role, attitude and values, which are relatively difficult to appraise; and the core of the onion symbolizes traits and motives, which are the most difficult to cultivate and develop.

Brain Model. This model stems from the brain mechanism. It presupposes that the brain can be divided into four parts, each functioning differently.
The upper-left part is in charge of competencies such as analytical capacity, calculation and strong logical ability; the upper-right part is in charge of competencies such as innovation and intuition; the bottom-left part is in charge of competencies such as organizing ability and planning ability; and the bottom-right part is in charge of competencies such as communication ability, perception, etc. Different parts exert corresponding influences on competency development.

Conceptualizations of Competency Model

Few foreign scholars have directly put forward conceptualizations of the competency model. By contrast, many Chinese scholars have expressed their opinions on it. The present paper cites only those concepts that have been published by authoritative publishing houses.

Jianfeng Peng, a professor at Renmin University of China, together with his students, has studied how to build competency models for effective HR management since 2003. He held that the competency model is the combination of different qualities which are necessary for people to finish a job successfully or achieve superior performance; these qualities include different motives, traits, self-images and social roles as well as knowledge and skills (Jianfeng Peng, 2003). Prof. Peng believed that a competency model is composed of 4-6 competencies that are closely related to performance. Competency models can help managers judge and distinguish the key factors that lead to superior performance or underperformance. As a result, the competency model can be treated as a foundation for improving performance.

Professor Nengquan Wu from Sun Yat-sen University published his book Competency Model: Design and Application in 2005. According to his understanding, the competency model refers to "proficiencies by which people define core competencies of different levels, delineate corresponding behaviors, determine key competencies and finish certain work" (Nengquan Wu, 2005). Prof. Wu conceptualized the competency model from the perspective of methodology. He believed that the competency model is a unique HRM thinking mode, method and operational flow. On the basis of organizational strategy, the competency model can be utilized to enhance organizational competitiveness and improve performance.

Shaohua Fang, a senior HRM consultant and expert, provides us with the following definition: "The competency model is to conceptualize and describe the necessary knowledge, skills, qualities and abilities which an employee should have in order to finish the work" (Shaohua Fang, 2007). By taking advantage of definitions at different levels and related behavioral descriptions, people can determine the combination of core competencies and the proficiency required to finish the work. He pointed out that these behaviors and skills must be measurable, observable and instructional, and that they should exert a great influence on personal performance and business success.

The International Human Resource Institute (IHRI) has also defined the competency model: "The so-called competency model is the standardized description and explanation of competencies that could actualize superior performance" (IHRI, 2005). IHRI declared that a competency model should include 6-12 competencies.

In summary, the first concept mentioned above attaches importance to the composition of the competency model and its function, while the other three concepts emphasize cognitive abilities as well as criterion-referenced performance.
Thus the competency model is a combination of different competencies which can be observed, delineated, explained and calculated on the one hand, and which facilitate superior performance on the other hand.

Development and Evolution of Competency Model

In early 1970, top officials in the U.S. Department of State believed that their diplomat selection based on intelligence tests was ineffective. It was upsetting for them to find that many seemingly excellent people failed to live up to expectations in their work performance. Under such circumstances, Dr. McClelland was invited to help the Department of State design an effective personnel selection system which could appraise the actual performance of employees. In that program, McClelland and his colleague Charles Dailey adopted the method of the Behavioral Event Interview (BEI) to collect information in order to study the factors that influenced the diplomats' performance. Through a series of summaries and analyses, McClelland and Dailey identified the differences between excellent diplomats and mediocre diplomats as far as their behaviors and modes of thinking were concerned. In this way, the competencies that a diplomat should possess were found. This program is the earliest empirical application of the competency model. The research findings were two papers: Improving Officer Selection for the Foreign Service (McClelland & Dailey, 1972) and Evaluating New Methods of Measuring the Qualities Needed in Superior Foreign Service Information Officers (McClelland & Dailey, 1973).

McBer and the American Management Association (AMA) also started their research on competency models in the same year. They focused on answering the question: what kinds of competencies are displayed by successful managers rather than unsuccessful ones? AMA spent 5 years observing 1,800 managers. By comparing the performance of excellent managers and mediocre ones, AMA defined their competencies based on their traits. The research results showed that all the successful managers shared the following five competencies: professional knowledge, mental maturity, entrepreneurial maturity, people relations and professional maturity. Of these, only professional knowledge was shared by both excellent and mediocre managers (McBer & AMA, 1970).

Then Prof. Bray carried out an 8-year study at AT&T based on the assessment center technique. From the perspectives of abilities, attitudes, traits, etc., he built a competency model composed of 25 competencies such as interpersonal relations, expression ability, social sensitivity, creativity, flexibility, organizational ability, planning ability, decision-making ability, etc. (Bray and Grant, 1978).

In China, however, research on competency models started relatively late. The Chinese scholars Chongming Wang and Minke Chen published their paper about the competency model in Psychological Science in 1992. They studied 220 senior and middle-level managers of 51 enterprises in 5 cities. After examining and testing the competency model for senior managers on the basis of factor analysis and structural equation modelling, they compiled the "Key Managerial Behavior Assessment Scale" (Chongming Wang & Minke Chen, 2002). Scholars such as Kan Shi, Jicheng Wang and Chaoping Li took advantage of the Behavioral Event Interview to assess the competency model for senior managers in the telecommunications industry (Kan Shi, Jicheng Wang & Chaoping Li, 2002).
Jicheng Wang designed five universal competency models for technical personnel, sales people, community service personnel, managers and entrepreneurs respectively. Jianfeng Peng and his postgraduate student Xiaojuan Xing built four universal competency models for business managers, business technical personnel, marketing personnel and HR managers (Jianfeng Peng, 2003).

The above domestic studies illustrate that competency models for middle-level and senior managers have been built based on in-depth interviews and questionnaires. Most publications focus only on conceptualizing the competency model, its development, the behavioral event interview and competency model building; most of the findings are theoretical rather than empirical. By contrast, foreign studies are much more mature both theoretically and empirically.

Empirical Studies

Empirical studies highlight the application of the competency model to enterprises, governments and other institutions. Nowadays, empirical studies on competency models mainly focus on the following four aspects:

Staffing and Selection. Besides job standards and skill prescriptions, more and more businesses carry out their personnel staffing and selection in light of the candidates' competencies, which are crucial to their future performance. This competency-based personnel staffing and selection connects business strategies and targets to the employees themselves. As a result, the quality of staffing and selection is greatly improved.

Performance Management. Businesses which have built their competency models are more interested in competency than in the result itself in their performance management. As a result, their performance management style has become competency-driven rather than result-driven. Managers attach importance not to short-term performance alone, but to current and long-term performance. In such a managerial system, outstanding performance is more easily actualized, and each employee makes the most of their core competencies and expertise to contribute to the business.

Compensation Management. After a competency-based compensation management system is set up, businesses concentrate on their employees' future development and potential value, which stimulates employees and managers of all ranks to improve themselves both mentally and technologically. A competency-based compensation management system helps enterprises attract and retain more talent. In a word, the competency model endows employees with a sense of respect and creativity.

Training and Development. Enterprises which have built their competency models tend to determine core competencies in light of business strategies, environments, employee development planning and performance appraisal. Enterprises decide their training and development priorities on the basis of the competency model.

Future Trends

Despite a growing body of literature on the competency model, research on it is still in a premature stage and many questions remain unanswered. Therefore, further research is required to address several important issues.

First of all, although there are growing studies on the impacts of the competency model on organizational outcomes, the antecedents of the competency model need to be identified and academically explored. Future studies are needed to examine the relationships between the features of the competency model and its key antecedent variables, such as organizational structure, leadership and external environment.
For example, it can be reasoned that the features of the competency model are likely to be positively correlated with the structures of enterprises, governments and other institutions. Secondly, the impact of the competency model on performance needs to be thoroughly explored. More studies are needed to examine whether the features of the competency model, or organizational culture, have direct or indirect impacts on organizational performance. While quite a few HRD and HRM researchers and practitioners have demonstrated that the concept of the competency model has a positive impact on organizational performance, such impact may be mediated by other important organizational variables. Finally, it is also important to consider the relationships between the competency model and other important HR variables, such as career development, managerial coaching and employee training.

Conclusions and Discussion

In conclusion, the competency model has exerted an increasingly profound influence on human resource development and management. As this concept has received increasing attention in both academic and management fields, there are more and more empirical studies designed to examine the nature of the construct and its relationships with other important organizational variables. More studies are needed to enhance the theoretical and empirical foundations of the competency model.

胜任力模型研究:文献综述和实证研究

摘要

20世纪70年代以来,西方国家已经利用胜任力模型来解决存在于行政和管理系统中的问题,其结果是积极且有前途的。
中英文文献翻译
Database Introduction and ACCESS 2000

The database is the latest technology of data management and an important branch of computer science. The database, as its name suggests, is a warehouse for preserving data — a warehouse that exists only in computer storage, in which data are deposited according to defined formats.
A database is a collection of data that is stored in a computer over the long term, is organized, and can be shared.
Data in a database are organized, described and stored according to a certain data model; they have low redundancy, high data independence and easy extensibility, and they can be shared by many kinds of users.
Effective management of a database generally requires a database management system (DBMS), which provides users with various commands, tools and methods for operating on the database, including creating the database and entering, modifying, retrieving, displaying, deleting and summarizing records.
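As a small illustration of those DBMS operations (using SQLite through Python's standard sqlite3 module rather than Access 2000, purely for portability; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # create a database (in memory for the demo)
cur = conn.cursor()
cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, score REAL)")

# Record input
cur.executemany("INSERT INTO student (name, score) VALUES (?, ?)",
                [("Zhang San", 88.5), ("Li Si", 92.0)])

# Modification, retrieval, deletion and statistics
cur.execute("UPDATE student SET score = 90.0 WHERE name = ?", ("Zhang San",))
cur.execute("SELECT name, score FROM student ORDER BY score DESC")
print(cur.fetchall())
cur.execute("DELETE FROM student WHERE score < 60")
cur.execute("SELECT COUNT(*), AVG(score) FROM student")
print(cur.fetchone())

conn.commit()
conn.close()
```

Access 2000 exposes the same categories of operation through its visual designers and through SQL/VBA; only the interface differs.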
船舶与海洋工程论文中英文资料外文翻译文献
中英文资料外文翻译文献

A Simple Prediction Formula of Roll Damping of Conventional Cargo Ships on the Basis of Ikeda's Method and Its Limitation

Since the roll damping of ships has significant effects of viscosity, it is difficult to calculate it theoretically. Therefore, experimental results or prediction methods are used to obtain the roll damping in the design stage of ships. Among the prediction methods, Ikeda's is widely used in many ship-motion computer programs. Using the method, the roll damping of various ship hulls with various bilge keels can be calculated to investigate its characteristics. To calculate the roll damping of each ship, detailed data of the ship are needed as input. Therefore, a simpler prediction method is desirable in the primary design stage. Such a simple method is also useful to validate the results obtained by a computer code that predicts roll damping on the basis of Ikeda's method. On the basis of the roll damping predicted by Ikeda's method for various ships, a very simple prediction formula of the roll damping of ships is deduced in the present paper. Ship hull forms are systematically changed by changing length, beam, draft, midship sectional coefficient and prismatic coefficient. It is found, however, that this simple formula cannot be used for ships that have a high position of the center of gravity. A modified method to improve accuracy for such ships is proposed.

Key words: Roll damping, simple prediction formula, wave component, eddy component, bilge keel component.

Introduction

In the 1970s, strip methods for predicting ship motions with five degrees of freedom in waves were established. The methods are based on potential flow theories (Ursell-Tasai method, source distribution method and so on), and can predict pitch, heave, sway and yaw motions of ships in waves with fairly good accuracy. For roll motion, however, the strip methods do not work well because of significant viscous effects on the roll damping. Therefore, empirical formulas or experimental data are used to predict the roll damping in the strip methods.

To improve the prediction of roll motions by these strip methods, one of the authors carried out a research project to develop a roll damping prediction method which has the same concept and the same order of accuracy as the strip methods, which are based on hydrodynamic forces acting on strips. A review of the prediction method was made by Himeno [5] and Ikeda [6,7], together with the computer program.

The prediction method, which is now called Ikeda's method, divides the roll damping into the frictional (BF), wave (BW), eddy (BE) and bilge keel (BBK) components at zero forward speed; at forward speed, the lift component (BL) is added. Increases of the wave and friction components due to advance speed are also corrected on the basis of experimental results. The roll damping coefficient B44 (= roll damping moment (kgf·m) / roll angular velocity (rad/sec)) can then be expressed as follows:

B44 = BF + BW + BE + BBK (+ BL at forward speed)    (1)

At zero forward speed, each component except the friction and lift components is predicted for each cross section of unit length, and the predicted values are summed along the ship length. The friction component is predicted by Kato's formula for a three-dimensional ship shape. Modification functions for predicting the forward-speed effects on the roll damping components are developed for the friction, wave and eddy components.
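A minimal sketch of the summation in Eq. (1) (our illustration only — the sectional values are placeholders, not output of Ikeda's actual component formulas, which require detailed hull data):

```python
def total_roll_damping(sections, dx, b_f, b_l=0.0):
    """Integrate sectional damping along the hull and add the 3-D terms (Eq. 1).

    sections: list of (b_w, b_e, b_bk) sectional damping values per unit length
    dx:       section spacing along the ship length [m]
    b_f:      3-D friction component (e.g., from Kato's formula)
    b_l:      lift component, nonzero only at forward speed
    """
    b_w = sum(s[0] for s in sections) * dx
    b_e = sum(s[1] for s in sections) * dx
    b_bk = sum(s[2] for s in sections) * dx
    return b_f + b_w + b_e + b_bk + b_l

# Placeholder sectional values for a short hull (hypothetical):
b44 = total_roll_damping([(0.2, 0.1, 0.3), (0.4, 0.2, 0.5), (0.2, 0.1, 0.3)],
                         dx=10.0, b_f=0.8)
```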
The computer program of the method was published, and the method has been widely used.

Over the past 30 years, the original Ikeda's method, developed for conventional cargo ships, has been improved so that it can be applied to many kinds of ships — for example, more slender and rounder ships, fishing boats, barges, ships with skegs and so on. The original method is also still widely used. Sometimes, however, different conclusions about roll motions were reached even though the same Ikeda's method was used in the calculations. Therefore, to check the accuracy of computer programs implementing Ikeda's method, a simpler prediction method with almost the same accuracy as Ikeda's original one has been expected. It is also said that in the design stages of ships, Ikeda's method is too complicated to use. To meet these needs, a simple roll damping prediction method was deduced by using regression analysis [8].

Previous Prediction Formula

The simple prediction formula proposed in the previous paper cannot be used for modern ships that have a high position of the center of gravity or a long natural roll period, such as large passenger ships with relatively flat hull shapes. In order to investigate its limitation, the authors compared the results of this prediction method with the original Ikeda's method outside its calculation limits. Fig. 1 shows the result of the comparison of the roll damping: the upper part is for the condition of a low center of gravity, and the lower part for a high center of gravity. From this figure, the roll damping estimated by the prediction formula is in good agreement with the roll damping calculated by Ikeda's method for a low position of the center of gravity, but the error margin grows for a high position of the center of gravity. The results suggest that the previous prediction formula needs to be revised.

Methodical Series Ships

A modified prediction formula will be developed on the basis of the results predicted by Ikeda's method for methodical series ships. The series ships are constructed based on the Taylor Standard Series, and their hull shapes are methodically changed by changing length, beam, draft, midship sectional coefficient and longitudinal prismatic coefficient. The geometries of the series ships are given by the following equations.

Proposal of New Prediction Method of Roll Damping

In this chapter, the characteristics of each component of the roll damping — the frictional, wave, eddy and bilge keel components at zero advance speed — are discussed, and a simple prediction formula for each component is developed. As is well known, the wave component of the roll damping for a two-dimensional cross section can be calculated by potential flow theories with fairly good accuracy. In Ikeda's method, the wave damping of a strip section is not calculated within the method itself; the values calculated by any potential flow theory are used as the wave damping. The reason why viscous effects are significant only in roll damping can be explained as follows. Fig. 4 shows the wave component of the roll damping for 2-D sections calculated by a potential flow theory.

Conclusions

A simple prediction method for the roll damping of ships is developed on the basis of Ikeda's original prediction method, which was developed with the same concept as the strip methods for calculating ship motions in waves. Using the data of a ship — B/d, Cb, Cm, OG/d, ω̂, bBK/B, lBK/Lpp, φa — the roll damping of a ship can be approximately predicted.
Moreover, the limit of application of Ikeda's prediction method to modern ships that have a buttock-flow stern is demonstrated by a model experiment. The computer program of the method can be downloaded from the home page of Ikeda's laboratory.

Acknowledgments

This work was supported by the Grant-in-Aid for Scientific Research of the Japan Society for the Promotion of Science (No. 18360415). The authors wish to express sincere appreciation to Prof. N. Umeda of Osaka University for valuable suggestions for this study.

References

[1] Y. Ikeda, Y. Himeno, N. Tanaka, On roll damping force of ship—Effects of friction of hull and normal force of bilge keels, Journal of the Kansai Society of Naval Architects 161 (1976) 41-49. (in Japanese)
[2] Y. Ikeda, K. Komatsu, Y. Himeno, N. Tanaka, On roll damping force of ship—Effects of hull surface pressure created by bilge keels, Journal of the Kansai Society of Naval Architects 165 (1977) 31-40. (in Japanese)
[3] Y. Ikeda, Y. Himeno, N. Tanaka, On eddy making component of roll damping force on naked hull, Journal of the Society of Naval Architects 142 (1977) 59-69. (in Japanese)
[4] Y. Ikeda, Y. Himeno, N. Tanaka, Components of roll damping of ship at forward speed, Journal of the Society of Naval Architects 143 (1978) 121-133. (in Japanese)
[5] Y. Himeno, Prediction of Ship Roll Damping—State of the Art, Report of Department of Naval Architecture & Marine Engineering, University of Michigan, No. 239, 1981.
[6] Y. Ikeda, Prediction Method of Roll Damping, Report of Department of Naval Architecture, University of Osaka Prefecture, 1982.
[7] Y. Ikeda, Roll damping, in: Proceedings of 1st Symposium of Marine Dynamics Research Group, Japan, 1984, pp. 241-250. (in Japanese)
[8] Y. Kawahara, Characteristics of roll damping of various ship types and a simple prediction formula of roll damping on the basis of Ikeda's method, in: Proceedings of the 4th Asia-Pacific Workshop on Marine Hydrodynamics, Taipei, China, 2008, pp. 79-86.
[9] Y. Ikeda, T. Fujiwara, Y. Himeno, N. Tanaka, Velocity field around ship hull in roll motion, Journal of the Kansai Society of Naval Architects 171 (1978) 33-45. (in Japanese)
[10] N. Tanaka, Y. Himeno, Y. Ikeda, K. Isomura, Experimental study on bilge keel effect for shallow draft ship, Journal of the Kansai Society of Naval Architects 180 (1981) 69-75. (in Japanese)

常规货船的横摇阻尼在池田方法基础上的一个简单预测方法及其局限性

摘要:由于粘性对船舶的横摇阻尼有显著影响,所以很难从理论上对其进行计算。
中英文双语外文文献翻译:一种基于ARIMA模型、BPNN模型对消费物价指数(CPI)进行预测的新型分治模型
A Novel Divide-and-Conquer Model for CPI Prediction Using ARIMA, Gray Model and BPNN

Abstract: This paper proposes a novel divide-and-conquer model for CPI prediction based on the existing compilation method of the Consumer Price Index (CPI) in China. The historical national CPI time series is first divided into eight sub-indexes: food; articles for smoking and drinking; clothing; household facilities, articles and maintenance services; health care and personal articles; transportation and communication; recreation, education and culture articles and services; and residence. Three models — a back-propagation neural network (BPNN) model, a grey forecasting model (GM(1,1)) and an autoregressive integrated moving average (ARIMA) model — are established to predict each sub-index. Then the best predicting result among the three models for each sub-index is identified. To further improve the performance, the predicting method is specially modified for the sub-CPIs whose forecasting results are not satisfactory enough. After improvement and error adjustment, we obtain the advanced predicting results of the sub-CPIs. Eventually, the best predicting results of each sub-index are integrated to form the forecasting result for the national CPI. Empirical analysis demonstrates that the accuracy and stability of the introduced method are better than those of many commonly adopted forecasting methods, which indicates that the proposed method is an effective alternative for national CPI prediction in China.

1. Introduction

The Consumer Price Index (CPI) is a widely used measurement of the cost of living. It not only affects government monetary, fiscal, consumption, price, wage and social security policy, but also relates closely to residents' daily life. As an indicator of inflation in the Chinese economy, the change of the CPI undergoes intense scrutiny. For instance, the People's Bank of China raised the deposit reserve ratio in January 2008 before the CPI of 2007 was announced, for it was estimated that the CPI in 2008 would increase significantly if no action were taken. Therefore, precisely forecasting the change of the CPI is significant to many aspects of economics; some examples include fiscal policy, financial markets and productivity. Building a stable and accurate model to forecast the CPI will also have great significance for the public, policymakers and research scholars.

Previous studies have proposed many methods and models to predict economic time series or indexes such as the CPI. Some previous studies make use of factors that influence the value of the index and forecast it by investigating the relationship between the data of those factors and the index. Such forecasts are realized by models like the vector autoregressive (VAR) model [1] and the genetic algorithm-support vector machine (GA-SVM) [2]. However, these factor-based methods, although effective to some extent, simply rely on the correlation between the value of the index and a limited number of exogenous variables (factors) and basically ignore the inherent rules of the variation of the time series. As a time series itself contains a significant amount of information [3], often more than a limited number of factors can provide, time series-based models are often more effective in the field of prediction than factor-based models.

Various time series models have been proposed to find the inherent rules of the variation in a series, and many researchers have applied different time series models to forecasting the CPI and other time series data.
For example, the ARIMA model once served as a practical method for predicting the CPI [4]. It was also applied to predict submicron particle concentrations from meteorological factors at a busy roadside in Hangzhou, China [5], and was adopted to analyse the trend of pre-monsoon rainfall data for western India [6]. Besides the ARIMA model, other models such as the neural network and the grey model are also widely used in the field of prediction. Hwang used the neural network to forecast time series corresponding to ARMA(p, q) structures and found that BPNNs generally perform well and consistently when a particular noise level is considered during the network training [7]. Aiken also used a neural network to predict the level of the CPI and reached a high degree of accuracy [8]. Apart from the neural network models, a seasonal discrete grey forecasting model for fashion retailing was proposed and found practical for fashion retail sales forecasting with short historical data, outperforming other state-of-the-art forecasting techniques [9]. Similarly, a discrete grey correlation model was also used in CPI prediction [10]. Ma et al. used a grey model optimized by the particle swarm optimization algorithm to forecast the iron ore import and consumption of China [11]. Furthermore, to deal with non-linear conditions, a modified radial basis function (RBF) network was proposed by researchers.

In this paper, we propose a new method called the "divide-and-conquer model" for the prediction of the CPI. We divide the total CPI into eight categories according to the CPI construction and then forecast the eight sub-CPIs using the GM(1,1) model, the ARIMA model and the BPNN. To further improve the performance, we again make predictions for the sub-CPIs whose forecasting results are not satisfactory enough by adopting new forecasting methods. After improvement and error adjustment, we obtain the advanced predicting results of the sub-CPIs. Finally we obtain the total CPI prediction by integrating the best forecasting results of each sub-CPI.

The rest of this paper is organized as follows. In Section 2, we give a brief introduction to the three models mentioned above. The proposed model is demonstrated in Section 3. In Section 4 we provide the forecasting results of our model, and in Section 5 we make special improvements by adjusting the forecasting methods of the sub-CPIs whose predicting results are not satisfactory enough. In Section 6 we give an elaborate discussion and evaluation of the proposed model. Finally, the conclusion is summarized in Section 7.

2. Introduction to GM(1,1), ARIMA & BPNN

Introduction to GM(1,1)

Grey system theory was first presented by Deng in the 1980s. In a grey forecasting model, the time series can be predicted accurately even with a small sample, by directly estimating the interrelation of the data. The GM(1,1) model is one widely adopted type of grey forecasting: it is a differential equation model in which both the order and the number of variables are 1. The differential equation is:

dx(1)/dt + a·x(1) = u

where x(1) is the accumulated (1-AGO) series generated from the original data, a is the development coefficient and u is the grey input.
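A compact sketch of GM(1,1) in Python/NumPy follows (our illustration of the standard algorithm, not the authors' code; the sample values are invented): accumulate the series, estimate a and u by ordinary least squares on the mean-generated sequence, then restore forecasts by inverse accumulation.

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """GM(1,1): forecast `horizon` steps beyond the series x0 (all values > 0)."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                     # 1-AGO accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])          # mean-generated (background) sequence

    # OLS for dx1/dt + a*x1 = u, discretized as x0(k) = -a*z1(k) + u.
    B = np.column_stack((-z1, np.ones(n - 1)))
    a, u = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

    # Time response function, then inverse accumulation (1-IAGO).
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - u / a) * np.exp(-a * k) + u / a
    x0_hat = np.diff(x1_hat, prepend=0.0)
    return x0_hat[-horizon:]

# Hypothetical CPI-like monthly values:
print(gm11_forecast([100.2, 100.8, 101.5, 102.1, 102.9], horizon=2))
```

The small-sample tolerance comes from the exponential time response: only two parameters (a, u) are fitted, so a handful of observations suffices.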
Introduction to ARIMA
The Autoregressive Integrated Moving Average (ARIMA) model was first put forward by Box and Jenkins in 1970. The model has been very successful by taking full advantage of past and present time series data. An ARIMA model is usually described as ARIMA(p, d, q), where p refers to the order of the autoregressive part, d to the degree of differencing (integration), and q to the order of the moving average part. When one of the three parameters is zero, the model reduces to an “AR”, “MA” or “ARMA” model. When none of the three parameters is zero, the model is given by:

$$\left(1-\sum_{i=1}^{p}\phi_{i}L^{i}\right)(1-L)^{d}X_{t}=\left(1+\sum_{j=1}^{q}\theta_{j}L^{j}\right)\varepsilon_{t}$$

where $L$ is the lag operator, $\phi_i$ and $\theta_j$ are the autoregressive and moving average coefficients, and $\varepsilon_t$ is the error term.

Introduction to BPNN
The Artificial Neural Network (ANN) is a mathematical and computational model which imitates the operation of the neural networks of the human brain. An ANN consists of several layers of neurons, and neurons of contiguous layers are connected with each other. The values of the connections between neurons are called “weights”. The Back Propagation Neural Network (BPNN) is one of the most widely employed neural networks among the various types of ANN. BPNN was put forward by Rumelhart and McClelland in 1985. It is a common supervised learning network well suited for prediction. A BPNN consists of three parts: one input layer, several hidden layers and one output layer, as demonstrated in Fig. 1. The learning process of a BPNN modifies the weights of the connections between neurons based on the deviation between the actual output and the target output, until the overall error is within an acceptable range.

Fig. 1. Back-propagation Neural Network

3. The Proposed Method
3.1. The framework of the dividing-integration model
The process of forecasting the national CPI using the dividing-integration model is demonstrated in Fig. 2.

Fig. 2. The framework of the dividing-integration model

As can be seen from Fig. 2, the process of the proposed method can be divided into the following steps:
Step 1: Data collection. The monthly CPI data, including the total CPI and the eight sub-CPIs, are collected from the official website of China's National Bureau of Statistics (/).
Step 2: Dividing the total CPI into eight sub-CPIs. In this step, the respective weight coefficients of the eight sub-CPIs in forming the total CPI are decided by consulting an authoritative source (/). The eight sub-CPIs are as follows: 1. Food CPI; 2. Articles for Smoking and Drinking CPI; 3. Clothing CPI; 4. Household Facilities, Articles and Maintenance Services CPI; 5. Health Care and Personal Articles CPI; 6. Transportation and Communication CPI; 7. Recreation, Education and Culture Articles and Services CPI; 8. Residence CPI. The weight coefficient of each sub-CPI is shown in Table 1.

Table 1. 8 sub-CPIs weight coefficient in the total index
Note: The index number stands for the corresponding type of sub-CPI mentioned before. Other indexes appearing in this paper in such form have the same meaning as this one.

So the decomposition formula is presented as follows:

$$TI=\sum_{i=1}^{8}w_{i}I_{i}$$

where $TI$ is the total index, $I_i$ ($i=1,2,\dots,8$) are the eight sub-CPIs and $w_i$ are the corresponding weight coefficients from Table 1. To verify the formula, we substitute the historical numeric CPI and sub-CPI values obtained in Step 1 into the formula and find that the formula is accurate.
Step 3: The construction of the GM(1,1) model, the ARIMA(p, d, q) model and the BPNN model. The three models are established to predict the eight sub-CPIs respectively.
Step 4: Forecasting the eight sub-CPIs using the three models mentioned in Step 3 and choosing the best forecasting result for each sub-CPI based on the errors of the data obtained from the three models.
Step 5: Making special improvements by adjusting the forecasting methods of the sub-CPIs whose predicting results are not satisfying enough, to get advanced predicting results of the total CPI.
Step 6: Integrating the best forecasting results of the 8 sub-CPIs to form the prediction of the total CPI with the decomposition formula from Step 2.
In this way, the whole process of the prediction by the dividing-integration model is accomplished.
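As a small illustration of Steps 2 and 6, the sketch below recombines sub-index forecasts into a total-CPI forecast with the decomposition formula; the weights and forecast values are placeholders, not the actual figures from Table 1.

```python
# Recombine sub-CPI forecasts into a total-CPI forecast (Step 6).
# The weights and forecast values below are placeholders, not the
# paper's actual Table 1 figures.
weights = {
    "food": 0.31, "smoking_drinking": 0.04, "clothing": 0.09,
    "household": 0.06, "health_care": 0.10, "transport_comm": 0.10,
    "recreation_education": 0.14, "residence": 0.16,
}
forecasts = {
    "food": 103.1, "smoking_drinking": 100.4, "clothing": 101.2,
    "household": 100.9, "health_care": 101.0, "transport_comm": 99.8,
    "recreation_education": 100.6, "residence": 102.3,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights must sum to 1
total_cpi = sum(weights[k] * forecasts[k] for k in weights)  # TI = sum of w_i * I_i
print(f"Integrated total CPI forecast: {total_cpi:.2f}")
```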
3.2. The construction of the GM(1,1) model
The process of the GM(1,1) model is represented in the following steps:
Step 1: Form the original sequence

$$x^{(0)}=\left(x^{(0)}(1),x^{(0)}(2),\dots,x^{(0)}(n)\right)$$

and its accumulated generating sequence $x^{(1)}(k)=\sum_{i=1}^{k}x^{(0)}(i)$.
Step 2: Estimate the parameters $a$ and $u$ using ordinary least squares (OLS).
Step 3: Solve the whitening equation to obtain the time response

$$\hat{x}^{(1)}(k+1)=\left(x^{(0)}(1)-\frac{u}{a}\right)e^{-ak}+\frac{u}{a},$$

and recover the forecasts by the inverse accumulation $\hat{x}^{(0)}(k+1)=\hat{x}^{(1)}(k+1)-\hat{x}^{(1)}(k)$.
Step 4: Test the model using the variance ratio and the small error probability.

The construction of the ARIMA model
Firstly, the ADF unit root test is used to test the stationarity of the time series. If the initial time series is not stationary, a differencing transformation of the data is necessary to make it stationary. Then the values of p and q are determined by observing the autocorrelation graph, the partial autocorrelation graph and the R-squared value. After the model is built, an additional check should be made through hypothesis testing to guarantee that the residual error is white noise. Finally, the model is used to forecast the future trend of the variable.

The construction of the BPNN model
The first step is to decide the basic structure of the BP neural network. After experiments, we consider 3 input nodes and 1 output node to be the best for the BPNN model. This means we use the CPI data of times t-3, t-2 and t-1 to forecast the CPI of time t. The hidden layer level and the number of hidden neurons should also be defined. Since a single-hidden-layer BPNN is very good at nonlinear mapping, this structure is adopted in this paper. Based on the Kolmogorov theorem and testing results, we define 5 to be the best number of hidden neurons. Thus the 3-5-1 BPNN structure is determined. As for the transfer function and the training algorithm, we select ‘tansig’ as the transfer function for the middle layer, ‘logsig’ for the input layer and ‘traingd’ as the training algorithm. The selection is based on the actual performance of these functions, as there are no existing standards to decide which ones are definitely better than others. Eventually, we set the number of training iterations to 35000 and the goal, i.e. the acceptable error, to 0.01.

4. Empirical Analysis
CPI data from Jan. 2012 to Mar. 2013 are used to build the three models, and the data from Apr. 2013 to Sept. 2013 are used to test the accuracy and stability of these models. Moreover, the mean absolute percentage error (MAPE) is adopted to evaluate the performance of the models. The MAPE is calculated by the equation:

$$\mathrm{MAPE}=\frac{1}{n}\sum_{t=1}^{n}\left|\frac{y_{t}-\hat{y}_{t}}{y_{t}}\right|$$

where $y_t$ is the actual value and $\hat{y}_t$ is the forecast value.

Data source
An appropriate empirical analysis based on the above discussion can be performed using suitably disaggregated data. We collect the monthly data of the sub-CPIs from the website of the National Bureau of Statistics of China (/). In particular, sub-CPI data from Jan. 2012 to Mar. 2013 are used to build the three models, and the data from Apr. 2013 to Sept. 2013 are used to test the accuracy and stability of these models.

Experimental results
We use MATLAB to build the GM(1,1) model and the BPNN model, and EViews 6.0 to build the ARIMA model. The relative predicting errors of the sub-CPIs are shown in Table 2.

Table 2. Error of Sub-CPIs of the 3 Models

From the table above, we find that the performance of the different models varies a lot, because the characteristics of the sub-CPIs are different. Some sub-CPIs, like the Food CPI, change drastically with time, while others, like the Clothing CPI, do not fluctuate much. We use different models to predict the sub-CPIs and combine them by equation (7):

$$Y=\sum_{i=1}^{8}w_{i}\hat{I}_{i}\qquad(7)$$

where $Y$ refers to the predicted rate of the total CPI, $w_i$ is the weight of the i-th sub-CPI, which has already been shown in Table 1, and $\hat{I}_i$ is the predicted value of the i-th sub-CPI from whichever of the three models mentioned above has the minimum error.
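A minimal sketch of the evaluation-and-selection step described above (compute each candidate model's MAPE on the test window and keep the best model per sub-index); the actual and forecast arrays are placeholders, not the paper's data.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, as in the evaluation equation above."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((predicted - actual) / actual))

# Placeholder test-window data for one sub-CPI (Apr.-Sept., 6 points).
actual = np.array([102.1, 102.4, 102.7, 102.3, 102.6, 103.0])
candidates = {                        # forecasts from the three models
    "GM(1,1)": np.array([102.0, 102.2, 102.5, 102.8, 103.0, 103.2]),
    "ARIMA":   np.array([102.2, 102.5, 102.6, 102.4, 102.7, 102.9]),
    "BPNN":    np.array([102.4, 102.1, 102.9, 102.0, 102.8, 103.3]),
}
errors = {name: mape(actual, pred) for name, pred in candidates.items()}
best = min(errors, key=errors.get)    # the model kept for this sub-index
print(errors, "->", best)
```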
The model chosen for each sub-CPI is demonstrated in Table 3:

Table 3. The model used to forecast

After calculating, the error of the total CPI forecast by the dividing-integration model is 0.0034.

5. Model Improvement & Error Adjustment
As we can see from Table 3, the prediction errors of the sub-CPIs are mostly below 0.004, except for two sub-CPIs: the Food CPI, whose error reaches 0.0059, and the Transportation & Communication CPI, at 0.0047. In order to further improve our forecasting results, we reduce the prediction errors of the two aforementioned sub-CPIs by adopting other forecasting methods or models to predict them. The specific methods are as follows.

Error adjustment of the Food CPI
In the previous prediction, we predicted the Food CPI using the BPNN model directly. However, the BPNN model is not sensitive enough to capture the variation in the values of the data. For instance, although the Food CPI varies a lot from month to month, its forecast values are nearly all around 103.5, which fails to make a meaningful prediction.

We ascribe this problem to a feature of the training data. As we can see from the original sub-CPI data on the website of the National Bureau of Statistics of China, nearly all values of the sub-CPIs are around 100. As for the Food CPI, although it does have larger absolute variations than the others, its changes are still very small relative to the large magnitude of the data (100). Thus it is more difficult for the BPNN model to detect the rules of variation in the training data, and the forecasting results are marred.

Therefore, we use the first-order difference series of the Food CPI instead of the original series to magnify the relative variation of the series forecast by the BPNN. The training data and testing data are the same as in the previous prediction. The parameters and functions of the BPNN are automatically decided by the software, SPSS. We run 100 tests and find that the average forecasting error of the Food CPI by this method is 0.0028. Part of the forecasting errors in our tests is shown in Table 4:

Table 4. The forecasting errors in the BPNN test

Error adjustment of the Transportation & Communication CPI
We use the Moving Average (MA) model to make a new prediction of the Transportation and Communication CPI, because the curve of the series is quite smooth, with only a few fluctuations. We have the following equation:

$$S_{t}=\alpha X_{t}+(1-\alpha)S_{t-1}$$

where $X_1, X_2, \dots, X_n$ is the time series of the Transportation and Communication CPI, $S_t$ is the value of the moving average at time $t$, and $\alpha$ is a free parameter which should be decided through experiment. To get the optimal model, we range the value of $\alpha$ from 0 to 1. Finally, we find that when the value of $\alpha$ is 0.95, the forecasting error is the smallest, at 0.0039. The predicting outcomes are shown in Table 5:

Table 5. The Predicting Outcomes of the MA model

Advanced results after adjustment to the models
After making these adjustments to our previous model, we obtain the advanced results shown in Table 6:

Table 6. The model used to forecast and the Relative Error

After calculating, the error of the total CPI forecast by the dividing-integration model is 0.2359.
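The smoothing recursion reconstructed above lends itself to a simple grid search over α, mirroring the paper's 0-to-1 scan; the sketch below uses a placeholder series and assumes the one-step-ahead forecast for X_t is the previous smoothed value S_{t-1}.

```python
import numpy as np

def exp_smooth_forecast_error(series, alpha):
    """One-step-ahead smoothing forecasts and their mean relative error.

    Implements S_t = alpha * X_t + (1 - alpha) * S_{t-1}, the recursion
    reconstructed above; the forecast for X_t is assumed to be S_{t-1}.
    """
    s = series[0]
    errors = []
    for x in series[1:]:
        errors.append(abs((s - x) / x))   # forecast for x is the previous s
        s = alpha * x + (1 - alpha) * s   # update the smoothed value
    return np.mean(errors)

# Placeholder Transportation & Communication CPI series.
series = [99.9, 100.0, 99.8, 99.9, 100.1, 100.0, 99.9]
alphas = np.linspace(0.05, 0.95, 19)
best_alpha = min(alphas, key=lambda a: exp_smooth_forecast_error(series, a))
print(f"best alpha = {best_alpha:.2f}")
```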
6. Further Discussion
To validate the dividing-integration model proposed in this paper, we compare the results of our model with the forecasting results of models that do not adopt the dividing-integration method. For instance, we use the ARIMA model, the GM(1,1) model, the SARIMA model, the RBF neural network (RBFNN) model, the Verhulst model and the Vector Autoregression (VAR) model, respectively, to forecast the total CPI directly, without the process of decomposition and integration. The forecasting results are shown in Table 7.

From Table 7, we come to the conclusion that the introduction of the dividing-integration method enhances the accuracy of prediction to a great extent. The results of the model comparison indicate that the proposed method is not only novel but also valid and effective.

The strengths of the proposed forecasting model are obvious. Every sub-CPI time series has different fluctuation characteristics. Some are relatively volatile and have sharp fluctuations, such as the Food CPI, while others are relatively gentle and quiet, such as the Clothing CPI. As a result, by dividing the total CPI into several sub-CPIs, we are able to make use of the characteristics of each sub-CPI series and choose the best forecasting model among several models for every sub-CPI's prediction. Moreover, the overall prediction error is given by the following formula:

$$TE=\sum_{i=1}^{8}w_{i}e_{i}$$

where $TE$ refers to the overall prediction error of the total CPI, $w_i$ is the weight of the sub-CPI shown in Table 1 and $e_i$ is the forecasting error of the corresponding sub-CPI.

In conclusion, the dividing-integration model aims at minimizing the overall prediction error by minimizing the forecasting errors of the sub-CPIs.

7. Conclusions and future work
This paper creatively transforms the forecasting of the national CPI into the forecasting of 8 sub-CPIs. In the prediction of the 8 sub-CPIs, we adopt three widely used models: the GM(1,1) model, the ARIMA model and the BPNN model. Thus we can obtain the best forecasting results for each sub-CPI. Furthermore, we make special improvements by adjusting the forecasting methods of the sub-CPIs whose predicting results are not satisfying enough and get advanced predicting results for them. Finally, the advanced predicting results of the 8 sub-CPIs are integrated to form the forecasting results of the total CPI.

Nevertheless, the proposed method also has several weaknesses and needs improvement. Firstly, the proposed model only uses the information of the CPI time series itself. If the model could make use of other information, such as that provided by factors which greatly impact the fluctuation of the sub-CPIs, we have every reason to believe that the accuracy and stability of the model could be enhanced. For instance, the price of pork is a major factor in shaping the Food CPI. If this factor were taken into consideration in the prediction of the Food CPI, the forecasting results would probably improve to a great extent. Second, since these models forecast the future by looking at the past, they are not able to sense sudden or recent changes in the environment. So if the model could take web news or quick public reactions into account, it would react much faster to sudden incidents and events. Finally, the performance of the sub-CPI predictions can be further improved. In this paper we use GM(1,1), ARIMA and BPNN to forecast the sub-CPIs. Some new methods for prediction could be used. For instance, besides BPNN, there are other neural networks, like the genetic algorithm neural network (GANN) and the wavelet neural network (WNN), which might perform better in the prediction of sub-CPIs. Other methods, such as the VAR model and the SARIMA model, should also be taken into consideration so as to enhance the accuracy of prediction.
References
1. Wang W, Wang T, Shi Y. Factor analysis on consumer price index rising in China from 2005 to 2008. Management and Service Science 2009; p. 1-4.
2. Qin F, Ma T, Wang J. The CPI forecast based on GA-SVM. Information Networking and Automation 2010; p. 142-147.
3. George EPB, Gwilym MJ, Gregory CR. Time series analysis: forecasting and control. 4th ed. Canada: Wiley; 2008.
4. Weng D. The consumer price index forecast based on ARIMA model. WASE International Conference on Information Engineering 2010; p. 307-310.
5. Jian L, Zhao Y, Zhu YP, Zhang MB, Bertolatti D. An application of ARIMA model to predict submicron particle concentrations from meteorological factors at a busy roadside in Hangzhou, China. Science of the Total Environment 2012;426:336-345.
6. Priya N, Ashoke B, Sumana S, Kamna S. Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India. Comptes Rendus Geoscience 2013;345:22-27.
7. Hwang HB. Insights into neural-network forecasting of time series corresponding to ARMA(p, q) structures. Omega 2001;29:273-289.
8. Milam A. Using a neural network to forecast inflation. Industrial Management & Data Systems 1999;7:296-301.
9. Min X, Wong WK. A seasonal discrete grey forecasting model for fashion retailing. Knowledge-Based Systems 2014;57:119-126.
11. Weimin M, Xiaoxi Z, Miaomiao W. Forecasting iron ore import and consumption of China using grey model optimized by particle swarm optimization algorithm. Resources Policy 2013;38:613-620.
12. Zhen D, Feng S. A novel DGM(1,1) model for consumer price index forecasting. Grey Systems and Intelligent Services (GSIS) 2009; p. 303-307.
13. Yu W, Xu D. Prediction and analysis of Chinese CPI based on RBF neural network. Information Technology and Applications 2009;3:530-533.
14. Zhang GP. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing 2003;50:159-175.
15. Pai PF, Lin CS. A hybrid ARIMA and support vector machines model in stock price forecasting. Omega 2005;33(6):497-505.
16. Tseng FM, Yu HC, Tzeng GH. Combining neural network model with seasonal time series ARIMA model. Technological Forecasting and Social Change 2002;69(1):71-87.
17. Cho MY, Hwang JC, Chen CS. Customer short-term load forecasting by using ARIMA transfer function model. Energy Management and Power Delivery, Proceedings of EMPD '95, 1995 International Conference, IEEE, 1995;1:317-322.

译文:一种基于ARIMA、灰色模型和BPNN对CPI(消费物价指数)进行预测的新型分治模型

摘要:在本文中,利用我国现有的消费者价格指数(CPI)的计算方法,提出了一种新的CPI预测分治模型。
建筑信息模型BIM中英文对照外文翻译文献
(文档含英文原文和中文翻译)中英文翻译外文文献:

Changing roles of the clients, architects and contractors through BIM

Abstract
Purpose – This paper aims to present a general review of the practical implications of building information modelling (BIM) based on literature and case studies. It seeks to address the necessity of applying BIM and re-organising the processes and roles in hospital building projects. This type of project is complex due to complicated functional and technical requirements, decision making involving a large number of stakeholders, and long-term development processes.
Design/methodology/approach – Through desk research and reference to the ongoing European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Through several real cases, the changing roles of clients, architects and contractors through BIM application are investigated.
Findings – One of the main findings is the identification of the main factors for successful collaboration using BIM, which can be recognised as “POWER”: product information sharing (P), organisational roles synergy (O), work processes coordination (W), environment for teamwork (E), and reference data consolidation (R). Furthermore, it is also found that the implementation of BIM in hospital building projects is still limited due to certain commercial and legal barriers, as well as the fact that integrated collaboration has not yet been embedded in the real estate strategies of healthcare institutions.
Originality/value – This paper contributes to the current discussion in science and practice on the changing roles and processes that are required to develop and operate sustainable buildings with the support of integrated ICT frameworks and tools. It presents the state of the art of European research projects and some of the first real cases of BIM application in hospital building projects.
Keywords: Europe, Hospitals, The Netherlands, Construction works, Response flexibility, Project planning
Paper type: General review

1. Introduction
Hospital building projects are of key importance, involve significant investment, and usually take a long development period. Hospital building projects are also very complex due to the complicated requirements regarding hygiene, safety, special equipment, and the handling of a large amount of data. The building process is very dynamic and comprises iterative phases and intermediate changes. Many actors with shifting agendas, roles and responsibilities are actively involved, such as: the healthcare institutions, national and local governments, project developers, financial institutions, architects, contractors, advisors, facility managers, and equipment manufacturers and suppliers. Such building projects are very much influenced by healthcare policy, which changes rapidly in response to medical, societal and technological developments, and varies greatly between countries (World Health Organization, 2000). In The Netherlands, for example, the way a building project in the healthcare sector is organised is undergoing a major reform due to a fundamental change in the Dutch health policy that was introduced in 2008.

The rapidly changing context creates a need for buildings that are flexible over their lifecycle. In order to incorporate life-cycle considerations in the building design, construction technique, and facility management strategy, multidisciplinary collaboration is required.
Despite the attempts to establish integrated collaboration, healthcare building projects still face serious problems in practice, such as budget overruns, delays, and sub-optimal quality in terms of flexibility, end-user dissatisfaction, and energy inefficiency. It is evident that the lack of communication and coordination between the actors involved in the different phases of a building project is among the most important reasons behind these problems. The communication between different stakeholders becomes critical, as each stakeholder possesses a different set of skills. As a result, the processes of extracting, interpreting, and communicating complex design information from drawings and documents are often time-consuming and difficult. Advanced visualisation technologies, like 4D planning, have tremendous potential to increase the communication efficiency and interpretation ability of the project team members. However, their use as an effective communication tool is still limited and not fully explored (Dawood and Sikka, 2008). There are also other barriers to information transfer and integration, for instance: many existing ICT systems do not support the openness of data and structure that is a prerequisite for effective collaboration between different building actors or disciplines.

Building information modelling (BIM) offers an integrated solution to the previously mentioned problems. Therefore, BIM is increasingly used as ICT support in complex building projects. Effective multidisciplinary collaboration supported by an optimal use of BIM requires changing roles of the clients, architects, and contractors; new contractual relationships; and re-organised collaborative processes. Unfortunately, there are still gaps in the practical knowledge of how to manage the building actors so that they collaborate effectively in their changing roles, and how to develop and utilise BIM as optimal ICT support for the collaboration.

This paper presents a general review of the practical implications of BIM based on a literature review and case studies. In the next sections, based on the literature and recent findings from the European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Subsequently, through the observation of two ongoing pilot projects in The Netherlands, the changing roles of clients, architects, and contractors through BIM application are investigated. In conclusion, the critical success factors as well as the main barriers to successful integrated collaboration using BIM are identified.

2. Changing roles through integrated collaboration and life-cycle design approaches
A hospital building project involves various actors, roles, and knowledge domains. In The Netherlands, the changing roles of clients, architects, and contractors in hospital building projects are inevitable due to the new healthcare policy. Previously, under the Healthcare Institutions Act (WTZi), healthcare institutions were required to obtain both a license and a building permit for new construction projects and major renovations. The permit was issued by the Dutch Ministry of Health. The healthcare institutions were then eligible to receive financial support from the government. Since 2008, new legislation on the management of hospital building projects and real estate has come into force.
In this new legislation, a permit for a hospital building project under the WTZi is no longer obligatory, nor obtainable (Dutch Ministry of Health, Welfare and Sport, 2008). This change allows more freedom from state-directed policy and, respectively, allocates more responsibility to the healthcare organisations to deal with the financing and management of their real estate. The new policy implies that the healthcare institutions are fully responsible for managing and financing their building projects and real estate. The government's support for the costs of healthcare facilities will no longer be given separately, but will be included in the fee for healthcare services. This means that healthcare institutions must earn back their investment in real estate through their services. This new policy is intended to stimulate sustainable innovations in the design, procurement and management of healthcare buildings, which will contribute to effective and efficient primary healthcare services.

The new strategy for building projects and real estate management endorses an integrated collaboration approach. In order to assure sustainability during construction, use, and maintenance, the end-users, facility managers, contractors and specialist contractors need to be involved in the planning and design processes. The implications of the new strategy are reflected in the changing roles of the building actors and in the new procurement method.

In the traditional procurement method, the design and its details are developed by the architect and the design engineers. Then, the client (the healthcare institution) sends an application to the Ministry of Health to obtain approval of the building permit and the financial support from the government. Following this, a contractor is selected through a tender process that emphasises the search for the lowest-price bidder. During the construction period, changes often take place due to constructability problems in the design and new requirements from the client. Because of the high level of technical complexity, and moreover, the decision-making complexity, the whole process from initiation until delivery of a hospital building project can take up to ten years. After the delivery, the healthcare institution is fully in charge of the operation of the facilities. Redesigns and changes also take place in the use phase to cope with new functions and developments in the medical world (van Reedt Dortland, 2009).

Integrated procurement pictures a new contractual relationship between the parties involved in a building project. Instead of a relationship between the client and the architect for design, and between the client and the contractor for construction, in integrated procurement the client only holds a contractual relationship with the main party that is responsible for both design and construction (Joint Contracts Tribunal, 2007). The traditional borders between tasks and occupational groups become blurred, since architects, consulting firms, contractors, subcontractors, and suppliers all stand on the supply side of the building process while the client stands on the demand side. Such a configuration puts the architect, engineer and contractor in a very different position that influences not only their roles, but also their responsibilities, tasks and communication with the client, the users, the team and other stakeholders.

The transition from the traditional to the integrated procurement method requires a shift of mindset of the parties on both the demand and supply sides.
It is essential for the client and contractor to have a fair and open collaboration in which both can optimally use their competencies. The effectiveness of integrated collaboration is also determined by the client's capacity and strategy to organise innovative tendering procedures (Sebastian et al., 2009).

A new challenge emerges when the architect is positioned in a partnership with the contractor instead of with the client. When the architect enters a partnership with the contractor, an important issue is how to ensure the realisation of the architectural values as well as innovative engineering through an efficient construction process. In another case, the architect can stand at the client's side in a strategic advisory role instead of being the designer. In this case, the architect's responsibility is to translate the client's requirements and wishes into the architectural values to be included in the design specification, and to evaluate the contractor's proposal against these. In either of these new roles, the architect holds the responsibilities of stakeholder interest facilitator, custodian of customer value and custodian of design models.

The transition from the traditional to the integrated procurement method also has consequences for the payment schemes. In the traditional building process, the honorarium for the architect is usually based on a percentage of the project costs; this may simply mean that the more expensive the building is, the higher the honorarium will be. The engineer receives the honorarium based on the complexity of the design and the intensity of the assignment. A highly complex building, which requires a number of redesigns, is usually favourable for the engineers in terms of honorarium. A traditional contractor usually receives the commission based on the tender to construct the building at the lowest price while meeting the minimum specifications given by the client. Extra work due to modifications is charged separately to the client. After the delivery, the contractor is no longer responsible for the long-term use of the building. In the traditional procurement method, all risks are placed with the client.

In the integrated procurement method, the payment is based on the achieved building performance; thus, the payment is non-adversarial. Since the architect, engineer and contractor have a wider responsibility for the quality of the design and the building, the payment is linked to a measurement system of the functional and technical performance of the building over a certain period of time. The honorarium becomes an incentive to achieve optimal quality. If the building actors succeed in delivering a higher added value that exceeds the client's minimum requirements, they will receive a bonus in accordance with the client's extra gain. The level of transparency is also improved. Open-book accounting is an excellent instrument, provided that the stakeholders agree on the information to be shared and on its level of detail (InPro, 2009).

Next to the adoption of the integrated procurement method, the new real estate strategy for hospital building projects addresses innovative product development and life-cycle design approaches. A sustainable business case for the investment and exploitation of hospital buildings relies on dynamic life-cycle management that includes considerations and analysis of the market development over time next to the building life-cycle costs (investment/initial cost, operational cost, and logistic cost).
Compared to the conventional life-cycle costing method, dynamic life-cycle management encompasses a shift from focusing only on minimising the costs to focusing on maximising the total benefit that can be gained. One of the determining factors for a successful implementation of dynamic life-cycle management is the sustainable design of the building and building components, which means that the design carries sufficient flexibility to accommodate possible changes in the long term (Prins, 1992).

Designing based on the principles of life-cycle management affects the role of the architect, as he needs to be well informed about the usage scenarios and related financial arrangements, the changing social and physical environments, and new technologies. Design needs to integrate people's activities and business strategies over time. In this context, the architect is required to align the design strategies with the organisational, local and global policies on finance, business operations, health and safety, the environment, etc. (Sebastian et al., 2009).

The combination of process and product innovation and the changing roles of the building actors can be accommodated by integrated project delivery, or IPD (AIA California Council, 2007). IPD is an approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to reduce waste and optimise efficiency through all phases of design, fabrication and construction. IPD principles can be applied to a variety of contractual arrangements. IPD teams will usually include members well beyond the basic triad of client, architect, and contractor. At a minimum, though, an Integrated Project should include tight collaboration between the client, the architect, and the main contractor ultimately responsible for the construction of the project, from the early design until the project handover. The key to successful IPD is assembling a team that is committed to collaborative processes and is capable of working together effectively. IPD is built on collaboration. As a result, it can only be successful if the participants share and apply common values and goals.

3. Changing roles through BIM application
Building information modelling (BIM) comprises ICT frameworks and tools that can support integrated collaboration based on a life-cycle design approach. BIM is a digital representation of the physical and functional characteristics of a facility. As such, it serves as a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its lifecycle from inception onward (National Institute of Building Sciences, NIBS, 2007). BIM facilitates time- and place-independent collaborative working. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of each stakeholder. BIM in its ultimate form, as a shared digital representation founded on open standards for interoperability, can become a virtual information model to be handed from the design team to the contractor and subcontractors and then to the client (Sebastian et al., 2009).

BIM is not the same as the earlier known computer-aided design (CAD). BIM goes further than an application to generate digital (2D or 3D) drawings (Bratton, 2009).
BIM is an integrated model in which all process and product information is combined, stored, elaborated, and interactively distributed to all relevant building actors. As a central model for all involved actors throughout the project life cycle, BIM develops and evolves as the project progresses. Using BIM, the proposed design and engineering solutions can be measured against the client's requirements and the expected building performance. The functionalities of BIM to support the design process extend to multidimensional (nD) modelling, including: three-dimensional visualisation and detailing, clash detection, material scheduling, planning, cost estimation, production and logistic information, and as-built documents. During the construction process, BIM can support the communication between the building site, the factory and the design office, which is crucial for effective and efficient prefabrication and assembly processes as well as for preventing or solving problems related to unforeseen errors or modifications. When the building is in use, BIM can be used in combination with intelligent building systems to provide and maintain up-to-date information on the building performance, including the life-cycle cost.

To unleash the full potential of more efficient information exchange in the AEC/FM industry in collaborative working using BIM, both high-quality open international standards and high-quality implementations of these standards must be in place. The IFC open standard is generally agreed to be of high quality and is widely implemented in software. Unfortunately, the certification process allows poor-quality implementations to be certified, which essentially renders the certified software useless for any practical usage with IFC. IFC-compliant BIM is actually used less than manual drafting by architects and contractors, and shows about the same usage for engineers. A recent survey shows that CAD (as a closed system) is still the major form of technique used in design work (over 60 per cent), while BIM is used in around 20 per cent of projects by architects and in around 10 per cent of projects by engineers and contractors (Kiviniemi et al., 2008).

The application of BIM to support an optimal cross-disciplinary and cross-phase collaboration opens a new dimension in the roles and relationships between the building actors. The most relevant issues are: the new role of a model manager; the agreement on access rights and intellectual property rights (IPR); the liability and payment arrangements according to the type of contract and in relation to integrated procurement; and the use of open international standards.

Collaborative working using BIM demands a new expert role of a model manager, who possesses ICT as well as construction process know-how (InPro, 2009). The model manager deals with the system as well as with the actors. He provides and maintains the technological solutions required for BIM functionalities, manages the information flow, and improves the ICT skills of the stakeholders.
The model manager does not take decisions on design and engineering solutions, nor on the organisational processes, but his roles in the chain of decision making are focused on:
●the development of BIM, the definition of the structure and detail level of the model, and the deployment of relevant BIM tools, such as for model checking, merging, and clash detection;
●the contribution to collaboration methods, especially decision-making and communication protocols, task planning, and risk management; and
●the management of information, in terms of data flow and storage, identification of communication errors, and decision or process (re-)tracking.

Regarding the legal and organisational issues, one of the actual questions is: "In what way does the intellectual property right (IPR) in collaborative working using BIM differ from the IPR in a traditional teamwork?". In terms of combined work, the IPR of each element is attached to its creator. Although it seems to be a fully integrated design, BIM actually results from a combination of works/elements; for instance, the outline of the building design is created by the architect, the design for the electrical system is created by the electrical contractor, etc. Thus, in the case of BIM as a combined work, the IPR is similar to traditional teamwork. Working with BIM with authorship registration functionalities may actually make it easier to keep track of the IPR (Chao-Duivis, 2009).

How does collaborative working using BIM affect the contractual relationship? On the one hand, collaborative working using BIM does not necessarily change the liability position in the contract, nor does it obligate an alliance contract. The General Principles of BIM Addendum confirms: 'This does not effectuate or require a restructuring of contractual relationships or shifting of risks between or among the Project Participants other than as specifically required per the Protocol Addendum and its Attachments' (ConsensusDOCS, 2008). On the other hand, changes in terms of payment schemes can be anticipated. Collaborative processes using BIM will lead to a shifting of activities towards the early design phase. Many, if not all, activities in the detailed engineering and specification phase will be done in the earlier phases. This means that significant payment for the engineering phase, which may count up to 40 per cent of the design cost, can no longer be expected. As engineering work is done concurrently with the design, a new proportion of the payment in the early design phase is necessary (Chao-Duivis, 2009).

4. Review of ongoing hospital building projects using BIM

In The Netherlands, the changing roles in hospital building projects are part of the strategy which aims at achieving sustainable real estate in response to the changing healthcare policy. Referring to the literature and previous research, the main factors that influence the success of the changing roles can be summarised as: the implementation of an integrated procurement method and a life-cycle design approach for a sustainable collaborative process; the agreement on the BIM structure and the intellectual property rights; and the integration of the role of a model manager. The preceding sections have discussed the conceptual thinking on how to deal with these factors effectively.
This section observes two actual projects and compares the actual practice with the conceptual view. The main issues observed in the case studies are:
●the selected procurement method and the roles of the involved parties within this method;
●the implementation of the life-cycle design approach;
●the type, structure, and functionalities of BIM used in the project;
●the openness in data sharing and transfer of the model, and the intended use of BIM in the future; and
●the roles and tasks of the model manager.

The pilot experience of hospital building projects using BIM in the Netherlands can be observed at University Medical Centre St Radboud (further referred to as UMC) and Maxima Medical Centre (further referred to as MMC). At UMC, the new building project for the Faculty of Dentistry in the city of Nijmegen has been dedicated as a BIM pilot project. At MMC, BIM is used in designing new buildings for the Medical Simulation and Mother-and-Child Centre in the city of Veldhoven.

The first case is a project at the University Medical Centre (UMC) St Radboud. UMC is more than just a hospital: it combines medical services, education and research. More than 8,500 staff and 3,000 students work at UMC. As part of its innovative real estate strategy, UMC has considered using BIM for its building projects. The new development of the Faculty of Dentistry and the surrounding buildings on the Kapittelweg in Nijmegen has been chosen as a pilot project to gather practical knowledge and experience on collaborative processes with BIM support.

The main ambitions to be achieved through the use of BIM in the building projects at UMC can be summarised as follows:
●using 3D visualisation to enhance the coordination and communication among the building actors, and the user participation in design;
●facilitating optimal information accessibility and exchange for a high consistency of the drawings and documents across disciplines and phases;
●integrating the architectural design with structural analysis, energy analysis, cost estimation, and planning;
●interactively evaluating the design solutions against the programme of requirements and specifications;
●reducing redesign/remake costs through clash detection during the design process; and
●optimising the management of the facility through the registration of medical installations and equipment, fixed and flexible furniture, product and output specifications, and operational data.

The second case is a project at the Maxima Medical Centre (MMC). MMC is a large hospital resulting from a merger between the Diaconessenhuis in Eindhoven and St Joseph Hospital in Veldhoven. Annually, the 3,400 staff of MMC provide medical services to more than 450,000 visitors and patients. A large-scale extension project of the hospital in Veldhoven is part of its real estate strategy. A medical simulation centre and a women-and-children medical centre are among the most important new facilities within this extension project. The design has been developed using 3D modelling with several functionalities of BIM.

The findings from both cases and the analysis are as follows. Both UMC and MMC opted for a traditional procurement method in which the client directly contracted an architect, a structural engineer, and a mechanical, electrical and plumbing (MEP) consultant in the design team. Once the design and detailed specifications are finished, a tender procedure will follow to select a contractor.
Despite the choice for this traditional method, many attempts have been made towards a closer and more effective multidisciplinary collaboration. UMC dedicated a relatively long preparation phase with the architect, structural engineer and MEP consultant before the design commenced. This preparation phase was aimed at creating a common vision on the optimal way of collaborating using BIM as ICT support. Some results of this preparation phase are: a document that defines the common ambition for the project and the collaborative working process, and a semi-formal agreement that states the commitment of the building actors to collaboration. Unlike UMC, MMC selected an architecture firm with an in-house engineering department; thus, the collaboration between the architect and the structural engineer can take place within the same firm using the same software application.

Regarding the life-cycle design approach, the main attention is given to life-cycle costs, maintenance needs, and facility management. Using BIM, both hospitals intend to get a much better insight into these aspects over the life-cycle period. The life-cycle sustainability criteria are included in the assignments for the design teams. Multidisciplinary designers and engineers are asked to collaborate more closely and to interact with the end-users to address life-cycle requirements. However, ensuring that the building actors engage in an integrated collaboration to generate sustainable design solutions that meet the life-cycle performance expectations is still a challenge.
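Both pilot projects list clash detection among the BIM functionalities they rely on. At its core this is a geometric overlap test between element volumes. A minimal sketch using axis-aligned bounding boxes is given below; real BIM tools test exact geometry, so this is purely illustrative, and the element names and coordinates are invented:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of a building element (coordinates in metres)."""
    name: str
    min_xyz: tuple
    max_xyz: tuple

def clashes(a: Box, b: Box) -> bool:
    """Two boxes clash if their extents overlap on every one of the three axes."""
    return all(a.min_xyz[i] < b.max_xyz[i] and b.min_xyz[i] < a.max_xyz[i]
               for i in range(3))

# hypothetical elements from an architectural and a structural model
duct = Box("ventilation duct", (0.0, 0.0, 2.6), (4.0, 0.4, 3.0))
beam = Box("concrete beam", (2.0, -1.0, 2.8), (2.3, 5.0, 3.2))

if clashes(duct, beam):
    print(f"clash detected: {duct.name} vs {beam.name}")
```

Running all element pairs from different disciplines through such a test, before construction starts, is what allows the redesign/remake cost reduction mentioned in the UMC ambitions.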
Archard's Wear Design Calculation Model and Its Application Method
I. Background
This model provides an effective tool for the anti-wear design and optimisation of mechanical components and is widely applied in engineering practice.

II. Theoretical Model
Archard's wear design calculation model is based on the following assumptions:
1. Wear is the combined result of fatigue and adhesion.
2. Wear is proportional to the contact stress.
3. A material's wear resistance depends on its hardness, among other factors.
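These assumptions lead to the classical Archard relation, in which the worn volume grows with normal load and sliding distance and falls with hardness: V = K·F·s/H. A minimal sketch follows; the numbers are purely hypothetical, and in practice the dimensionless wear coefficient K must be measured for the specific material pair:

```python
def archard_wear_volume(k, load_n, sliding_m, hardness_pa):
    """Archard wear law: worn volume V = K * F * s / H."""
    return k * load_n * sliding_m / hardness_pa

# hypothetical case: K = 1e-5, 50 N normal load, 1 km of sliding,
# hardness 2 GPa (roughly 200 HV)
v = archard_wear_volume(k=1e-5, load_n=50.0, sliding_m=1000.0, hardness_pa=2.0e9)
print(f"worn volume: {v * 1e9:.2f} mm^3")
```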
III. Application Method
English original

Case Study
Theoretical and practical aspects of the wear of vane pumps
Part A. Adaptation of a model for predictive wear calculation

Abstract
The aim of this investigation is the development of a mathematical tool for predicting the wear behaviour of vane pumps used in the standard method for indicating the wear characteristics of hydraulic fluids according to ASTM D 2882/DIN 51 389. The derivation of the corresponding mathematical algorithm is based on the description of the combined abrasive and adhesive wear phenomena occurring on the ring and vanes of the pump by the shear energy hypothesis, in connection with stochastic modelling of the contacting rough surfaces as two-dimensional isotropic random fields. Starting from a comprehensive analysis of the decisive ring-vane tribo contact, which supplies essential input data for the wear calculation, the computational method is adapted to the concrete geometrical, motional and loading conditions of the tribo system vane pump and extended by the inclusion of partial elastohydrodynamic lubrication in the mathematical model. For comparison of the calculated wear behaviour with experimental results, a test series on a rig described in Part B was carried out. A mineral oil-based lubricant without any additives was used to exclude the influence of additives, which cannot be described in the mathematical model. A good qualitative correspondence between calculation and experiment regarding the temporal wear progress and the amount of calculated wear mass was achieved.

Keywords: Mathematical modelling; Simulation of wear mechanisms; Wear testing devices; Hydraulic vane pumps; Elastohydrodynamic lubrication; Surface roughness

1. Introduction
In this study, the preliminary results of a new methodological approach to the development of tribometers for complicated tribo systems are presented. The basic concept involves the derivation of a mathematical algorithm for wear calculation in an interactive process with experiments, which can be used as a model of the tribo system to be simulated. In this way, an additional design tool to achieve the correlation of the wear rates of the model and the original system is created.
The investigations are performed for the Vickers vane pump V 104 C used in the standard method for indicating the wear characteristics of hydraulic fluids according to ASTM D 2882/DIN 51 389. In a first step, a mathematical theory based on the description of abrasive and adhesive wear phenomena by the shear energy hypothesis, and including stochastic modelling of the contacting rough surfaces, is adapted to the tribological reality of the vane pump, extended by aspects of partial elastohydrodynamic lubrication and verified by corresponding experiments.
Part A of this study is devoted to the mathematical modelling of the wear behaviour of the vane pump and to the verification of the resulting algorithm; experimental wear investigations represent the focal point of Part B, and these are compared with the results of the computational method derived in Part A.

2. Analysis of the tribo contact
The Vickers vane pump V 104 C is constructed as a pump for constant volume flow per revolution. The system pressure is led to the bottom side of the 12 vanes in the rotor slots to seal the cells formed by each pair of vanes, the ring, the rotor and the bushings in the tribologically interesting line contact of the vane and the inner curvature of the ring (Fig. 1). Simultaneously, all other vane sides are stressed with different and periodically alternating pressures of the fluid.
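The analysis that follows repeatedly converts the zone-wise vane-ring contact force into a pressure via "the well-known hertzian formulae" for a line contact. As a reference for those quantities, here is a minimal sketch of that standard conversion; the load, geometry and material values are hypothetical and not taken from the paper:

```python
import math

def hertz_line_contact(force, length, radius, e1, nu1, e2, nu2):
    """Maximum pressure and contact half-width of a hertzian line contact.

    force: normal load [N]; length: contact length [m]; radius: effective radius [m].
    """
    # reduced modulus: 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2
    e_star = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)
    w = force / length                                    # load per unit length [N/m]
    b = math.sqrt(4 * w * radius / (math.pi * e_star))    # contact half-width [m]
    p_max = 2 * w / (math.pi * b)                         # maximum pressure [Pa]
    return p_max, b

# hypothetical vane-ring values: 100 N on a 15 mm wide vane tip of 1 mm radius,
# steel on steel
p_max, b = hertz_line_contact(force=100.0, length=0.015, radius=1.0e-3,
                              e1=210e9, nu1=0.3, e2=210e9, nu2=0.3)
print(f"p_max = {p_max / 1e9:.2f} GPa, half-width = {b * 1e6:.1f} um")
```

Note how the pressure falls as the effective radius grows; this is exactly the mechanism by which vane-tip wear reduces the hertzian pressure over time, as described in Section 3.3 below.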
A comprehensive structure and stress analysis based on quasistatic modelling of all inertial forces acting on the pump, and considering the inner curvature of the ring, the swivel motion of the vanes in relation to the tangent of curvature and the loading assumptions, is described in Refs. [1-3]. Thereby, a characteristic graph for the contact force F_e as a function of the turn angle can be obtained, which depends on the geometry of the vanes used in each run and on the system pressure. From this, the inner curvature of the ring can be divided into four zones of different loading conditions in the vane-ring tribo contact (Fig. 2), which is in good agreement with the wear measurements on the rings: in the area of maximum contact force (zone II), the highest linear wear could be found [2,3] (see also Part B).

3. Mathematical modelling
3.1. Basic relations for wear calculation
The vane and ring show combined abrasive and adhesive wear phenomena (Fig. 3). The basic concepts of the theory for the predictive calculation of such wear phenomena are described in Refs. [4-6]. Starting from the assumption that wear is caused by shear effects in the surface regions of contacting bodies in relative motion, the fundamental equation (1) for the linear wear intensity I_h in the stationary wear state can be derived, which contains the specific shear energy density e_s/τ_0, interpretable as a material constant, and the real area A_rs of the asperity contacts undergoing shear. To determine this real contact area, the description of the contacting rough surfaces as two-dimensional isotropic gaussian fields according to Ref. [7] is included in the modelling. Thus the implicit functional relation (2), together with its weight function, is found, which can be used to calculate the surface ratio in Eq. (1) for unlubricated contacts from the hertzian pressure p_a acting in the investigated tribo contact by a complicated iterative process described in Refs. [6,8]. The concrete structure of the functions F and c depends on the relative motion of the contacting bodies (sliding, rolling). The parameter α = m_0·m_4/m_2², which represents the properties of the rough surface by its spectral moments, can be determined statistically from surface profilometry, and the plasticity index ψ = (m_0·m_4)^(1/4)·(E'/H) is a measure of the ratio of elastic and plastic microcontacts.

3.2. Extension to lubricated contacts
The algorithm resulting from the basic relations for wear calculation was applied successfully to unlubricated tribo systems [8]. The first concepts for involving lubrication in the mathematical model are developed in Ref. [8]. They are based on the application of the classical theory of elastohydrodynamic lubrication (EHL) to the microcontacts of the asperities, neglecting the fact that there is also a "macrolubrication film" which separates the contacting bodies and is interrupted in the case of partial lubrication by the asperity microcontacts. Therefore their use for calculating practical wear problems leads to unsatisfactory results [9]. They are extended here by including the following assumptions in the mathematical model.
(1) Lubrication causes the separation of the contacting bodies by a macrofilm with a mean thickness ū, which can be expressed in terms of the surface roughness by Eq. (3) [10], where u_0 is the mean film thickness according to classical EHL theory between two ideally smooth bodies, which can be determined for the line contact of vane and ring by Eq. (4) [11].
(2) In the case of partial lubrication, the macrofilm is interrupted during asperity contacts.
A plastic microcontact is interpreted as a pure solid state contact, whereas for an elastic contact the roughness is superimposed by a microlubrication film. Because of the modelling of the asperities as spherical indenters, the microfilm thickness can be determined using the EHL theory for sphere-plane contacts, which is represented in the random model by the sliding number (5) [8].
(3) The hertzian pressure acting in the macrocontact works in two parts: as a hydrodynamic pressure p_EH borne by the macrolubrication film and as a pressure p_FK borne by the roughness in solid body contact.
(4) For pure solid state contacts, it is assumed that the limit for the mean real pressure p_r,FK which an asperity can resist without plastic deformation can be estimated as one-fifth to one-sixth of its hardness, Eq. (6).
Investigations on the contact stiffness in Ref. [11] have led to the conclusion that the elastic properties of the lubrication film cause a relief of the asperities, which means that the real pressure working on the asperity is damped. Therefore, in the mathematical model for lubricated tribo systems, an additional term f_corr, which corrects the upper limit of the real pressure as a function of the film thickness, is introduced:

p_r,EH = p_r,FK · [1 − f_corr(ū)]    (7)

This formula can be used to determine a modified plasticity index ψ_EH for lubricated contacts according to Ref. [8]. Altogether, the basic model for wear calculation can be extended to lubricated tribo systems by replacing relation (2) by relation (8).

3.3. Adaptation to the tribo system vane pump
To apply the mathematical model for wear calculation to a concrete tribo system, all material data (specific material and fluid properties, roughness parameters) used by the algorithm must be determined (see Part B). Moreover, the model must be adapted to the mechanical conditions of the wear process investigated. On the one hand, this is related to the relative motion of the bodies in tribo contact, which influences the concrete structure of the function f in formulae (2) and (8). In the case of the vane-ring contact, sliding with superimposed rolling due to the swivel motion of the vanes was modelled, Eq. (9). A detailed derivation of the corresponding formulae for f_sliding and f_rolling can be found in Refs. [8,9].
On the other hand, the hertzian pressure p_a acting on the tribo contact during the wear process is of essential importance in the wear calculation. For the tribo system vane pump, the mean contact force F_e in each loading zone can be regarded as constant, whereas the hertzian pressure decreases with time. The reason for this is the wear debris on the vane, which causes a change in the vane tip shape with time, leading to an increased contact radius and, accordingly, a larger contact area.
To describe this phenomenon by the mathematical wear model, the volume removal W_v,I of one vane in terms of the respective contact radius R_I(t) at time t and the sliding distance s_R(R_I(t)) is given by Eq. (10), where the constants a and b can be determined by regression from the geometrical data of the tested vanes. The corresponding sliding distance necessary to reach a certain radius R_I due to vane wear can be expressed using the basic equation (1), giving Eq. (11). Thus, applying Eq. (11) together with Eq. (10) to the relation (12), it is possible to derive the differential equation (13) for the respective volume removal W_v,II of the ring, which can be solved by a numerical procedure. The required wear intensities of the vane and ring can be calculated by Eq.
(8) as a function of the contact radius from the hertzian pressures working in each loading zone, which are available from the contact force by the well-known hertzian formulae.

3.4. Possibilities of verification
If all input data are available for a concrete vane pump run (the concrete geometrical, material and mechanical conditions in the cartridge used and the specific fluid properties, see Part B), the mathematical model for the calculation of the wear of vane pumps derived above can describe quantitatively the following relations.
(1) The sliding distance s_R(R_I) and, if the number of revolutions of the pump and the size of the inner ring surface are known, the respective run time t of the pump which is necessary to reach a certain shape of the vane tips due to wear.
(2) The volume removal W_v(t) and the wear masses W_m(t) of the vane and ring as a function of the run time t.
(3) The mean local linear wear W_l(t) in every loading zone on the ring at time t.
Thus an immediate comparison between the calculated and the experimentally established wear behaviour, with regard to the wear progress in time, the local wear progress on the ring and the wear masses at a certain time t, becomes possible.
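The structure of the numerical procedure behind Eqs. (10)-(13) can be illustrated with a short sketch. Since the equation bodies are not reproduced in this copy, the sketch only assumes what the text states: a power-law regression W_v,I(R) = a·R^b for the vane-tip volume removal, a hertzian line-contact pressure that falls as the tip radius grows, and wear accumulating with sliding distance. The wear-intensity law used here is a hypothetical stand-in, not the paper's Eq. (8), and all constants are placeholders:

```python
import math

# hypothetical placeholder constants; the paper derives them from regression
# on tested vanes, material data and the model of Eqs. (1)-(9)
E_RED = 1.15e11                 # reduced elastic modulus E* [Pa]
FORCE_PER_LEN = 7.0e3           # mean contact force per unit vane width [N/m]
A_FIT, B_FIT = 2.0e-4, 3.0      # assumed power-law fit of Eq. (10): W = a * R**b
VANE_TIP_AREA = 1.0e-6          # effective sheared contact area on the vane tip [m^2]
K_WEAR, N_WEAR = 1.0e-12, 1.5   # stand-in wear-intensity law, NOT Eq. (8) itself
P_REF = 1.0e9                   # reference pressure of the stand-in law [Pa]

def hertz_pressure(radius):
    """Maximum hertzian line-contact pressure, p = sqrt(w * E* / (pi * R))."""
    return math.sqrt(FORCE_PER_LEN * E_RED / (math.pi * radius))

def wear_intensity(p):
    """Hypothetical stand-in for Eq. (8): linear wear per metre of sliding."""
    return K_WEAR * (p / P_REF) ** N_WEAR

radius = 1.0e-3                   # initial vane tip radius [m]
w_vane = A_FIT * radius ** B_FIT  # vane volume removal consistent with Eq. (10)
h_ring = 0.0                      # accumulated local linear wear of the ring [m]
ds, steps = 10.0, 100_000         # sliding-distance step [m] and number of steps

for _ in range(steps):
    p = hertz_pressure(radius)
    # vane: accumulate volume removal, then invert Eq. (10) for the new tip
    # radius; the growing radius lowers the pressure, as stated in Section 3.3
    w_vane += wear_intensity(p) * VANE_TIP_AREA * ds
    radius = (w_vane / A_FIT) ** (1.0 / B_FIT)
    # ring: local linear wear in the loading zone (the role of Eq. (13))
    h_ring += wear_intensity(p) * ds

s_total = ds * steps
print(f"after s = {s_total:.0f} m of sliding: tip radius = {radius * 1e3:.3f} mm, "
      f"ring linear wear = {h_ring * 1e6:.2f} um")
```

The coupled update of vane radius, pressure and ring wear in the loop mirrors the feedback the paper describes: vane wear flattens the tip, which lowers the hertzian pressure and thereby slows the further wear of both bodies. Converting the sliding distance into a run time, as in verification item (1), only requires the pump speed and the inner ring circumference.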