外文翻译中英文对照

合集下载

财务报表分析中英文对照外文翻译文献

财务报表分析中英文对照外文翻译文献

中英文对照外文翻译文献(文档含英文原文和中文翻译)原文:ANALYSIS OF FINANCIAL STATEMENTSWe need to use financial ratios in analyzing financial statements.—— The analysis of comparative financial statements cannot be made really effective unless it takes the form of a study of relationships between items in the statements. It is of little value, for example, to know that, on a given date, the Smith Company has a cash balance of $1oooo. But suppose we know that this balance is only -IV per cent of all current liabilities whereas a year ago cash was 25 per cent of all current liabilities. Since the bankers for the company usually require a cash balance against bank lines, used or unused, of 20 per cent, we can see at once that the firm's cash condition is exhibiting a questionable tendency.We may make comparisons between items in the comparative financial statements as follows:1. Between items in the comparative balance sheeta) Between items in the balance sheet for one date, e.g., cash may be compared with current liabilitiesb) Between an item in the balance sheet for one date and the same item in the balance sheet for another date, e.g., cash today may be compared with cash a year agoc) Of ratios, or mathematical proportions, between two items in the balance sheet for one date and a like ratio in the balance sheet for another date, e.g., the ratio of cash to current liabilities today may be compared with a like ratio a year ago and the trend of cash condition noted2. Between items in the comparative statement of income and expensea) Between items in the statement for a given periodb) Between one item in this period's statement and the same item in last period's statementc) Of ratios between items in this period's statement and similar ratios in last period's statement3. Between items in the comparative balance sheet and items in the comparative statement of income and expensea) Between items in these statements for a given period, e.g., net profit for this year may be calculated as a percentage of net worth for this yearb) Of ratios between items in the two statements for a period of years, e.g., the ratio of net profit to net worth this year may-be compared with like ratios for last year, and for the years preceding thatOur comparative analysis will gain in significance if we take the foregoing comparisons or ratios and; in turn, compare them with:I. Such data as are absent from the comparative statements but are of importance in judging a concern's financial history and condition, for example, the stage of the business cycle2. Similar ratios derived from analysis of the comparative statements of competing concerns or of concerns in similar lines of business What financialratios are used in analyzing financial statements.- Comparative analysis of comparative financial statements may be expressed by mathematical ratios between the items compared, for example, a concern's cash position may be tested by dividing the item of cash by the total of current liability items and using the quotient to express the result of the test. Each ratio may be expressed in two ways, for example, the ratio of sales to fixed assets may be expressed as the ratio of fixed assets to sales. We shall express each ratio in such a way that increases from period to period will be favorable and decreases unfavorable to financial condition.We shall use the following financial ratios in analyzing comparative financial statements:I. Working-capital ratios1. The ratio of current assets to current liabilities2. The ratio of cash to total current liabilities3. The ratio of cash, salable securities, notes and accounts receivable to total current liabilities4. The ratio of sales to receivables, i.e., the turnover of receivables5. The ratio of cost of goods sold to merchandise inventory, i.e., the turnover of inventory6. The ratio of accounts receivable to notes receivable7. The ratio of receivables to inventory8. The ratio of net working capital to inventory9. The ratio of notes payable to accounts payableIO. The ratio of inventory to accounts payableII. Fixed and intangible capital ratios1. The ratio of sales to fixed assets, i.e., the turnover of fixed capital2. The ratio of sales to intangible assets, i.e., the turnover of intangibles3. The ratio of annual depreciation and obsolescence charges to the assetsagainst which depreciation is written off4. The ratio of net worth to fixed assetsIII. Capitalization ratios1. The ratio of net worth to debt.2. The ratio of capital stock to total capitalization .3. The ratio of fixed assets to funded debtIV. Income and expense ratios1. The ratio of net operating profit to sales2. The ratio of net operating profit to total capital3. The ratio of sales to operating costs and expenses4. The ratio of net profit to sales5. The ratio of net profit to net worth6. The ratio of sales to financial expenses7. The ratio of borrowed capital to capital costs8. The ratio of income on investments to investments9. The ratio of non-operating income to net operating profit10. The ratio of net operating profit to non-operating expense11. The ratio of net profit to capital stock12. The ratio of net profit reinvested to total net profit available for dividends on common stock13. The ratio of profit available for interest to interest expensesThis classification of financial ratios is permanent not exhaustive. -Other ratios may be used for purposes later indicated. Furthermore, some of the ratios reflect the efficiency with which a business has used its capital while others reflect efficiency in financing capital needs. The ratios of sales to receivables, inventory, fixed and intangible capital; the ratios of net operating profit to total capital and to sales; and the ratios of sales to operating costs and expenses reflect efficiency in the use of capital.' Most of the other ratios reflect financial efficiency.B. Technique of Financial Statement AnalysisAre the statements adequate in general?-Before attempting comparative analysis of given financial statements we wish to be sure that the statements are reasonably adequate for the purpose. They should, of course, be as complete as possible. They should also be of recent date. If not, their use must be limited to the period which they cover. Conclusions concerning 1923 conditions cannot safely be based upon 1921 statements.Does the comparative balance sheet reflect a seasonable situation? If so, it is important to know financial conditions at both the high and low points of the season. We must avoid unduly favorable judgment of the business at the low point when assets are very liquid and debt is low, and unduly unfavorable judgment at the high point when assets are less liquid and debt likely to be relatively high.Does the balance sheet for any date reflect the estimated financial condition after the sale of a proposed new issue of securities? If so, in order to ascertain the actual financial condition at that date it is necessary to subtract the amount of the security issue from net worth, if the. issue is of stock, or from liabilities, if bonds are to be sold. A like amount must also be subtracted from assets or liabilities depending upon how the estimated proceeds of the issue are reflected in the statement.Are the statements audited or unaudited? It is often said that audited statements, that is, complete audits rather than statements "rubber stamped" by certified public accountants, are desirable when they can be obtained. This is true, but the statement analyst should be certain that the given auditing film's reputation is beyond reproach.Is working-capital situation favorable ?-If the comparative statements to be analyzed are reasonably adequate for the purpose, the next step is to analyze the concern's working-capital trend and position. We may begin by ascertaining the ratio of current assets to current liabilities. This ratioaffords-a test of the concern's probable ability to pay current obligations without impairing its net working capital. It is, in part, a measure of ability to borrow additional working capital or to renew short-term loans without difficulty. The larger the excess of current assets over current liabilities the smaller the risk of loss to short-term creditors and the better the credit of the business, other things being equal. A ratio of two dollars of current assets to one dollar of current liabilities is the "rule-of-thumb" ratio generally considered satisfactory, assuming all current assets are conservatively valued and all current liabilities revealed.The rule-of-thumb current ratio is not a satisfactory test ofworking-capital position and trend. A current ratio of less than two dollars for one dollar may be adequate, or a current ratio of more than two dollars for one dollar may be inadequate. It depends, for one thing, upon the liquidity of the current assets.The liquidity of current assets varies with cash position.-The larger the proportion of current assets in the form of cash the more liquid are the current assets as a whole. Generally speaking, cash should equal at least 20 per cent of total current liabilities (divide cash by total current liabilities). Bankers typically require a concern to maintain bank balances equal to 20 per cent of credit lines whether used or unused. Open-credit lines are not shown on the balance sheet, hence the total of current liabilities (instead of notes payable to banks) is used in testing cash position. Like the two-for-one current ratio, the 20 per cent cash ratio is more or less a rule-of-thumb standard.The cash balance that will be satisfactory depends upon terms of sale, terms of purchase, and upon inventory turnover. A firm selling goods for cash will find cash inflow more nearly meeting cash outflow than will a firm selling goods on credit. A business which pays cash for all purchases will need more ready money than one which buys on long terms of credit. The more rapidly the inventory is sold the more nearly will cash inflow equal cash outflow, other things equal.Needs for cash balances will be affected by the stage of the business cycle. Heavy cash balances help to sustain bank credit and pay expenses when a period of liquidation and depression depletes working capital and brings a slump in sales. The greater the effects of changes in the cycle upon a given concern the more thought the financial executive will need to give to the size of his cash balances.Differences in financial policies between different concerns will affect the size of cash balances carried. One concern may deem it good policy to carry as many open-bank lines as it can get, while another may carry only enough lines to meet reasonably certain needs for loans. The cash balance of the first firm is likely to be much larger than that of the second firm.The liquidity of current assets varies with ability to meet "acid test."- Liquidity of current assets varies with the ratio of cash, salable securities, notes and accounts receivable (less adequate reserves for bad debts), to total current liabilities (divide the total of the first four items by total current liabilities). This is the so-called "acid test" of the liquidity of current condition. A ratio of I: I is considered satisfactory since current liabilities can readily be paid and creditors risk nothing on the uncertain values of merchandise inventory. A less than 1:1 ratio may be adequate if receivables are quickly collected and if inventory is readily and quickly sold, that is, if its turnover is rapid andif the risks of changes in price are small.The liquidity of current assets varies with liquidity of receivables. This may be ascertained by dividing annual sales by average receivables or by receivables at the close of the year unless at that date receivables do not represent the normal amount of credit extended to customers. Terms of sale must be considered in judging the turnover of receivables. For example, if sales for the year are $1,200,000 and average receivables amount to $100,000, the turnover of receivables is $1,200,000/$100,000=12. Now, if credit terms to customers are net in thirty days we can see that receivables are paid promptly.Consideration should also be given market conditions and the stage of the business cycle. Terms of credit are usually longer in farming sections than in industrial centers. Collections are good in prosperous times but slow in periods of crisis and liquidation.Trends in the liquidity of receivables will also be reflected in the ratio of accounts receivable to notes receivable, in cases where goods are typically sold on open account. A decline in this ratio may indicate a lowering of credit standards since notes receivable are usually given to close overdue open accounts. If possible, a schedule of receivables should be obtained showing those not due, due, and past due thirty, sixty, and ninety days. Such a, schedule is of value in showing the efficiency of credits and collections and in explaining the trend in turnover of receivables. The more rapid the turnover of receivables the smaller the risk of loss from bad debts; the greater the savings of interest on the capital invested in receivables, and the higher the profit on total capital, other things being equal.Author(s): C. O. Hardy and S. P. Meech译文:财务报表分析A.财务比率我们需要使用财务比率来分析财务报表,比较财务报表的分析方法不能真正有效的得出想要的结果,除非采取的是研究在报表中项目与项目之间关系的形式。

外文翻译中英文对照

外文翻译中英文对照

Strengths优势All these private sector banks hold strong position on CRM part, they have professional, dedicated and well-trained employees.所以这些私人银行在客户管理部分都持支持态度,他们拥有专业的、细致的、训练有素的员工。

Private sector banks offer a wide range of banking and financial products and financial services to corporate and retail customers through a variety of delivery channels such as ATMs, Internet-banking, mobile-banking, etc. 私有银行通过许多传递通道(如自动取款机、网上银行、手机银行等)提供大范围的银行和金融产品、金融服务进行合作并向客户零售。

The area could be Investment management banking, life and non-life insurance, venture capital and asset management, retail loans such as home loans, personal loans, educational loans, car loans, consumer durable loans, credit cards, etc. 涉及的领域包括投资管理银行、生命和非生命保险、风险投资与资产管理、零售贷款(如家庭贷款、个人贷款、教育贷款、汽车贷款、耐用消费品贷款、信用卡等)。

Private sector banks focus on customization of products that are designed to meet the specific needs of customers. 私人银行主要致力于为一些特殊需求的客户进行设计和产品定制。

岩土工程中英文对照外文翻译文献

岩土工程中英文对照外文翻译文献

中英文对照外文翻译(文档含英文原文和中文翻译)原文:Safety Assurance for Challenging Geotechnical Civil Engineering Constructions in Urban AreasAbstractSafety is the most important aspect during design, construction and service time of any structure, especially for challenging projects like high-rise buildings and tunnels in urban areas. A high level design considering the soil-structure interaction, based on a qualified soil investigation is required for a safe and optimised design. Dueto the complexity of geotechnical constructions the safety assurance guaranteed by the 4-eye-principle is essential. The 4-eye-principle consists of an independent peer review by publicly certified experts combined with the observational method. The paper presents the fundamental aspects of safety assurance by the 4-eye-principle. The application is explained on several examples, as deep excavations, complex foundation systems for high-rise buildings and tunnel constructions in urban areas. The experiences made in the planning, design and construction phases are explained and for new inner urban projects recommendations are given.Key words: Natural Asset; Financial Value; Neural Network1.IntroductionA safety design and construction of challenging projects in urban areas is based on the following main aspects:Qualified experts for planning, design and construction;Interaction between architects, structural engineers and geotechnical engineers;Adequate soil investigation;Design of deep foundation systems using the FiniteElement-Method (FEM) in combination with enhanced in-situ load tests for calibrating the soil parameters used in the numerical simulations;Quality assurance by an independent peer review process and the observational method (4-eye-principle).These facts will be explained by large construction projects which are located in difficult soil and groundwater conditions.2.The 4-Eye-PrincipleThe basis for safety assurance is the 4-eye-principle. This 4-eye-principle is a process of an independent peer review as shown in Figure 1. It consists of 3 parts. The investor, the experts for planning and design and the construction company belong to the first division. Planning and design are done accordingto the requirements of the investor and all relevant documents to obtain the building permission are prepared. The building authorities are the second part and are responsible for the buildingpermission which is given to the investor. The thirddivision consists of the publicly certified experts.They are appointed by the building authorities but work as independent experts. They are responsible for the technical supervision of the planning, design and the construction.In order to achieve the license as a publicly certified expert for geotechnical engineering by the building authorities intensive studies of geotechnical engineering in university and large experiences in geotechnical engineering with special knowledge about the soil-structure interaction have to be proven.The independent peer review by publicly certified experts for geotechnical engineering makes sure that all information including the results of the soil investigation consisting of labor field tests and the boundary conditions defined for the geotechnical design are complete and correct.In the case of a defect or collapse the publicly certified expert for geotechnical engineering can be involved as an independent expert to find out the reasons for the defect or damage and to develop a concept for stabilization and reconstruction [1].For all difficult projects an independent peer review is essential for the successful realization of the project.3.Observational MethodThe observational method is practical to projects with difficult boundary conditions for verification of the design during the construction time and, if necessary, during service time. For example in the European Standard Eurocode 7 (EC 7) the effect and the boundary conditions of the observational method are defined.The application of the observational method is recommended for the following types of construction projects [2]:very complicated/complex projects;projects with a distinctive soil-structure-interaction,e.g. mixed shallow and deep foundations, retaining walls for deep excavations, Combined Pile-Raft Foundations (CPRFs);projects with a high and variable water pressure;complex interaction situations consisting of ground,excavation and neighbouring buildings and structures;projects with pore-water pressures reducing the stability;projects on slopes.The observational method is always a combination of the common geotechnical investigations before and during the construction phase together with the theoretical modeling and a plan of contingency actions(Figure 2). Only monitoring to ensure the stability and the service ability of the structure is not sufficient and,according to the standardization, not permitted for this purpose. Overall the observational method is an institutionalized controlling instrument to verify the soil and rock mechanical modeling [3,4].The identification of all potential failure mechanismsis essential for defining the measure concept. The concept has to be designed in that way that all these mechanisms can be observed. The measurements need to beof an adequate accuracy to allow the identification ocritical tendencies. The required accuracy as well as the boundary values need to be identified within the design phase of the observational method . Contingency actions needs to be planned in the design phase of the observational method and depend on the ductility of the systems.The observational method must not be seen as a potential alternative for a comprehensive soil investigation campaign. A comprehensive soil investigation campaignis in any way of essential importance. Additionally the observational method is a tool of quality assurance and allows the verification of the parameters and calculations applied in the design phase. The observational method helps to achieve an economic and save construction [5].4.In-Situ Load TestOn project and site related soil investigations with coredrillings and laboratory tests the soil parameters are determined. Laboratory tests are important and essential for the initial definition of soil mechanical properties of the soil layer, but usually not sufficient for an entire and realistic capture of the complex conditions, caused by theinteraction of subsoil and construction [6].In order to reliably determine the ultimate bearing capacity of piles, load tests need to be carried out [7]. Forpile load tests often very high counter weights or strong anchor systems are necessary. By using the Osterberg method high loads can be reached without install inganchors or counter weights. Hydraulic jacks induce the load in the pile using the pile itself partly as abutment.The results of the field tests allow a calibration of the numerical simulations.The principle scheme of pile load tests is shown in Figure 3.5.Examples for Engineering Practice5.1. Classic Pile Foundation for a High-Rise Building in Frankfurt Clay and LimestoneIn the downtown of Frankfurt am Main, Germany, on aconstruction site of 17,400 m2 the high-rise buildingproject “PalaisQuartier” has been realized (Figure 4). The construction was finished in 2010.The complex consists of several structures with a total of 180,000 m2 floor space, there of 60,000 m2 underground (Figure 5). The project includes the historic building “Thurn-und Taxis-Palais” whose facade has been preserved (Unit A). The office building (Unit B),which is the highest building of the project with a height of 136 m has 34 floors each with a floor space of 1340 m2. The hotel building (Unit C) has a height of 99 m with 24 upper floors. The retail area (Unit D)runs along the total length of the eastern part of the site and consists of eight upper floors with a total height of 43 m.The underground parking garage with five floors spans across the complete project area. With an 8 m high first sublevel, partially with mezzanine floor, and four more sub-levels the foundation depth results to 22 m below ground level. There by excavation bottom is at 80m above sea level (msl). A total of 302 foundation piles(diameter up to 1.86 m, length up to 27 m) reach down to depths of 53.2 m to 70.1 m. above sea level depending on the structural requirements.The pile head of the 543 retaining wall piles (diameter1.5 m, length up to 38 m)were located between 94.1 m and 99.6 m above sea level, the pile base was between 59.8 m and 73.4 m above sea level depending on the structural requirements. As shown in the sectional view(Figure 6), the upper part of the piles is in the Frankfurt Clay and the base of the piles is set in the rocky Frankfurt Limestone.Regarding the large number of piles and the high pile loads a pile load test has been carried out for optimization of the classic pile foundation. Osterberg-Cells(O-Cells) have been installed in two levels in order to assess the influence of pile shaft grouting on the limit skin friction of the piles in the Frankfurt Limestone(Figure 6). The test pile with a total length of 12.9 m and a diameter of 1.68 m consist of three segments and has been installed in the Frankfurt Limestone layer 31.7 m below ground level. The upper pile segment above the upper cell level and the middle pile segment between the two cell levels can be tested independently. In the first phase of the test the upper part was loaded by using the middle and the lower part as abutment. A limit of 24 MN could be reached (Figure 7). The upper segment was lifted about 1.5 cm, the settlement of the middle and lower part was 1.0 cm. The mobilized shaft friction was about 830 kN/m2.Subsequently the upper pile segment was uncoupled by discharging the upper cell level. In the second test phase the middle pile segment was loaded by using the lower segment as abutment. The limit load of the middle segment with shaft grouting was 27.5 MN (Figure 7).The skin friction was 1040 kN/m2, this means 24% higher than without shaft grouting. Based on the results of the pile load test using O-Cells the majority of the 290 foundation piles were made by applying shaft grouting. Due to pile load test the total length of was reduced significantly.5.2. CPRF for a High-Rise Building in Clay MarlIn the scope of the project Mirax Plaza in Kiev, Ukraine,2 high-rise buildings, each of them 192 m (46 storeys)high, a shopping and entertainment mall and an underground parking are under construction (Figure 8). The area of the project is about 294,000 m2 and cuts a 30 m high natural slope.The geotechnical investigations have been executed 70m deep. The soil conditions at the construction site are as follows: fill to a depth of 2 m to 3mquaternary silty sand and sandy silt with a thickness of 5 m to 10 m tertiary silt and sand (Charkow and Poltaw formation) with a thickness of 0 m to 24 m tertiary clayey silt and clay marl of the Kiev and But schak formation with a thickness of about 20 m tertiary fine sand of the But schak formation up to the investigation depthThe ground water level is in a depth of about 2 m below the ground surface. The soil conditions and a cross section of the project are shown in Figure 9.For verification of the shaft and base resistance of the deep foundation elements and for calibration of the numerical simulations pile load tests have been carried out on the construction yard. The piles had a diameter of 0.82 m and a length of about 10 m to 44 m. Using the results of the load tests the back analysis for verification of the FEM simulations was done. The soil properties in accordance with the results of the back analysis were partly 3 times higher than indicated in the geotechnical report. Figure 10 shows the results of the load test No. 2 and the numerical back analysis. Measurement and calculation show a good accordance.The obtained results of the pile load tests and of the executed back analysis were applied in 3-dimensionalFEM-simulations of the foundation for Tower A, taking advantage of the symmetry of the footprint of the building. The overall load of the Tower A is about 2200 MN and the area of the foundation about 2000 m2 (Figure11).The foundation design considers a CPRF with 64 barrettes with 33 m length and a cross section of 2.8 m × 0.8m. The raft of 3 m thickness is located in Kiev Clay Marl at about 10 m depth below the ground surface. The barrettes are penetrating the layer of Kiev Clay Marl reaching the Butschak Sands.The calculated loads on the barrettes were in the range of 22.1 MN to 44.5 MN. The load on the outer barrettes was about 41.2 MN to 44.5 MN which significantly exceeds the loads on the inner barrettes with the maximum value of 30.7 MN. This behavior is typical for a CPRF.The outer deep foundation elements take more loads because of their higher stiffness due to the higher volume of the activated soil. The CPRF coefficient is 0.88 =CPRF . Maximum settlements of about 12 cm werecalculated due to the settlement-relevant load of 85% of the total design load. The pressure under the foundation raft is calculated in the most areas not exceeding 200 kN/m2, at the raft edge the pressure reaches 400 kN/m2.The calculated base pressure of the outer barrettes has anaverage of 5100 kN/m2 and for inner barrettes an average of 4130 kN/m2. The mobilized shaft resistance increases with the depth reaching 180 kN/m2 for outer barrettes and 150 kN/m2 for inner barrettes.During the construction of Mirax Plaza the observational method according to EC 7 is applied. Especially the distribution of the loads between the barrettes and the raft is monitored. For this reason 3 earth pressure devices were installed under the raft and 2 barrettes (most loaded outer barrette and average loaded inner barrette) were instrumented over the length.In the scope of the project Mirax Plaza the new allowable shaft resistance and base resistance were defined for typical soil layers in Kiev. This unique experience will be used for the skyscrapers of new generation in Ukraine.The CPRF of the high-rise building project MiraxPlaza represents the first authorized CPRF in the Ukraine. Using the advanced optimization approaches and taking advantage of the positive effect of CPRF the number of barrettes could be reduced from 120 barrettes with 40 mlength to 64 barrettes with 33 m length. The foundation optimization leads to considerable decrease of the utilized resources (cement, aggregates, water, energy etc.)and cost savings of about 3.3 Million US$.译文:安全保证岩土公民发起挑战工程建设在城市地区摘要安全是最重要的方面在设计、施工和服务时间的任何结构,特别是对具有挑战性的项目,如高层建筑和隧道在城市地区。

钢筋混凝土中英文对照外文翻译文献

钢筋混凝土中英文对照外文翻译文献

中英文资料对照外文翻译目录1 中文翻译 (1)1.1钢筋混凝土 (1)1.2土方工程 (2)1.3结构的安全度 (3)2 外文翻译 (6)2.1 Reinforced Concrete (6)2.2 Earthwork (7)2.3 Safety of Structures (9)1 中文翻译1.1钢筋混凝土素混凝土是由水泥、水、细骨料、粗骨料(碎石或;卵石)、空气,通常还有其他外加剂等经过凝固硬化而成。

将可塑的混凝土拌合物注入到模板内,并将其捣实,然后进行养护,以加速水泥与水的水化反应,最后获得硬化的混凝土。

其最终制成品具有较高的抗压强度和较低的抗拉强度。

其抗拉强度约为抗压强度的十分之一。

因此,截面的受拉区必须配置抗拉钢筋和抗剪钢筋以增加钢筋混凝土构件中较弱的受拉区的强度。

由于钢筋混凝土截面在均质性上与标准的木材或钢的截面存在着差异,因此,需要对结构设计的基本原理进行修改。

将钢筋混凝土这种非均质截面的两种组成部分按一定比例适当布置,可以最好的利用这两种材料。

这一要求是可以达到的。

因混凝土由配料搅拌成湿拌合物,经过振捣并凝固硬化,可以做成任何一种需要的形状。

如果拌制混凝土的各种材料配合比恰当,则混凝土制成品的强度较高,经久耐用,配置钢筋后,可以作为任何结构体系的主要构件。

浇筑混凝土所需要的技术取决于即将浇筑的构件类型,诸如:柱、梁、墙、板、基础,大体积混凝土水坝或者继续延长已浇筑完毕并且已经凝固的混凝土等。

对于梁、柱、墙等构件,当模板清理干净后应该在其上涂油,钢筋表面的锈及其他有害物质也应该被清除干净。

浇筑基础前,应将坑底土夯实并用水浸湿6英寸,以免土壤从新浇的混凝土中吸收水分。

一般情况下,除使用混凝土泵浇筑外,混凝土都应在水平方向分层浇筑,并使用插入式或表面式高频电动振捣器捣实。

必须记住,过分的振捣将导致骨料离析和混凝土泌浆等现象,因而是有害的。

水泥的水化作用发生在有水分存在,而且气温在50°F以上的条件下。

薪酬管理体系中英文对照外文翻译文献

薪酬管理体系中英文对照外文翻译文献

薪酬管理体系中英文对照外文翻译文献XXX people。

XXX enterprise management。

as it has a XXX attract。

retain。

and motivate employees。

particularly key talent。

As such。

it has XXX。

retain。

objective。

XXX on the design of salary XXX.2 The Importance of Salary System DesignThe design of a salary system is XXX's success。

An effective salary system can help attract and retain employees。

XXX。

XXX them to perform at their best。

In contrast。

a poorly designed salary system can lead to employee n and XXX。

which can XXX.To design an effective salary system。

XXX factors。

including the industry。

the enterprise's size and stage of development。

and the specific needs and goals of the XXX。

XXX.3 XXXXXX。

XXX incentives can help align the XXX with those of the enterprise and its shareholders。

XXX to perform at their best.When designing equity incentives。

道路与桥梁工程中英文对照外文翻译文献

道路与桥梁工程中英文对照外文翻译文献

中英文对照外文翻译(文档含英文原文和中文翻译)Bridge research in EuropeA brief outline is given of the development of the European Union, together with the research platform in Europe. The special case of post-tensioned bridges in the UK is discussed. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio: relating to the identification of voids in post-tensioned concrete bridges using digital impulse radar.IntroductionThe challenge in any research arena is to harness the findings of different research groups to identify a coherent mass of data, which enables research and practice to be better focused. A particular challenge exists with respect to Europe where language barriers are inevitably very significant. The European Community was formed in the 1960s based upon a political will within continental Europe to avoid the European civil wars, which developed into World War 2 from 1939 to 1945. The strong political motivation formed the original community of which Britain was not a member. Many of the continental countries saw Britain’s interest as being purelyeconomic. The 1970s saw Britain joining what was then the European Economic Community (EEC) and the 1990s has seen the widening of the community to a European Union, EU, with certain political goals together with the objective of a common European currency.Notwithstanding these financial and political developments, civil engineering and bridge engineering in particular have found great difficulty in forming any kind of common thread. Indeed the educational systems for University training are quite different between Britain and the European continental countries. The formation of the EU funding schemes —e.g. Socrates, Brite Euram and other programs have helped significantly. The Socrates scheme is based upon the exchange of students between Universities in different member states. The Brite Euram scheme has involved technical research grants given to consortia of academics and industrial partners within a number of the states— a Brite Euram bid would normally be led by an industrialist.In terms of dissemination of knowledge, two quite different strands appear to have emerged. The UK and the USA have concentrated primarily upon disseminating basic research in refereed journal publications: ASCE, ICE and other journals. Whereas the continental Europeans have frequently disseminated basic research at conferences where the circulation of the proceedings is restricted.Additionally, language barriers have proved to be very difficult to break down. In countries where English is a strong second language there has been enthusiastic participation in international conferences based within continental Europe —e.g. Germany, Italy, Belgium, The Netherlands and Switzerland. However, countries where English is not a strong second language have been hesitant participants }—e.g. France.European researchExamples of research relating to bridges in Europe can be divided into three types of structure:Masonry arch bridgesBritain has the largest stock of masonry arch bridges. In certain regions of the UK up to 60% of the road bridges are historic stone masonry arch bridges originally constructed for horse drawn traffic. This is less common in other parts of Europe as many of these bridges were destroyed during World War 2.Concrete bridgesA large stock of concrete bridges was constructed during the 1950s, 1960s and 1970s. At the time, these structures were seen as maintenance free. Europe also has a large number of post-tensioned concrete bridges with steel tendon ducts preventing radar inspection. This is a particular problem in France and the UK.Steel bridgesSteel bridges went out of fashion in the UK due to their need for maintenance as perceived in the 1960s and 1970s. However, they have been used for long span and rail bridges, and they are now returning to fashion for motorway widening schemes in the UK.Research activity in EuropeIt gives an indication certain areas of expertise and work being undertaken in Europe, but is by no means exhaustive.In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio. The example relates to the identification of voids in post-tensioned concrete bridges, using digital impulse radar.Post-tensioned concrete rail bridge analysisOve Arup and Partners carried out an inspection and assessment of the superstructure of a 160 m long post-tensioned, segmental railway bridge in Manchester to determine its load-carrying capacity prior to a transfer of ownership, for use in the Metrolink light rail system..Particular attention was paid to the integrity of its post-tensioned steel elements. Physical inspection, non-destructive radar testing and other exploratory methods were used to investigate for possible weaknesses in the bridge.Since the sudden collapse of Ynys-y-Gwas Bridge in Wales, UK in 1985, there has been concern about the long-term integrity of segmental, post-tensioned concrete bridges which may b e prone to ‘brittle’ failure without warning. The corrosion protection of the post-tensioned steel cables, where they pass through joints between the segments, has been identified as a major factor affecting the long-term durability and consequent strength of this type of bridge. The identification of voids in grouted tendon ducts at vulnerable positions is recognized as an important step in the detection of such corrosion.Description of bridgeGeneral arrangementBesses o’ th’ Barn Bridge is a 160 m long, three span, segmental, post-tensionedconcrete railway bridge built in 1969. The main span of 90 m crosses over both the M62 motorway and A665 Bury to Prestwick Road. Minimum headroom is 5.18 m from the A665 and the M62 is cleared by approx 12.5 m.The superstructure consists of a central hollow trapezoidal concrete box section 6.7 m high and 4 m wide. The majority of the south and central spans are constructed using 1.27 m long pre-cast concrete trapezoidal box units, post-tensioned together. This box section supports the in site concrete transverse cantilever slabs at bottom flange level, which carry the rail tracks and ballast.The center and south span sections are of post-tensioned construction. These post-tensioned sections have five types of pre-stressing:1. Longitudinal tendons in grouted ducts within the top and bottom flanges.2. Longitudinal internal draped tendons located alongside the webs. These are deflected at internal diaphragm positions and are encased in in site concrete.3. Longitudinal macalloy bars in the transverse cantilever slabs in the central span .4. Vertical macalloy bars in the 229 mm wide webs to enhance shear capacity.5. Transverse macalloy bars through the bottom flange to support the transverse cantilever slabs.Segmental constructionThe pre-cast segmental system of construction used for the south and center span sections was an alternative method proposed by the contractor. Current thinking suggests that such a form of construction can lead to ‘brittle’ failure of the ent ire structure without warning due to corrosion of tendons across a construction joint,The original design concept had been for in site concrete construction.Inspection and assessmentInspectionInspection work was undertaken in a number of phases and was linked with the testing required for the structure. The initial inspections recorded a number of visible problems including:Defective waterproofing on the exposed surface of the top flange.Water trapped in the internal space of the hollow box with depths up to 300 mm.Various drainage problems at joints and abutments.Longitudinal cracking of the exposed soffit of the central span.Longitudinal cracking on sides of the top flange of the pre-stressed sections.Widespread sapling on some in site concrete surfaces with exposed rusting reinforcement.AssessmentThe subject of an earlier paper, the objectives of the assessment were:Estimate the present load-carrying capacity.Identify any structural deficiencies in the original design.Determine reasons for existing problems identified by the inspection.Conclusion to the inspection and assessmentFollowing the inspection and the analytical assessment one major element of doubt still existed. This concerned the condition of the embedded pre-stressing wires, strands, cables or bars. For the purpose of structural analysis these elements、had been assumed to be sound. However, due to the very high forces involved,、a risk to the structure, caused by corrosion to these primary elements, was identified.The initial recommendations which completed the first phase of the assessment were:1. Carry out detailed material testing to determine the condition of hidden structural elements, in particularthe grouted post-tensioned steel cables.2. Conduct concrete durability tests.3. Undertake repairs to defective waterproofing and surface defects in concrete.Testing proceduresNon-destructi v e radar testingDuring the first phase investigation at a joint between pre-cast deck segments the observation of a void in a post-tensioned cable duct gave rise to serious concern about corrosion and the integrity of the pre-stress. However, the extent of this problem was extremely difficult to determine. The bridge contains 93 joints with an average of 24 cables passing through each joint, i.e. there were approx. 2200 positions where investigations could be carried out. A typical section through such a joint is that the 24 draped tendons within the spine did not give rise to concern because these were protected by in site concrete poured without joints after the cables had been stressed.As it was clearly impractical to consider physically exposing all tendon/joint intersections, radar was used to investigate a large numbers of tendons and hence locate duct voids within a modest timescale. It was fortunate that the corrugated steel ducts around the tendons were discontinuous through the joints which allowed theradar to detect the tendons and voids. The problem, however, was still highly complex due to the high density of other steel elements which could interfere with the radar signals and the fact that the area of interest was at most 102 mm wide and embedded between 150 mm and 800 mm deep in thick concrete slabs.Trial radar investigations.Three companies were invited to visit the bridge and conduct a trial investigation. One company decided not to proceed. The remaining two were given 2 weeks to mobilize, test and report. Their results were then compared with physical explorations.To make the comparisons, observation holes were drilled vertically downwards into the ducts at a selection of 10 locations which included several where voids were predicted and several where the ducts were predicted to be fully grouted. A 25-mm diameter hole was required in order to facilitate use of the chosen horoscope. The results from the University of Edinburgh yielded an accuracy of around 60%.Main radar sur v ey, horoscope verification of v oids.Having completed a radar survey of the total structure, a baroscopic was then used to investigate all predicted voids and in more than 60% of cases this gave a clear confirmation of the radar findings. In several other cases some evidence of honeycombing in the in site stitch concrete above the duct was found.When viewing voids through the baroscopic, however, it proved impossible to determine their actual size or how far they extended along the tendon ducts although they only appeared to occupy less than the top 25% of the duct diameter. Most of these voids, in fact, were smaller than the diameter of the flexible baroscopic being used (approximately 9 mm) and were seen between the horizontal top surface of the grout and the curved upper limit of the duct. In a very few cases the tops of the pre-stressing strands were visible above the grout but no sign of any trapped water was seen. It was not possible, using the baroscopic, to see whether those cables were corroded.Digital radar testingThe test method involved exciting the joints using radio frequency radar antenna: 1 GHz, 900 MHz and 500 MHz. The highest frequency gives the highest resolution but has shallow depth penetration in the concrete. The lowest frequency gives the greatest depth penetration but yields lower resolution.The data collected on the radar sweeps were recorded on a GSSI SIR System 10.This system involves radar pulsing and recording. The data from the antenna is transformed from an analogue signal to a digital signal using a 16-bit analogue digital converter giving a very high resolution for subsequent data processing. The data is displayed on site on a high-resolution color monitor. Following visual inspection it is then stored digitally on a 2.3-gigabyte tape for subsequent analysis and signal processing. The tape first of all records a ‘header’ noting the digital radar settings together with the trace number prior to recording the actual data. When the data is played back, one is able to clearly identify all the relevant settings —making for accurate and reliable data reproduction.At particular locations along the traces, the trace was marked using a marker switch on the recording unit or the antenna.All the digital records were subsequently downloaded at the University’s NDT laboratory on to a micro-computer.(The raw data prior to processing consumed 35 megabytes of digital data.)Post-processing was undertaken using sophisticated signal processing software. Techniques available for the analysis include changing the color transform and changing the scales from linear to a skewed distribution in order to highlight、突出certain features. Also, the color transforms could be changed to highlight phase changes. In addition to these color transform facilities, sophisticated horizontal and vertical filtering procedures are available. Using a large screen monitor it is possible to display in split screens the raw data and the transformed processed data. Thus one is able to get an accurate indication of the processing which has taken place. The computer screen displays the time domain calibrations of the reflected signals on the vertical axis.A further facility of the software was the ability to display the individual radar pulses as time domain wiggle plots. This was a particularly valuable feature when looking at individual records in the vicinity of the tendons.Interpretation of findingsA full analysis of findings is given elsewhere, Essentially the digitized radar plots were transformed to color line scans and where double phase shifts were identified in the joints, then voiding was diagnosed.Conclusions1. An outline of the bridge research platform in Europe is given.2. The use of impulse radar has contributed considerably to the level of confidence in the assessment of the Besses o’ th’ Barn Rail Bridge.3. The radar investigations revealed extensive voiding within the post-tensioned cable ducts. However, no sign of corrosion on the stressing wires had been found except for the very first investigation.欧洲桥梁研究欧洲联盟共同的研究平台诞生于欧洲联盟。

电子商务与现代物流中英文对照外文翻译文献

电子商务与现代物流中英文对照外文翻译文献

电子商务与现代物流中英文对照外文翻译文献In this model。

the XXX its own logistics system。

the enterprise can control the entire process of delivery。

XXX。

this model requires a XXX.3.Third-party logistics model.XXX ns to a third-party logistics provider。

The third-party logistics provider handles the entire logistics process。

XXX。

the enterprise may lose some control over the logistics process and may have to pay higher fees for the services provided.Second。

the impact of electronic commerce on physical n1.Shortening of the n chain.XXX intermediaries in the n process。

such as XXX.2.Increased demand for logistics services.As more consumers shop online。

XXX.3.XXX.Electronic commerce has led to the XXX logistics models。

XXX connect consumers with individuals who XXX.Overall。

electronic commerce has had a significant impacton physical n。

金属热处理中英文对照外文翻译文献

金属热处理中英文对照外文翻译文献

中英文对照外文翻译文献(文档含英文原文和中文翻译)原文:Heat treatment of metalThe generally accepted definition for heat treating metals and metal alloys is “heating and cooling a solid metal or alloy in a way so as to obtain specific conditions or properties.” Heating for the sole purpose of hot working (as in forging operations) is excluded from this definition.Likewise,the types of heat treatment that are sometimes used for products such as glass or plastics are also excluded from coverage by this definition.Transformation CurvesThe basis for heat treatment is the time-temperature-transformation curves or TTT curves where,in a single diagram all the three parameters are plotted.Because of the shape of the curves,they are also sometimes called C-curves or S-curves.To plot TTT curves,the particular steel is held at a given temperature and the structure is examined at predetermined intervals to record the amount of transformation taken place.It is known that the eutectoid steel (T80) under equilibrium conditions contains,all austenite above 723℃,whereas below,it is the pearlite.To form pearlite,the carbon atoms should diffuse to form cementite.The diffusion being a rate process,would require sufficient time for complete transformation of austenite to pearlite.From different samples,it is possible to note the amount of the transformation taking place at any temperature.These points are then plotted on a graph with time and temperature as the axes.Through these points,transformation curves can be plotted as shown in Fig.1 for eutectoid steel.The curve at extreme left represents the time required for the transformation of austenite to pearlite to start at any given temperature.Similarly,the curve at extreme right represents the time required for completing the transformation.Between the two curves are the pointsrepresenting partial transformation. The horizontal lines Ms and Mf represent the start and finish of martensitic transformation.Classification of Heat Treating ProcessesIn some instances,heat treatment procedures are clear-cut in terms of technique and application.whereas in other instances,descriptions or simple explanations are insufficient because the same technique frequently may be used to obtain different objectives.For example, stress relieving and tempering are often accomplished with the same equipment and by use of identical time and temperature cycles.The objectives,however,are different for the two processes. The following descriptions of the principal heat treating processes are generally arranged according to their interrelationships.Normalizing consists of heating a ferrous alloy to a suitable temperature (usually 50°F to 100°F or 28℃ to 56℃) above its specific upper transformation temperature.This is followed by cooling in still air to at least some temperature well below its transformation temperature range.For low-carbon steels, the resulting structure and properties are the same as those achieved by full annealing;for most ferrous alloys, normalizing and annealing are not synonymous.Normalizing usually is used as a conditioning treatment, notably for refining the grains of steels that have been subjected to high temperatures for forging or other hot working operations. The normalizing process usually is succeeded by another heat treating operation such as austenitizing for hardening, annealing, or tempering.Annealing is a generic term denoting a heat treatment that consists of heating to and holding at a suitable temperature followed by cooling at a suitable rate. It is used primarily to soften metallicmaterials, but also to simultaneously produce desired changes in other properties or in microstructure. The purpose of such changes may be, but is not confined to, improvement of machinability, facilitation of cold work (known as in-process annealing), improvement of mechanical or electrical properties, or to increase dimensional stability. When applied solely to relive stresses, it commonly is called stress-relief annealing, synonymous with stress relieving.When the term “annealing” is applied to ferrous alloys without qualification, full annealing is applied. This is achieved by heating above the alloy’s transformation temperature, then applying a cooling cycle which provides maximum softness. This cycle may vary widely, depending on composition and characteristics of the specific alloy.Quenching is a rapid cooling of a steel or alloy from the austenitizing temperature by immersing the work piece in a liquid or gaseous medium. Quenching medium commonly used include water, 5% brine, 5% caustic in an aqueous solution, oil, polymer solutions, or gas (usually air or nitrogen).Selection of a quenching medium depends largely on the hardenability of material and the mass of the material being treating (principally section thickness).The cooling capabilities of the above-listed quenching media vary greatly. In selecting a quenching medium, it is best to avoid a solution that has more cooling power than is needed to achieve the results, thus minimizing the possibility of cracking and warp of the parts being treated. Modifications of the term quenching include direct quenching, fog quenching, hot quenching, interrupted quenching, selective quenching, spray quenching, and time quenching.Tempering. In heat treating of ferrous alloys, tempering consists of reheating the austenitized and quench-hardened steel or iron to some preselected temperature that is below the lower transformation temperature (generally below 1300 ℃ or 705 ℃ ). Tempering offers a means of obtaining various combinations of mechanical properties. Tempering temperatures used for hardened steels are often no higher than 300 ℃(150 ℃). The term “tempering” should not be confused with either process annealing or stress relieving. Even though time and temperature cycles for the three processes may be the same, the conditions of the materials being processed and the objectives may be different.Stress relieving. Like tempering, stress relieving is always done by heating to some temperature below the lower transformation temperature for steels and irons. For nonferrous metals, the temperature may vary from slightly above room temperature to several hundred degrees, depending on the alloy and the amount of stress relief that is desired.The primary purpose of stress relieving is to relieve stresses that have been imparted to the workpiece from such processes as forming, rolling, machining or welding. The usual procedure is toheat workpiece to the pre-established temperature long enough to reduce the residual stresses (this is a time-and temperature-dependent operation) to an acceptable level; this is followed by cooling at a relatively slow rate to avoid creation of new stresses.The generally accepted definition for heat treating metals and metal alloys is “heating and cooling a solid metal or alloy in a way so as to obtain specific conditions or properties.” Heating for the sole purpose of hot working (as in forging operations) is excluded from this definition.Likewise,the types of heat treatment that are sometimes used for products such as glass or plastics are also excluded from coverage by this definition.Transformation CurvesThe basis for heat treatment is the time-temperature-transformation curves or TTT curves where,in a single diagram all the three parameters are plotted.Because of the shape of the curves,they are also sometimes called C-curves or S-curves.To plot TTT curves,the particular steel is held at a given temperature and the structure is examined at predetermined intervals to record the amount of transformation taken place.It is known that the eutectoid steel (T80) under equilibrium conditions contains,all austenite above 723℃,whereas below,it is pearlite.To form pearlite,the carbon atoms should diffuse to form cementite.The diffusion being a rate process,would require sufficient time for complete transformation of austenite to pearlite.From different samples,it is possible to note the amount of the transformation taking place at any temperature.These points are then plotted on a graph with time and temperature as the axes.Through these points,transformation curves can be plotted as shown in Fig.1 for eutectoid steel.The curve at extreme left represents the time required for the transformation of austenite to pearlite to start at any given temperature.Similarly,the curve at extreme right represents the time required for completing the transformation.Between the two curves are the points representing partial transformation. The horizontal lines Ms and Mf represent the start and finish of martensitic transformation.Classification of Heat Treating ProcessesIn some instances,heat treatment procedures are clear-cut in terms of technique and application.whereas in other instances,descriptions or simple explanations are insufficient because the same technique frequently may be used to obtain different objectives.For example, stress relieving and tempering are often accomplished with the same equipment and by use of identical time and temperature cycles.The objectives,however,are different for the two processes.The following descriptions of the principal heat treating processes are generally arranged according to their interrelationships.Normalizing consists of heating a ferrous alloy to a suitable temperature (usually 50°F to 100°F or 28℃ to 56℃) above its specific upper transformation temperature.This is followed by cooling in still air to at least some temperature well below its transformation temperature range.For low-carbon steels, the resulting structure and properties are the same as those achieved by full annealing;for most ferrous alloys, normalizing and annealing are not synonymous.Normalizing usually is used as a conditioning treatment, notably for refining the grains of steels that have been subjected to high temperatures for forging or other hot working operations. The normalizing process usually is succeeded by another heat treating operation such as austenitizing for hardening, annealing, or tempering.Annealing is a generic term denoting a heat treatment that consists of heating to and holding at a suitable temperature followed by cooling at a suitable rate. It is used primarily to soften metallic materials, but also to simultaneously produce desired changes in other properties or in microstructure. The purpose of such changes may be, but is not confined to, improvement of machinability, facilitation of cold work (known as in-process annealing), improvement of mechanical or electrical properties, or to increase dimensional stability. When applied solely to relive stresses, it commonly is called stress-relief annealing, synonymous with stress relieving.When the term “annealing” is applied to ferrous alloys without qualification, full annealing is applied. This is achieved by heating above the alloy’s transformation temperature, then applying a cooling cycle which provides maximum softness. This cycle may vary widely, depending on composition and characteristics of the specific alloy.Quenching is a rapid cooling of a steel or alloy from the austenitizing temperature by immersing the workpiece in a liquid or gaseous medium. Quenching medium commonly used include water, 5% brine, 5% caustic in an aqueous solution, oil, polymer solutions, or gas (usually air or nitrogen).Selection of a quenching medium depends largely on the hardenability of material and the mass of the material being treating (principally section thickness).The cooling capabilities of the above-listed quenching media vary greatly. In selecting aquenching medium, it is best to avoid a solution that has more cooling power than is needed to achieve the results, thus minimizing the possibility of cracking and warp of the parts being treated. Modifications of the term quenching include direct quenching, fog quenching, hot quenching, interrupted quenching, selective quenching, spray quenching, and time quenching.Tempering. In heat treating of ferrous alloys, tempering consists of reheating the austenitized and quench-hardened steel or iron to some preselected temperature that is below the lower transformation temperature (generally below 1300 ℃ or 705 ℃). Tempering offers a means of obtaining various combinations of mechanical properties. Tempering temperatures used for hardened steels are often no higher than 300 ℃(150 ℃). The term “tempering” should not be confused with either process annealing or stress relieving. Even though time and temperature cycles for the three processes may be the same, the conditions of the materials being processed and the objectives may be different.Stress relieving. Like tempering, stress relieving is always done by heating to some temperature below the lower transformation temperature for steels and irons. For nonferrous metals, the temperature may vary from slightly above room temperature to several hundred degrees, depending on the alloy and the amount of stress relief that is desired.The primary purpose of stress relieving is to relieve stresses that have been imparted to the workpiece from such processes as forming, rolling, machining or welding. The usual procedure is to heat workpiece to the pre-established temperature long enough to reduce the residual stresses (this is a time-and temperature-dependent operation) to an acceptable level; this is followed by cooling at a relatively slow rate to avoid creation of new stresses.The generally accepted definition for heat treating metals and metal alloys is “heating and cooling a solid metal or alloy in a way so as to obtain specific conditions or properties.” Heating for the sole purpose of hot working (as in forging operations) is excluded from this definition.Likewise,the types of heat treatment that are sometimes used for products such as glass or plastics are also excluded from coverage by this definition.Transformation CurvesThe basis for heat treatment is the time-temperature-transformation curves or TTT curves where,in a single diagram all the three parameters are plotted.Because of the shape of the curves,they are also sometimes called C-curves or S-curves.To plot TTT curves,the particular steel is held at a given temperature and the structure is examined at predetermined intervals to record the amount of transformation taken place.It is known that the eutectoid steel (T80) under equilibrium conditions contains,all austenite above 723℃,whereas below,it is pearlite.To form pearlite,the carbon atoms should diffuse to form cementite.The diffusion being a rate process,would require sufficient time for complete transformation of austenite to pearlite.From different samples,it is possible to note the amount of the transformation taking placeat any temperature.These points are then plotted on a graph with time and temperature as the axes.Through these points,transformation curves can be plotted as shown in Fig.1 for eutectoid steel.The curve at extreme left represents the time required for the transformation of austenite to pearlite to start at any given temperature.Similarly,the curve at extreme right represents the time required for completing the transformation.Between the two curves are the points representing partial transformation. The horizontal lines Ms and Mf represent the start and finish of martensitic transformation.Classification of Heat Treating ProcessesIn some instances,heat treatment procedures are clear-cut in terms of technique and application.whereas in other instances,descriptions or simple explanations are insufficient because the same technique frequently may be used to obtain different objectives.For example, stressrelieving and tempering are often accomplished with the same equipment and by use of identical time and temperature cycles.The objectives,however,are different for the two processes.The following descriptions of the principal heat treating processes are generally arranged according to their interrelationships.Normalizing consists of heating a ferrous alloy to a suitable temperature (usually 50°F to 100°F or 28℃ to 56℃) above its specific upper transformation temperature.This is followed by cooling in still air to at least some temperature well below its transformation temperature range.For low-carbon steels, the resulting structure and properties are the same as those achieved by full annealing;for most ferrous alloys, normalizing and annealing are not synonymous.Normalizing usually is used as a conditioning treatment, notably for refining the grains of steels that have been subjected to high temperatures for forging or other hot working operations. The normalizing process usually is succeeded by another heat treating operation such as austenitizing for hardening, annealing, or tempering.Annealing is a generic term denoting a heat treatment that consists of heating to and holding at a suitable temperature followed by cooling at a suitable rate. It is used primarily to soften metallic materials, but also to simultaneously produce desired changes in other properties or in microstructure. The purpose of such changes may be, but is not confined to, improvement of machinability, facilitation of cold work (known as in-process annealing), improvement of mechanical or electrical properties, or to increase dimensional stability. When applied solely to relive stresses, it commonly is called stress-relief annealing, synonymous with stress relieving.When the term “annealing” is applied to ferrous alloys without qualification, full annealing is applied. This is achieved by heating above the alloy’s transformation temperature, then applying a cooling cycle which provides maximum softness. This cycle may vary widely, depending on composition and characteristics of the specific alloy.Quenching is a rapid cooling of a steel or alloy from the austenitizing temperature by immersing the workpiece in a liquid or gaseous medium. Quenching medium commonly used include water, 5% brine, 5% caustic in an aqueous solution, oil, polymer solutions, or gas (usually air or nitrogen).Selection of a quenching medium depends largely on the hardenability of material and the mass of the material being treating (principally section thickness).The cooling capabilities of the above-listed quenching media vary greatly. In selecting a quenching medium, it is best to avoid a solution that has more cooling power than is needed to achieve the results, thus minimizing the possibility of cracking and warp of the parts being treated. Modifications of the term quenching include direct quenching, fog quenching, hot quenching, interrupted quenching, selective quenching, spray quenching, and time quenching.Tempering. In heat treating of ferrous alloys, tempering consists of reheating the austenitized and quench-hardened steel or iron to some preselected temperature that is below the lower transformation temperature (generally below 1300 ℃ or 705 ℃). Tempering offers a means of obtaining various combinations of mechanical properties. Tempering temperatures used for hardened steels are often no higher than 300 oF (150 ℃). The term “tempering” should not be confused with either process annealing or stress relieving. Even though time and temperature cycles for the three processes may be the same, the conditions of the materials being processed and the objectives may be different.Stress relieving. Like tempering, stress relieving is always done by heating to some temperature below the lower transformation temperature for steels and irons. For nonferrous metals, the temperature may vary from slightly above room temperature to several hundred degrees, depending on the alloy and the amount of stress relief that is desired.The primary purpose of stress relieving is to relieve stresses that have been imparted to the workpiece from such processes as forming, rolling, machining or welding. The usual procedure is to heat workpiece to the pre-established temperature long enough to reduce the residual stresses (this is a time-and temperature-dependent operation) to an acceptable level; this is followed by cooling at a relatively slow rate to avoid creation of new stresses.金属热处理对于热处理金属和金属合金普遍接受的定义是对于热处理金属和金属合金普遍接受的定义是“加热和冷却的方式了坚实的金“加热和冷却的方式了坚实的金属或合金,以获得特定条件或属性为唯一目的。

建筑设计中英文对照外文翻译文献

建筑设计中英文对照外文翻译文献

中英文对照外文翻译文献(文档含英文原文和中文翻译)原文:Housing Problems and Options for the Elderly 1. IntroductionHousing is a critical element in the lives of older persons. The affordability of housing affects the ability of the elderly to afford other necessities of life such as food and medical care. Housing that is located near hospitals and doctors, shopping, transportation, and recreational facilities can facilitate access to services that can enhance the quality of life. Housing can also be a place of memories of the past and a connection to friends and neighbors. Housing with supportive features and access to services can also make it possible for persons to age in place. In this session, we will be examining housing problems andoptions for the elderly. Along the way, we will be testing your housing IQ with a series of questions and exercises.2. Housing Situation of Older PersonsHow typical is the housing situation of the olders?We will begin by examining five areas :(1)Prevalence of home ownership (2)Length of stay in current residence (3)Living arrangements (4)Attachments of older persons to where they live (5)Moving behavior.With whom older persons live can influence housing affordability, space needs, and the ability to age in place. About 54% of older persons live with their spouses, 31% live alone, almost 13% live with related persons other than their spouse and about 2% live with unrelated persons. With increasing age, older persons (primarily women) are more likely to live alone or with a relative other than a spouse. Frail older women living alone are the persons most likely to reside in homes with ‘extra’ rooms and to need both physically supportive housing features and services to "age in place". This segment of the population is also the group most likely to move to more supportive housing settings such as assisted living.Many older persons have strong psychological attachments to their homes related to length of residence. The home often represents the place where they raised their children and a lifetime of memories. It is also a connection to an array of familiar persons such as neighbors and shopkeepers as well as near by places including houses of worship, libraries and community services. For manyolder persons, the home is an extension of their own personalities which is found in the furnishings . In addition, the home can represent a sense of economic security for the future, especially for homeowners who have paid off their mortgages. For owners, the home is usually their most valuable financial asset. The home also symbolizes a sense of independence in that the resident is able to live on his or her own. For these types of reasons, it is understandable that in response to a question about housing preferences, AARP surveys of older persons continue to find that approximately 80% of older persons report that what they want is to "stay in their own homes and never move." This phenomena has been termed the preference to "age in place."Although most older persons move near their current communities, some seek retirement communities in places with warmer weather in the southwest, far west and the south.3. The Federal Government's Housing Programs for the ElderlyThe federal government has had two basic housing strategies to address housing problems of the elderly. One strategy, termed the "supply side" approach, seeks to build new housing complexes such as public housing and Section 202 housing for older persons. Public housing is administered by quasi-governmental local public housing authorities. Section 202 Housing for the elderly and disabled is sponsored by non-profit organizations including religious and non-sectarian organizations. Approximately 1.5 million olderpersons or 3% of the elderly population live in federally assisted housing, with about 387,000 living in Section 202 housing. Over time, the government has shifted away from such new construction programs because of the cost of such housing, the problems that a number of non-elderly housing programs have experienced, and a philosophy that the government should no longer be directly involved with the building of housing. Section 202 housing, a very popular and successful program, is one of the few supply-side programs funded by the federal government, although the budget allocation during the last ten years has allowed for the construction of only about 6,000 units per year compared to a high of almost 20,000 units in the late 1970s. Instead of funding new construction, federal housing initiatives over the last decade have emphasized ‘demand side’ subsidies that provide low-income renters with a certificate or a voucher that they can use in a variety of multiunit settings, including apartments in the private sector that meet rental and condition guidelines. These vouchers and certificates are aimed at reducing excessive housing costs. Some certificates are termed ‘project based’ subsidies and are tied to federally subsidized housing such as Section 202. Because housing programs are not an entitlement, however, supply-side and demand side programs together are only able to meet the needs of about 1/3 of elderly renters who qualify on the basis of income.While advocates for housing have been trying to hold on to the existing programs in the face of huge budget cuts at HUD, much of the attention has been shifting towards meeting the shelter and service needs of the frail elderly. This emphasis reflects the increasing number of older persons in their eightiesand nineties who need a physically supportive environment linked with services. This group of older persons includes a high percentage of older residents of public and Section 202 housing. Initially built for independent older persons who were initially in the late sixties and early seventies, this type of housing now includes older persons in their eighties and nineties, many of whom have aged in place. Consequently, the government is faced with creating strategies to bring services into these buildings and retrofit them to better suit the needs of frail older persons. A major initiative of the early 1990s, which may be stalled by current budget problems at HUD, has been for the federal government to pay for service coordinators to assess the needs of residents of government assisted housing complexes and link them with services. As of 1998, there were approximately 1,000 service coordinators attached to government assisted housing complexes across the country.4. The Housing Continuum: A Range of Options for ElderlyA long-standing assumption in the field of housing has been that as persons become more frail, they will have to move along a housing continuum from one setting to another. As the figure on housing options suggests, along this continuum are found a range of housing options including single family homes, apartments, congregate living, assisted living, and board and care homes (Kendig & Pynoos, 1996). The end point of the housing continuum has been thenursing home. These options vary considerably in terms of their availability, affordability, and ability to meet the needs of very frail older persons.The concept of a continuum of supportive care is based on the assumption that housing options can be differentiated by the amount and types of services offered; the supportiveness of the physical setting in terms of accessibility, features, and design; and the competency level of the persons to whom the housing is targeted. The figure on housing options indicates how such options generally meet the needs of older persons who are categorized,as independent, semi-dependent and dependent. Semi-dependent older persons can be thought of as needing some assistance from other persons with instrumental activities of daily living (IADLs) such as cooking, cleaning, and shopping. In addition to needing assistance with some IADLs, dependent older persons may require assistance with more basic activities such as toileting, eating and bathing. Although semi-dependent and dependent older persons can be found throughout the housing continuum, independent older persons are very unlikely to reside in housing types such as assisted living specifically designed and equipped to meet the needs of frail older persons unless their spouses require these needs.Although the continuum of housing identifies a range of housing types, there is increasing recognition that frail older persons do not necessarily have to move from one setting to another if they need assistance. Semi-dependent or dependent older persons can live in a variety of settings, including their own homes and apartments, if the physical environment is made more supportive, caregivers are available to provide assistance and affordable services areaccessible.5. ConclusionsHousing plays a critical role in the lives of older persons. Most older homeowners who function independently express a high level of satisfaction with their dwelling units. However, high housing costs, especially for renters, remain a financial burden for many older persons and problems associated with housing condition persist especially for low- income renters and persons living in rural areas. Federal housing programs such as public housing, Section 202 housing, and Section 8 housing certificates have only been able to address the basic housing problems of only about one-third of eligible older persons because of limited budgets. Moreover, a shortage of viable residential options exists for frail older persons. Up until the last decade, housing for the elderly was conceived of primarily as shelter. It has become increasingly recognized that frail older persons who needed services and physically supportive features often had to move from their homes or apartments to settings such as board and care or nursing homes to receive assistance. Over time, however, the concept of a variety of housing types that can be linked has replaced the original idea of the continuum of housing. It is possible for frail older persons to live in a variety of existing residential settings, including their own homes and apartments with the addition of services and home modifications. Consequently, the last decade has seen a number of efforts to modify homes, add service coordinators to multi-unit housing and create options such as accessory and ECHO units. Although thesestrategies have been enhanced by a somewhat greater availability of home care services, Medicaid policy still provides incentives to house frail older persons in nursing homes. The most visible development in the field of housing for frail older persons has been the growth of private sector assisted living which is now viewed by many state governments as a residential alternative to nursing homes. The AL movement itself has raised a number of regulatory and financing issues that cross-cut housing and long term care such as what constitutes a residential environment, insuring that residents can age in place, accommodating resident preferences, protecting the rights of individuals and insuring quality of care. Nevertheless, the emergence of AL along with a wider range of other housing options holds out the promise that older persons will have a larger range of choices among living arrangements.译文:老年人的住宅问题与选择一、简介住宅在老年人生活的极为重要。

桥梁工程中英文对照外文翻译文献

桥梁工程中英文对照外文翻译文献

桥梁工程中英文对照外文翻译文献(文档含英文原文和中文翻译)BRIDGE ENGINEERING AND AESTHETICSEvolvement of bridge Engineering,brief reviewAmong the early documented reviews of construction materials and structu re types are the books of Marcus Vitruvios Pollio in the first century B.C.The basic principles of statics were developed by the Greeks , and were exemplifi ed in works and applications by Leonardo da Vinci,Cardeno,and Galileo.In the fifteenth and sixteenth century, engineers seemed to be unaware of this record , and relied solely on experience and tradition for building bridges and aqueduc ts .The state of the art changed rapidly toward the end of the seventeenth cent ury when Leibnitz, Newton, and Bernoulli introduced mathematical formulatio ns. Published works by Lahire (1695)and Belidor (1792) about the theoretical a nalysis of structures provided the basis in the field of mechanics of materials .Kuzmanovic(1977) focuses on stone and wood as the first bridge-building materials. Iron was introduced during the transitional period from wood to steel .According to recent records , concrete was used in France as early as 1840 for a bridge 39 feet (12 m) long to span the Garoyne Canal at Grisoles, but r einforced concrete was not introduced in bridge construction until the beginnin g of this century . Prestressed concrete was first used in 1927.Stone bridges of the arch type (integrated superstructure and substructure) were constructed in Rome and other European cities in the middle ages . Thes e arches were half-circular , with flat arches beginning to dominate bridge wor k during the Renaissance period. This concept was markedly improved at the e nd of the eighteenth century and found structurally adequate to accommodate f uture railroad loads . In terms of analysis and use of materials , stone bridges have not changed much ,but the theoretical treatment was improved by introd ucing the pressure-line concept in the early 1670s(Lahire, 1695) . The arch the ory was documented in model tests where typical failure modes were considered (Frezier,1739).Culmann(1851) introduced the elastic center method for fixed-e nd arches, and showed that three redundant parameters can be found by the us e of three equations of coMPatibility.Wooden trusses were used in bridges during the sixteenth century when P alladio built triangular frames for bridge spans 10 feet long . This effort also f ocused on the three basic principles og bridge design : convenience(serviceabili ty) ,appearance , and endurance(strength) . several timber truss bridges were co nstructed in western Europe beginning in the 1750s with spans up to 200 feet (61m) supported on stone substructures .Significant progress was possible in t he United States and Russia during the nineteenth century ,prompted by the ne ed to cross major rivers and by an abundance of suitable timber . Favorable e conomic considerations included initial low cost and fast construction .The transition from wooden bridges to steel types probably did not begin until about 1840 ,although the first documented use of iron in bridges was the chain bridge built in 1734 across the Oder River in Prussia . The first truss completely made of iron was in 1840 in the United States , followed by Eng land in 1845 , Germany in 1853 , and Russia in 1857 . In 1840 , the first ir on arch truss bridge was built across the Erie Canal at Utica .The Impetus of AnalysisThe theory of structures ,developed mainly in the ninetheenth century,foc used on truss analysis, with the first book on bridges written in 1811. The Wa rren triangular truss was introduced in 1846 , supplemented by a method for c alculating the correcet forces .I-beams fabricated from plates became popular in England and were used in short-span bridges.In 1866, Culmann explained the principles of cantilever truss bridges, an d one year later the first cantilever bridge was built across the Main River in Hassfurt, Germany, with a center span of 425 feet (130m) . The first cantileve r bridge in the United States was built in 1875 across the Kentucky River.A most impressive railway cantilever bridge in the nineteenth century was the Fir st of Forth bridge , built between 1883 and 1893 , with span magnitudes of 1711 feet (521.5m).At about the same time , structural steel was introduced as a prime mater ial in bridge work , although its quality was often poor . Several early exampl es are the Eads bridge in St.Louis ; the Brooklyn bridge in New York ; and t he Glasgow bridge in Missouri , all completed between 1874 and 1883.Among the analytical and design progress to be mentioned are the contrib utions of Maxwell , particularly for certain statically indeterminate trusses ; the books by Cremona (1872) on graphical statics; the force method redefined by Mohr; and the works by Clapeyron who introduced the three-moment equation s.The Impetus of New MaterialsSince the beginning of the twentieth century , concrete has taken its place as one of the most useful and important structural materials . Because of the coMParative ease with which it can be molded into any desired shape , its st ructural uses are almost unlimited . Wherever Portland cement and suitable agg regates are available , it can replace other materials for certain types of structu res, such as bridge substructure and foundation elements .In addition , the introduction of reinforced concrete in multispan frames at the beginning of this century imposed new analytical requirements . Structures of a high order of redundancy could not be analyzed with the classical metho ds of the nineteenth century .The importance of joint rotation was already dem onstrated by Manderla (1880) and Bendixen (1914) , who developed relationshi ps between joint moments and angular rotations from which the unknown mom ents can be obtained ,the so called slope-deflection method .More simplification s in frame analysis were made possible by the work of Calisev (1923) , who used successive approximations to reduce the system of equations to one simpl e expression for each iteration step . This approach was further refined and int egrated by Cross (1930) in what is known as the method of moment distributi on .One of the most import important recent developments in the area of analytical procedures is the extension of design to cover the elastic-plastic range , also known as load factor or ultimate design. Plastic analysis was introduced with some practical observations by Tresca (1846) ; and was formulated by Sa int-Venant (1870) , The concept of plasticity attracted researchers and engineers after World War Ⅰ, mainly in Germany , with the center of activity shifting to England and the United States after World War Ⅱ.The probabilistic approa ch is a new design concept that is expected to replace the classical determinist ic methodology.A main step forward was the 1969 addition of the Federal Highway Adim inistration (F HWA)”Criteria for Reinforced Concrete Bridge Members “ that co vers strength and serviceability at ultimate design . This was prepared for use in conjunction with the 1969 American Association of State Highway Offficials (AASHO) Standard Specification, and was presented in a format that is readil y adaptable to the development of ultimate design specifications .According to this document , the proportioning of reinforced concrete members ( including c olumns ) may be limited by various stages of behavior : elastic , cracked , an d ultimate . Design axial loads , or design shears . Structural capacity is the r eaction phase , and all calculated modified strength values derived from theoret ical strengths are the capacity values , such as moment capacity ,axial load ca pacity ,or shear capacity .At serviceability states , investigations may also be n ecessary for deflections , maximum crack width , and fatigue .Bridge TypesA notable bridge type is the suspension bridge , with the first example bu ilt in the United States in 1796. Problems of dynamic stability were investigate d after the Tacoma bridge collapse , and this work led to significant theoretica l contributions Steinman ( 1929 ) summarizes about 250 suspension bridges bu ilt throughout the world between 1741 and 1928 .With the introduction of the interstate system and the need to provide stru ctures at grade separations , certain bridge types have taken a strong place in bridge practice. These include concrete superstructures (slab ,T-beams,concrete box girders ), steel beam and plate girders , steel box girders , composite const ruction , orthotropic plates , segmental construction , curved girders ,and cable-stayed bridges . Prefabricated members are given serious consideration , while interest in box sections remains strong .Bridge Appearance and AestheticsGrimm ( 1975 ) documents the first recorded legislative effort to control t he appearance of the built environment . This occurred in 1647 when the Cou ncil of New Amsterdam appointed three officials . In 1954 , the Supreme Cou rt of the United States held that it is within the power of the legislature to de termine that communities should be attractive as well as healthy , spacious as well as clean , and balanced as well as patrolled . The Environmental Policy Act of 1969 directs all agencies of the federal government to identify and dev elop methods and procedures to ensure that presently unquantified environmenta l amentities and values are given appropriate consideration in decision making along with economic and technical aspects .Although in many civil engineering works aesthetics has been practiced al most intuitively , particularly in the past , bridge engineers have not ignored o r neglected the aesthetic disciplines .Recent research on the subject appears to lead to a rationalized aesthetic design methodology (Grimm and Preiser , 1976 ) .Work has been done on the aesthetics of color ,light ,texture , shape , and proportions , as well as other perceptual modalities , and this direction is bot h theoretically and empirically oriented .Aesthetic control mechanisms are commonly integrated into the land-use re gulations and design standards . In addition to concern for aesthetics at the sta te level , federal concern focuses also on the effects of man-constructed enviro nment on human life , with guidelines and criteria directed toward improving quality and appearance in the design process . Good potential for the upgradin g of aesthetic quality in bridge superstructures and substructures can be seen in the evaluation structure types aimed at improving overall appearance .Lords and lording groupsThe loads to be considered in the design of substructures and bridge foun dations include loads and forces transmitted from the superstructure, and those acting directly on the substructure and foundation .AASHTO loads . Section 3 of AASHTO specifications summarizes the loa ds and forces to be considered in the design of bridges (superstructure and sub structure ) . Briefly , these are dead load ,live load , iMPact or dynamic effec t of live load , wind load , and other forces such as longitudinal forces , cent rifugal force ,thermal forces , earth pressure , buoyancy , shrinkage and long t erm creep , rib shortening , erection stresses , ice and current pressure , collisi on force , and earthquake stresses .Besides these conventional loads that are ge nerally quantified , AASHTO also recognizes indirect load effects such as fricti on at expansion bearings and stresses associated with differential settlement of bridge components .The LRFD specifications divide loads into two distinct cate gories : permanent and transient .Permanent loadsDead Load : this includes the weight DC of all bridge components , appu rtenances and utilities, wearing surface DW nd future overlays , and earth fill EV. Both AASHTO and LRFD specifications give tables summarizing the unit weights of materials commonly used in bridge work .Transient LoadsVehicular Live Load (LL) Vehicle loading for short-span bridges :considera ble effort has been made in the United States and Canada to develop a live lo ad model that can represent the highway loading more realistically than the H or the HS AASHTO models . The current AASHTO model is still the applica ble loading.桥梁工程和桥梁美学桥梁工程的发展概况早在公元前1世纪,Marcus Vitrucios Pollio 的著作中就有关于建筑材料和结构类型的记载和评述。

数据采集系统中英文对照外文翻译文献

数据采集系统中英文对照外文翻译文献

中英文对照外文翻译(文档含英文原文和中文翻译)Data Acquisition SystemsData acquisition systems are used to acquire process operating data and store it on,secondary storage devices for later analysis. Many or the data acquisition systems acquire this data at very high speeds and very little computer time is left to carry out any necessary, or desirable, data manipulations or reduction. All the data are stored on secondary storage devices and manipulated subsequently to derive the variables ofin-terest. It is very often necessary to design special purpose data acquisition systems and interfaces to acquire the high speed process data. This special purpose design can be an expensive proposition.Powerful mini- and mainframe computers are used to combine the data acquisition with other functions such as comparisons between the actual output and the desirable output values, and to then decide on the control action which must be taken to ensure that the output variables lie within preset limits. The computing power required will depend upon the type of process control system implemented. Software requirements for carrying out proportional, ratio or three term control of process variables are relatively trivial, and microcomputers can be used to implement such process control systems. It would not be possible to use many of the currently available microcomputers for the implementation of high speed adaptive control systems which require the use of suitable process models and considerable online manipulation of data.Microcomputer based data loggers are used to carry out intermediate functions such as data acquisition at comparatively low speeds, simple mathematical manipulations of raw data and some forms of data reduction. The first generation of data loggers, without any programmable computing facilities, was used simply for slow speed data acquisition from up to one hundred channels. All the acquired data could be punched out on paper tape or printed for subsequent analysis. Such hardwired data loggers are being replaced by the new generation of data loggers which incorporate microcomputers and can be programmed by the user. They offer an extremely good method of collecting the process data, using standardized interfaces, and subsequently performing the necessary manipulations to provide the information of interest to the process operator. The data acquired can be analyzed to establish correlations, if any, between process variables and to develop mathematical models necessary for adaptive and optimal process control.The data acquisition function carried out by data loggers varies from one to 9 in system to another. Simple data logging systems acquire data from a few channels while complex systems can receive data from hundreds, or even thousands, of input channels distributed around one or more processes. The rudimentary data loggers scan the selected number of channels, connected to sensors or transducers, in a sequential manner and the data are recorded in a digital format. A data logger can be dedicated in the sense that it can only collect data from particular types of sensors and transducers. It is best to use a nondedicated data logger since any transducer or sensor can be connected to the channels via suitable interface circuitry. This facility requires the use of appropriate signal conditioning modules.Microcomputer controlled data acquisition facilitates the scanning of a large number of sensors. The scanning rate depends upon the signal dynamics which means that some channels must be scanned at very high speeds in order to avoid aliasing errors while there is very little loss of information by scanning other channels at slower speeds. In some data logging applications the faster channels require sampling at speeds of up to 100 times per second while slow channels can be sampled once every five minutes. The conventional hardwired, non-programmable data loggers sample all the channels in a sequential manner and the sampling frequency of all the channels must be the same. This procedure results in the accumulation of very large amounts of data, some of which is unnecessary, and also slows down the overall effective sampling frequency. Microcomputer based data loggers can be used to scan some fast channels at a higher frequency than other slow speed channels.The vast majority of the user programmable data loggers can be used to scan up to 1000 analog and 1000 digital input channels. A small number of data loggers, with a higher degree of sophistication, are suitable for acquiring data from up to 15, 000 analog and digital channels. The data from digital channels can be in the form of Transistor- Transistor Logic or contact closure signals. Analog data must be converted into digital format before it is recorded and requires the use of suitable analog to digital converters (ADC).The characteristics of the ADC will define the resolution that can be achieved and the rate at which the various channels can be sampled. An in-crease in the number of bits used in the ADC improves the resolution capability. Successive approximation ADC's arefaster than integrating ADC's. Many microcomputer controlled data loggers include a facility to program the channel scanning rates. Typical scanning rates vary from 2 channels per second to 10, 000 channels per second.Most data loggers have a resolution capability of ±0.01% or better, It is also pos-sible to achieve a resolution of 1 micro-volt. The resolution capability, in absolute terms, also depends upon the range of input signals, Standard input signal ranges are 0-10 volt, 0-50 volt and 0-100 volt. The lowest measurable signal varies form 1 t, volt to 50, volt. A higher degree of recording accuracy can be achieved by using modules which accept data in small, selectable ranges. An alternative is the auto ranging facil-ity available on some data loggers.The accuracy with which the data are acquired and logged-on the appropriate storage device is extremely important. It is therefore necessary that the data acquisi-tion module should be able to reject common mode noise and common mode voltage. Typical common mode noise rejection capabilities lie in the range 110 dB to 150 dB. A decibel (dB) is a tern which defines the ratio of the power levels of two signals. Thus if the reference and actual signals have power levels of N, and Na respectively, they will have a ratio of n decibels, wheren=10 Log10(Na /Nr)Protection against maximum common mode voltages of 200 to 500 volt is available on typical microcomputer based data loggers.The voltage input to an individual data logger channel is measured, scaled and linearised before any further data manipulations or comparisons are carried out.In many situations, it becomes necessary to alter the frequency at which particu-lar channels are sampled depending upon the values of data signals received from a particular input sensor. Thus a channel might normally be sampled once every 10 minutes. If, however, the sensor signals approach the alarm limit, then it is obviously desirable to sample that channel once every minute or even faster so that the operators can be informed, thereby avoiding any catastrophes. Microcomputer controlledintel-ligent data loggers may be programmed to alter the sampling frequencies depending upon the values of process signals. Other data loggers include self-scanning modules which can initiate sampling.The conventional hardwired data loggers, without any programming facilities, simply record the instantaneous values of transducer outputs at a regular samplingin-terval. This raw data often means very little to the typical user. To be meaningful, this data must be linearised and scaled, using a calibration curve, in order to determine the real value of the variable in appropriate engineering units. Prior to the availability of programmable data loggers, this function was usually carried out in the off-line mode on a mini- or mainframe computer. The raw data values had to be punched out on pa-per tape, in binary or octal code, to be input subsequently to the computer used for analysis purposes and converted to the engineering units. Paper tape punches are slow speed mechanical devices which reduce the speed at which channels can be scanned. An alternative was to print out the raw data values which further reduced the data scanning rate. It was not possible to carry out any limit comparisons or provide any alarm information. Every single value acquired by the data logger had to be recorded eventhough it might not serve any useful purpose during subsequent analysis; many data values only need recording when they lie outside the pre-set low and high limits.If the analog data must be transmitted over any distance, differences in ground potential between the signal source and final location can add noise in the interface design. In order to separate common-mode interference form the signal to be recorded or processed, devices designed for this purpose, such as instrumentation amplifiers, may be used. An instrumentation amplifier is characterized by good common-mode- rejection capability, a high input impedance, low drift, adjustable gain, and greater cost than operational amplifiers. They range from monolithic ICs to potted modules, and larger rack-mounted modules with manual scaling and null adjustments. When a very high common-mode voltage is present or the need for extremely-lowcom-mon-mode leakage current exists(as in many medical-electronics applications),an isolation amplifier is required. Isolation amplifiers may use optical or transformer isolation.Analog function circuits are special-purpose circuits that are used for a variety of signal conditioning operations on signals which are in analog form. When their accu-racy is adequate, they can relieve the microprocessor of time-consuming software and computations. Among the typical operations performed are multiplications, division, powers, roots, nonlinear functions such as for linearizing transducers, rimsmeasure-ments, computing vector sums, integration and differentiation, andcurrent-to-voltage or voltage- to-current conversion. Many of these operations can be purchased in available devices as multiplier/dividers, log/antilog amplifiers, and others.When data from a number of independent signal sources must be processed by the same microcomputer or communications channel, a multiplexer is used to channel the input signals into the A/D converter.Multiplexers are also used in reverse, as when a converter must distribute analog information to many different channels. The multiplexer is fed by a D/A converter which continually refreshes the output channels with new information.In many systems, the analog signal varies during the time that the converter takes to digitize an input signal. The changes in this signal level during the conversion process can result in errors since the conversion period can be completed some time after the conversion command. The final value never represents the data at the instant when the conversion command is transmitted. Sample-hold circuits are used to make an acquisition of the varying analog signal and to hold this signal for the duration of the conversion process. Sample-hold circuits are common in multichannel distribution systems where they allow each channel to receive and hold the signal level.In order to get the data in digital form as rapidly and as accurately as possible, we must use an analog/digital (A/D) converter, which might be a shaft encoder, a small module with digital outputs, or a high-resolution, high-speed panel instrument. These devices, which range form IC chips to rack-mounted instruments, convert ana-log input data, usually voltage, into an equivalent digital form. The characteristics of A/D converters include absolute and relative accuracy, linearity, monotonic, resolu-tion, conversion speed, and stability. A choice of input ranges, output codes, and other features are available. The successive-approximation technique is popular for a large number ofapplications, with the most popular alternatives being the counter-comparator types, and dual-ramp approaches. The dual-ramp has been widely-used in digital voltmeters.D/A converters convert a digital format into an equivalent analog representation. The basic converter consists of a circuit of weighted resistance values or ratios, each controlled by a particular level or weight of digital input data, which develops the output voltage or current in accordance with the digital input code. A special class of D/A converter exists which have the capability of handling variable reference sources. These devices are the multiplying DACs. Their output value is the product of the number represented by the digital input code and the analog reference voltage, which may vary form full scale to zero, and in some cases, to negative values.Component Selection CriteriaIn the past decade, data-acquisition hardware has changed radically due to ad-vances in semiconductors, and prices have come down too; what have not changed, however, are the fundamental system problems confronting the designer. Signals may be obscured by noise, rfi,ground loops, power-line pickup, and transients coupled into signal lines from machinery. Separating the signals from these effects becomes a matter for concern.Data-acquisition systems may be separated into two basic categories:(1)those suited to favorable environments like laboratories -and(2)those required for hostile environments such as factories, vehicles, and military installations. The latter group includes industrial process control systems where temperature information may be gathered by sensors on tanks, boilers, wats, or pipelines that may be spread over miles of facilities. That data may then be sent to a central processor to provide real-time process control. The digital control of steel mills, automated chemical production, and machine tools is carried out in this kind of hostile environment. The vulnerability of the data signals leads to the requirement for isolation and other techniques.At the other end of the spectrum-laboratory applications, such as test systems for gathering information on gas chromatographs, mass spectrometers, and other sophis-ticated instruments-the designer's problems are concerned with the performing of sen-sitive measurements under favorable conditions rather than with the problem ofpro-tecting the integrity of collected data under hostile conditions.Systems in hostile environments might require components for wide tempera-tures, shielding, common-mode noise reduction, conversion at an early stage, redun-dant circuits for critical measurements, and preprocessing of the digital data to test its reliability. Laboratory systems, on the other hand, will have narrower temperature ranges and less ambient noise. But the higher accuracies require sensitive devices, and a major effort may be necessary for the required signal /noise ratios.The choice of configuration and components in data-acquisition design depends on consideration of a number of factors:1. Resolution and accuracy required in final format.2. Number of analog sensors to be monitored.3. Sampling rate desired.4. Signal-conditioning requirement due to environment and accuracy.5. Cost trade-offs.Some of the choices for a basic data-acquisition configuration include:1 .Single-channel techniques.A. Direct conversion.B. Preamplification and direct conversion.C. Sample-hold and conversion.D. Preamplification, sample-hold, and conversion.E. Preamplification, signal-conditioning, and direct conversion.F. Preamplification, signal-conditioning, sample-hold, and conversion.2. Multichannel techniques.A. Multiplexing the outputs of single-channel converters.B. Multiplexing the outputs of sample-holds.C. Multiplexing the inputs of sample-holds.D. Multiplexing low-level data.E. More than one tier of multiplexers.Signal-conditioning may include:1. Radiometric conversion techniques.B. Range biasing.D. Logarithmic compression.A. Analog filtering.B. Integrating converters.C. Digital data processing.We shall consider these techniques later, but first we will examine some of the components used in these data-acquisition system configurations.MultiplexersWhen more than one channel requires analog-to-digital conversion, it is neces-sary to use time-division multiplexing in order to connect the analog inputs to a single converter, or to provide a converter for each input and then combine the converter outputs by digital multiplexing.Analog MultiplexersAnalog multiplexer circuits allow the timesharing of analog-to-digital converters between a numbers of analog information channels. An analog multiplexer consists of a group of switches arranged with inputs connected to the individual analog channels and outputs connected in common(as shown in Fig. 1).The switches may be ad-dressed by a digital input code.Many alternative analog switches are available in electromechanical and solid-state forms. Electromechanical switch types include relays, stepper switches,cross-bar switches, mercury-wetted switches, and dry-reed relay switches. The best switching speed is provided by reed relays(about 1 ms).The mechanical switches provide high do isolation resistance, low contact resistance, and the capacity to handle voltages up to 1 KV, and they are usually inexpensive. Multiplexers using mechanical switches are suited to low-speed applications as well as those having high resolution requirements. They interface well with the slower A/D converters, like the integrating dual-slope types. Mechanical switches have a finite life, however, usually expressed innumber of operations. A reed relay might have a life of 109 operations, which wouldallow a 3-year life at 10 operations/second.Solid-state switch devices are capable of operation at 30 ns, and they have a life which exceeds most equipment requirements. Field-effect transistors(FETs)are used in most multiplexers. They have superseded bipolar transistors which can introduce large voltage offsets when used as switches.FET devices have a leakage from drain to source in the off state and a leakage from gate or substrate to drain and source in both the on and off states. Gate leakage in MOS devices is small compared to other sources of leakage. When the device has a Zener-diode-protected gate, an additional leakage path exists between the gate and source.Enhancement-mode MOS-FETs have the advantage that the switch turns off when power is removed from the MUX. Junction-FET multiplexers always turn on with the power off.A more recent development, the CMOS-complementary MOS-switch has the advantage of being able to multiplex voltages up to and including the supply voltages. A±10-V signal can be handled with a ±10-V supply.Trade-off Considerations for the DesignerAnalog multiplexing has been the favored technique for achieving lowest system cost. The decreasing cost of A/D converters and the availability of low-cost, digital integrated circuits specifically designed for multiplexing provide an alternative with advantages for some applications. A decision on the technique to use for a givensys-tem will hinge on trade-offs between the following factors:1. Resolution. The cost of A/D converters rises steeply as the resolution increases due to the cost of precision elements. At the 8-bit level, the per-channel cost of an analog multiplexer may be a considerable proportion of the cost of a converter. At resolutions above 12 bits, the reverse is true, and analog multiplexing tends to be more economical.2. Number of channels. This controls the size of the multiplexer required and the amount of wiring and interconnections. Digital multiplexing onto a common data bus reduces wiring to a minimum in many cases. Analog multiplexing is suited for 8 to 256 channels; beyond this number, the technique is unwieldy and analog errors be-come difficult to minimize. Analog and digital multiplexing is often combined in very large systems.3. Speed of measurement, or throughput. High-speed A/D converters can add a considerable cost to the system. If analog multiplexing demands a high-speedcon-verter to achieve the desired sample rate, a slower converter for each channel with digital multiplexing can be less costly.4. Signal level and conditioning. Wide dynamic ranges between channels can be difficult with analog multiplexing. Signals less than 1V generally require differential low-level analog multiplexing which is expensive, with programmable-gain amplifiers after the MUX operation. The alternative of fixed-gain converters on each channel, with signal-conditioning designed for the channel requirement, with digital multi-plexing may be more efficient.5. Physical location of measurement points. Analog multiplexing is suitedfor making measurements at distances up to a few hundred feet from the converter, since analog lines may suffer from losses, transmission-line reflections, and interference. Lines may range from twisted wire pairs to multiconductor shielded cable, depending on signal levels, distance, and noise environments. Digital multiplexing is operable to thousands of miles, with the proper transmission equipment, for digital transmission systems can offer the powerful noise-rejection characteristics that are required for29 Data Acquisition Systems long-distance transmission.Digital MultiplexingFor systems with small numbers of channels, medium-scale integrated digital multiplexers are available in TTL and MOS logic families. The 74151 is a typical example. Eight of these integrated circuits can be used to multiplex eight A/D con-verters of 8-bit resolution onto a common data bus.This digital multiplexing example offers little advantages in wiring economy, but it is lowest in cost, and the high switching speed allows operation at sampling rates much faster than analog multiplexers. The A/D converters are required only to keep up with the channel sample rate, and not with the commutating rate. When large numbers of A/D converters are multiplexed, the data-bus technique reduces system interconnections. This alone may in many cases justify multiple A/D converters. Data can be bussed onto the lines in bit-parallel or bit-serial format, as many converters have both serial and parallel outputs. A variety of devices can be used to drive the bus, from open collector and tristate TTL gates to line drivers and optoelectronic isolators. Channel-selection decoders can be built from 1-of-16 decoders to the required size. This technique also allows additional reliability in that a failure of one A/D does not affect the other channels. An important requirement is that the multiplexer operate without introducing unacceptable errors at the sample-rate speed. For a digital MUX system, one can determine the speed from propagation delays and the time required to charge the bus capacitance.Analog multiplexers can be more difficult to characterize. Their speed is a func-tion not only of internal parameters but also external parameters such as channel, source impedance, stray capacitance and the number of channels, and the circuit lay-out. The user must be aware of the limiting parameters in the system to judge their ef-fect on performance.The nonideal transmission and open-circuit characteristics of analog multiplexers can introduce static and dynamic errors into the signal path. These errors include leakage through switches, coupling of control signals into the analog path, and inter-actions with sources and following amplifiers. Moreover, the circuit layout can com-pound these effects.Since analog multiplexers may be connected directly to sources which may have little overload capacity or poor settling after overloads, the switches should have a break-before-make action to prevent the possibility of shorting channels together. It may be necessary to avoid shorted channels when power is removed and a chan-nels-off with power-down characteristic is desirable. In addition to the chan-nel-addressing lines, which are normally binary-coded, it is useful to have inhibited or enable lines to turn all switches off regardless of the channel being addressed. This simplifies the external logic necessary to cascade multiplexers and can also be useful in certain modes of channeladdressing. Another requirement for both analog and digital multiplexers is the tolerance of line transients and overload conditions, and the ability to absorb the transient energy and recover without damage.数据采集系统数据采集系统是用来获取数据处理和存储在二级存储设备,为后来的分析。

纺织专业气流纺纱中英文对照外文翻译文献

纺织专业气流纺纱中英文对照外文翻译文献

中英文对照外文翻译(文档含英文原文和中文翻译)Rotor SpinningRotor spinning involves the separation of fibers by vigorous drafting, and then collection and twisting of the fibers in a rotor. In the actual rotor spinning machine, draw frame sliver is presented to a spring-loaded feed plate and feed roller. Fibers within the sliver are then individualized by a combing roller covered with saw-tooth wire clothing. Once opened, the fibers pass through a transport tube in which they are separated further and parallelized before being deposited on the inside wall of the rotor. Centrifugal forces, generated by the rotor turning at high speeds, cause the fibers to collect along the wall of the rotor, forming a ring. The fiber ring is then swept from the rotor by a newly formed end of yarn which contains untwisted fibers. With each rotation of the rotor, twist is inserted, converting the fiber bundle into yarn as it is pulled out of the rotor through a navel. The yarn is then taken up onto a cross-wound package, eliminating the need for a separate winding process as in ring spinning. As the yarn is drawn from the rotor, some fibers lying in the peeling point may wrap around the yarn, resulting in the formation of undesirable, random wrapper fibers which are characteristic of the open-end yarn structure. Rotor spinning can be divided into four major areas: fiber separation, fiber transport, fiber reassembly, and twist insertion.Schematic representation of the rotor spinning process.Fiber SeparationFiber separation is critical in rotor spinning for effective orientation of the fibers before yarn formation within the rotor. The sliver must be separated into individual fibers for effective delivery to the rotor. If the fibers are not separated effectively, a quality yarn with the best possible fiber orientation cannot be formed.The most common method for fiber separation incorporates the use of combing roller covered with saw-tooth wire. Sliver is fed into the rotational action of the combing roller by action of a feed roller/feed plate mechanism. As the sliver is fed into the wires of the combing roller, individual fibers are caught by the teeth on the roller and pulled from the sliver. At this point the centrifugal forces and aerodynamics of the system transport the fibers from the teeth on the surface of the combing roller to an airstream where the fibers are separated further and eventually deposited into the rotor in small layers over many revolutions.Another critical function of the combing roller is the removal of trash from the sliver. Well-cleaned sliver should be presented to the system. Some dust and dirt particles, however, will still be present in the cleanest sliver, especially if cotton is being processed. The trash extraction unit of the combing roller is designed to allow lighter fibers to be carried by air to the transport duct while the heavy trash particles, because of their mass, will deflect through an opening below the combing roller and out of the system. If the fibers are not clean on delivery to the open-end system,excessive fine particles and dust will deposit in the rotor, preventing uniform fiberalignment. As a result of particle buildup, yarn of poor quality (with poor fiber orientation, lower strength, and increased imperfections) is produced.Fiber transportOnce removed from the combing roller, the fibers must be transported to the rotor without becoming excessively disoriented. The fiber transport tube is responsible for moving individualized fibers from the combing roller teeth and transporting them via air currents to the rotor. The transport tube is generally tapered to accelerate the air and fibers during movement through the tube. This fiber acceleration helps to straighten out some fiber hooks existing from the fibers leaving the combing roller.Fiber ReassemblyUpon exiting the transport tube, the fibers are accumulated in the rotor which is the heart of the open-end spinning process. Within the rotor, fibers are collected into an untwisted strand against the rotor wall via centrifugal forces, and then the strand is drawn off as yarn.As the fibers are delivered to the rotor wall, the centrifugal forces cause them to slide down the wall into a groove. It takes many layers of fiber to make up a strand of sufficient density for yarn; therefore, the yarn is built over a period of many revolutions. As a result, numerous doublings occur within the groove (approximately 100) wherein further blending takes place and short-term unevenness that occurs at drawing is reduced. Consequently, the rotor yarns are extremely even with few thick and thin defects.For short staple spinning, rotor diameters range from 31 to 56 mm and may be constructed with a variety of shallow “groove shapes”. The rotor design has a significant effect on the yarn structure and physical properties, resulting from the fiber orientation and the twist imparted on the yarn while it lies within the rotor groove. The rotor typically has a conical shape, and the inner surface along the wall is known as the collecting groove, the diameter of which is the specified rotor diameter. The rotor diameter depends on the machine speed, as well as on fiber properties, such as fiber length. As a rule of thumb, the rotor diameter should be no less than 1.2 times the staple length of the fiber; ends down at spinning otherwise increase.Illustrations of different rotor profiles available for rotor spinning.The shape of the rotor groove should be considered because of the effects on twisting forces that occur in the groove to form the yarn. A variety of different rotor groove shapes exist to allow for different final yarn properties, including yarn strength, bulk, torque, and uniformity characteristics. For instance, the T-rotor, because of its narrow groove diameter, produces yarns with a tight configuration more nearly like ring spun yarns than does the G-rotor. However, the bulk of the yarns produced from a G-rotor provides for better knit fabric hand and cover. As a result, specific rotors must be chosen to generate the appropriate yarn appearance and physical properties desired in the end product. S-rotors and U-rotors are generally used for sock, blanket, and towel yarns. G-rotors are normally used for apparel knitting yarns, and T-rotors are most often used for weaving yarns.Twist InsertionTwist occurs in the open-end spinning process as a result of the action of the rotor, navel, and take-up rollers. Once a sufficient number of fibers has collected in the rotor, twisting action from the rotation of the rotor propagates from the rotation of the rotor propagates from the navel back to the peeling point at the rotor (the point at which the fibers leave the rotor).At the peeling point, the fiber strand is slightly twisted and peeled off the collecting surface at which time full twist is imparted. The strand is then carriedperpendicularly out through a navel along the axis of the rotor.Figure schematically diagrams the yarn formation process within the rotor during open-end spinning. The rotor rotates in direction “a ” at a fixed rate. At point “B ” the newly formed yarn moves through the yarn withdrawal tube (or navel) where it isremoved form the rotor and wound onto a package. The actual yarn formation occurs in area “c”, wherein the individual fibers begin to collect twist. Once slightly twisted, the fibers reach point “p”, the peeling point, and the bundle is directed out of the rotor groove where it is fully twisted.Each revolution of the rotor theoretically introduces about one turn of twist into the yarn; however, slippage occurring during actual twist insertion is believed to cause lower actual twist than the number of rotor rotations. Because the fibers are not held firmly by the nip of a pair of rollers, as in ring spinning, the fibers can migrate independently during twisting. In fact when tsist is measured in rotor yarns, the measured twist is usually 15 percent to 40 percent lower than the machine twist. Machine twist is determined by the following formula:Rotor Speed (rpm)Twist (turns/m)=Delivery Speed (m/min)Not all twist imparted to the yarn is directly caused by rotor rotation. As the yarn travels through the navel and doffing tube, a significant amount of contact occurs. This rolling action on the navel surface produces a false twist that is trapped in a section of the yarn inside the rotor. In addition, a proportion of the real twist arising from the rotation of the rotor projects backward into the rotor. Therefore, the total twist is the sum (or difference) of the two kinds of twist. Overall, the false twist provides for more stability of the yarn between the navel and the rotor groove than does the genuine preset twist.The final yarn at the package contains only real twist, yet the false twist has definite effect on final yarn characteristics. With increases in rotor speeds, false twist is increased correspondingly due to higher yarn tension and more centrifugal forces in the rotor. This increase in false twist tends to increase the amount of wrapper fibers in the yarn.Wrapper Fiber FormationThe inner core structure of rotor yarn resembles that of ring spun yarn structure; however, rotor yarn has a unique structural buildup of outside yarn layers that affects the aesthetic as well as the physical characteristics of the yarn. Once each revolution some fibers entering the rotor from the transport tube interfere with the yarn peeling from the collecting surface. Portions of the fibers entering the rotor are captured inadvertently into the yarn. Instead of being twisted into the inner yarn structure, these fibers wrap around the outside of the yarn. The formation of these fibers, called “wrapper fibers”, is illustrated in figure.The fewer wrapper fibers that are present, the more that rotor yarns resemble ring spun yarns. However, methods to reduce wrapper fiber formation in rotor spinning cause reductions in productivity, as the minimum twist required for spinning increases. In general, wrapper fibers should be minimized to achieve an aesthetically appealing yarn while maintaining productivity.Also, the presence of wrapper fibers in rotor yarn has been shown to contribute toincreased needle wear in knitting. It is theorized that wrapper fibers move across the knitting needles like “speed bumps on a highway,”sending waves of vibration through the needle and contributing to accelerated wear. The formation of wrapper fibers is largely affected by several machine-related and fiber-related factors including: rotor speed, rotor diameter, fiber length, friction between the fiber and rotor groove, and aggressiveness of the navel. With increasing rotor speed, the levels of both false twist and yarn rotation become higher; hence, wrapper fibers are wrapped around theSequence of illustrations showing one mechanism of wrapper fiber formation on the surface of a rotor yarn with: (A) the fiber peeling point which moves slightly clockwise during the above sequence, (1) a fiber entering the rotor, (2) this fiber beginning to wrap around the body of the yarn rather than being twisted into the tail of the yarn, (3) the fiber continuing to wrap, and (4) the final view of such a wrapper fiber.core more often. At higher speeds the rotor diameter or the navel should be changed to reduce false twist; otherwise, the yarn qualities will deteriorate.With smaller rotors the presence of wrapper fibers is less pronounced than with larger rotors. Even though more wrapper fibers exist owing to the fact that more fibers are delivered to the peeling point of the rotor, the wrapper fibers are wound fewer times around the yarn core than with large rotors. Therefore, yarns produced on smaller rotors tend to be more hairy, but less bulky than similar yarns produced with larger rotors.Overall, the factors relating to wrapper fiber formation must be adjusted so that the minimum number of wrapper fibers are produced for a given speed. Wrapperfibers cannot be entirely removed, or productivity would be restricted; however, excessive wrapper fibers will result in low quality, aesthetically displeasing yarn. Advantages and Disadvantages of Rotor Spun YarnThe primary attraction of rotor yarn is its production cost advantage over ring spun yarn. Because of its high degree of automation and higher productivity, a pound of rotor yarn can be produced with approximately one third the labor needed to produce ring spun yarn. Part of the labor reduction is attributed to the high automation of the system. Some other primary advantages include: (1) Lower defect levels compared to the other spinning systems, particularly fewer yarn long thick and thin places; (2) Superior knit fabric appearance; (3) Lower fiber shedding at knitting or weaving than ring spun yarn; (4) Less torque than ring spun yarn; (5) Less energy per unit produced required than for ring spinning; (6) Less floor spaced required compared to ring and air jet spinning; (7) Sophisticated real time quality and production monitoring on each yarn position; and (8) Superior dyeability compared to ring spun yarn.As with any spinning system, some disadvantages exist with rotor yarns. From the initial development of rotor yarn, concerns have existed regarding the harshness of the yarn compared to ring spun yarn. Some developments have been made to offset the difference through special spinning setups or fabric finishing; however, fabrics produced from rotor and ring spun yarn are still readily distinguishable. These are other disadvantages of rotor yarn: (1) Low strength (approximately only 70 percent of ring spun yarn); (2) High pilling propensity compared to air jet yarn; (3) Accelerated needle wear at knitting compared to ring spun yarn; and (4) High maintenance costs compared to ring and air jet spinning.Critical Spin box Factors for Spinning Performance and QualityDraftOne of the first decisions that must be made when beginning to produce a yarn at rotor spinning is the weight of the sliver that should be fed into the machine. The relationship between the sliver weight and the yarn weight is the draft required by the machine. The machine draft can be calculated with the equation:Yarn Count (Ne)Draft=Sliver weight (gr/yd)8.33The preferred draft is different for the various machines available. For the Rieter R1, draft levels above 200 generally help yarn strength, evenness, and IPI defects. Because the Schlafhorst machines have a smaller combing roller, the preferred draft level is lower, usually less than 200.Rotor SpeedRotor speed has a strong correlation with yarn strength, elongation, evenness, shedding, and yarn breaks if all else is held constant. Increases in rotor speed cause increases in spinning tension, which disrupt fiber formation in the rotor. However, if arotor speed increase is made in conjunction with a rotor diameter decrease, it is possible to avoid a spinning tension increase and preserve yarn quality and ends down levels.SuctionA vacuum is generated at the end of the rotor spinning machine to provide suction at each spinning position. The suction helps to remove the fibers from the combing roller and to move them through the fiber transport channel. The removal of fibers occurs before the fibers make a full turn on the combing roller. The air that travels with the fibers through the transport channel is accelerated by approximately 50 percent as it moves into the rotor because of the taper of the channel and the extra air generated by the rotor and vacuum. The air then exits around the edge of the rotor. If any turbulence exists (because of improper setting of the rotor), if air leaks occur due to worn seals, of if the vacuum level is insufficient, yarn formation in the rotor will be adversely affected, and quality and efficiency will deteriorate.Combing Roller ZoneThe combing roller zone is schematically illustrated in figure. The sliver is delivered to the combing roller by the feed which turns at a speed that is based on both the draft and the yarn delivery speed set on the machine. The critical factors in the combing roller zone for optimal quality and running performance include: feed clutch wear, feed tray-to-combing roller spacing, combing roller speed, combing roller wire selection, and combing roller wire condition.Rotor ZoneThe rotor zone can be defined as the combination of the rotor, twin disc assembly, and rotor drive belt. Critical factors influencing quality and machine performance in this zone include: rotor speed/diameter, rotor groove, rotor stem wear, rotor cup wear, twin disc wear, and rotor belt wear and alignment.Yarn Withdrawal Zone (Navel and Doff tube)The fiber bundle formed in the rotor is withdrawn through a navel and doff tube. These components not only guide the newly formed yarn out of the spin box, but contribute significantly to spinning performance and to yarn characteristics As mentioned previously, movement of yarn against the navel introduces a false twist that strengthens the yarn between the rotor and delivery roller. Similarly, inserts can be added into the doff tube to increase friction on the yarn and therefore to increase false twist. However, some measure taken to increase false twist cause evenness of the yarn to deteriorate. Critical factors in the yarn withdrawal zone include navel selection, navel spacing (the distance from the navel surface to the peeling point of the fibers from the rotor), and doff tube selection.Critical Winding and Piecing Factors for Spinning Performance and Quality Winding ZoneThe winding zone consists of the area from which the yarn exits the spin box to the drum that turns the package. The optimal setup of the winding zone is strongly dependent on the end use of the yarn. For instance, if the yarn is to be dyed, the desired package density would be low to allow dye to pass through the package. Thus, tension would be set lower than for yarns for weaving or knitting applications. Weaving packages are usually wound with relatively high tension to allow for a high density, heavy package, so it has to be changed less frequently at the next process, thereby helping processing efficiency. If a knitting yarn is being produced, wax must be applied at the winding zone to help to lubricate the yarn in order to reduce yarn-to-metal friction at the knitting machine.Critical factors to control in the winding zone to produce a high quality package include: yarn tension setting, angle of wind, cradle pressure, cradle pressure, cradle alignment, yarn traverse displacement, delivery roller wear, wax application, and drive tire condition. The winding tension is the ratio in speed of the winding drum and the delivery roller that pulls the yarn out of the spin box. As a rule of thumb, the higher the tension, the lower the yarn elongation, and the harder the package. Automatic Piece ConsiderationThe modern rotor spinning machines are equipped with a piece that travels along themachine, automatically rejoining the yarns that have stopped spinning. When thepiece reaches a stopped position, the spin box is opened and cleaned automatically using a plastic scraper that cleans the groove and a brush that removes loose residue. These are critical factors involving piece-ups: piece efficiency, piecing strength, and piecing appearance.译文:气流纺纱气流纺纱是通过强大的气流是纤维分离,然后把纤维加捻卷绕在转子上。

(完整)工程造价专业外文文献翻译(中英文对照

(完整)工程造价专业外文文献翻译(中英文对照

外文文献:Project Cost Control: The Way it WorksBy R. Max WidemanIn a recent consulting assignment we realized that there was some lack of understanding of the whole system of project cost control, how it is setup and applied. So we decided to write up a description of how it works。

Project cost control is not that difficult to follow in theory.First you establish a set of reference baselines. Then, as work progresses, you monitor the work, analyze the findings, forecast the end results and compare those with the reference baselines. If the end results are not satisfactory then you make adjustments as necessary to the work in progress, and repeat the cycle at suitable intervals。

If the end results get really out of line with the baseline plan, you may have to change the plan。

More likely, there will be (or have been) scope changes that change the reference baselines which means that every time that happens you have to change the baseline plan anyway。

文学作品中英文对照外文翻译文献

文学作品中英文对照外文翻译文献

文学作品中英文对照外文翻译文献
本文旨在汇总文学作品中的英文和中文对照外文翻译文献,共有以下几篇:
1. 《傲慢与偏见》
翻译:英文原版名为“Pride and Prejudice”,中文版由钱钟书翻译。

该小说是英国作家简.奥斯汀的代表作之一,描绘了19世纪英国中上层社会的生活和爱情故事。

2. 《了不起的盖茨比》
翻译:英文原版名为“The Great Gatsby”,中文版由杨绛翻译。

小说主要讲述了一个居住在纽约长岛的年轻白领盖茨比为了追求他的旧爱黛西而付出的努力,是20世纪美国文学的经典之作。

3. 《麦田里的守望者》
翻译:英文原版名为“The Catcher in the Rye”,中文版由施蛰存翻译。

该小说主人公霍尔顿是美国现代文学中最为知名的反英雄形象之一,作品深刻地揭示了青少年内心的孤独和矛盾。

4. 《1984》
翻译:英文原版名为“1984”,中文版由李敬瑞翻译。

该小说是英国作家乔治.奥威尔的代表作之一,描绘了一个虚构的极权主义社会。

以上是部分文学作品的中英文对照外文翻译文献,可以帮助读者更好地理解和学习相关文学作品。

道路路桥工程中英文对照外文翻译文献

道路路桥工程中英文对照外文翻译文献

道路路桥工程中英文对照外文翻译文献中英文资料中英文资料外文翻译(文档含英文原文和中文翻译)原文:Asphalt Mixtures-Applications。

Theory and Principles1.ApplicationsXXX is the most common of its applications。

however。

and the onethat will be XXX.XXX “flexible” is used to distinguish these pavements from those made with Portland cement,which are classified as rigid pavements。

that is。

XXX it provides they key to the design approach which must be used XXX.XXX XXX down into high and low types,the type usually XXX product is used。

The low typesof pavement are made with the cutback。

or emulsion。

XXX type may have several names。

However。

XXX is similar for most low-type pavements and XXX mix。

forming the pavement.The high type of asphalt XXX中英文资料XXX grade.中英文资料Fig.·1 A modern XXX.Fig.·2 Asphalt con crete at the San Francisco XXX.They are used when high wheel loads and high volumes of traffic occur and are。

工程造价专业外文文献翻译(中英文对照(20200610064406)

工程造价专业外文文献翻译(中英文对照(20200610064406)

外文文献:Project Cost Control: The Way it WorksBy R. Max WidemanIn a recent consulting assignment we realized that there was some lack of understanding of the whole system of project cost control, how it is setup and applied. So we decided to write up a description of how it works. Project cost control is not that difficult to follow in theory.First you establish a set of reference baselines. Then, as work progresses, youmonitor the work, analyze the findings, forecast the end results and compare those with the reference baselines. If the end results are not satisfactory then youmake adjustments as necessary to the work in progress, and repeat the cycle atsuitable intervals. If the end results get really out of line with the baseline plan, youmay have to change the plan. More likely, there will be (or have been) scope changes that change the reference baselines which means that every time that happens you have to change the baseline plan anyway.But project cost control is a lot more difficult to do in practice, as is evidencedby the number of projects that fail to contain costs. It also involves a significantamount of work, as we shall see, and we might as well start at the beginning. So letus follow the thread of project cost control through the entire project life span.And, while we are at it, we will take the opportunity to point out the properplaces for several significant documents. These include the Business Case, the Request for (a capital) Appropriation (for execution), Work Packages and the WorkBreakdown Structure, the Project Charter (or Brief), the Project Budget or Cost Plan,Earned Value and the Cost Baseline. All of these contribute to the organization'sability to effectively control project costs.FootnoteI am indebted to my friend Quentin Fleming, the guru of Earned Value, for checking and correcting my work on this topic.The Business Case and Application for (execution) FundingIt is important to note that project cost control is most effective when the executive management responsible has a good understanding of how projects should unfold through the project life span. This means that they exercise their responsibilities at the key decision points between the major phases. They mustalso recognize the importance of project risk management for identifying and planning to head off at least the most obvious potential risk events.In the project's Concept Phase?Every project starts with someone identifying an opportunity or need. That is usually someone of importance or influence, if the project is to proceed, and thatperson often becomes the project's sponsor.? To determine the suitability of the potential pr oject, most organizations call for the preparation of a "Business Case" and its "Order of Magnitude" cost to justify thevalue of the project so that itcan be compared with all the other competing projects. This effort is conducted inthe Concept Phase of the project and is done as a part of the organization'smanagement of the entire project portfolio.? The cost of the work of preparing the Business Case is usually covered by corporate management overhead, but it may be carried forward as an accounting cost to the eventual project. No doubt because this will provide a tax benefit to the organization. The problem is, how do you then account for all the projects that arenot so carried forward??If the Business case has sufficient merit, approval will be given to proceed to a Development and Definition phase.In the project's Development or Definition Phase? The objective of the Development Phase is to establish a good understanding of thework involved to produce the required product, estimate the cost and seek capitalfunding for the actual execution of the project.? In a formalized setting, especially where big projects are involved, this application forfunding is often referred to as a Request for (a capital) Appropriation (RFA) or Capital Appropriati on Request (CAR).? This requires the collection of more detailed requirements and data to establish whatwork needsto be done to produce the required product or "deliverable". From this information, a plan is prepared in sufficient detail to give adequate confidence in adollar figure to be included in the request.? In a less formalized setting, everyone just tries to muddle through.Work Packages and the WBSThe Project Management Plan, Project Brief or Project Charter? If the deliverable consists of a number of different elements, these are identified。

知识产权中英文对照外文翻译文献

知识产权中英文对照外文翻译文献

(文档含英文原文和中文翻译)中英文翻译1外文参考文献译文the well-known trademarks and dilute anti-diluted First, well-known trademarks SummaryWell-known trademarks is a long-term use, in the market enjoy a high reputation, known for the relevant public and by certain procedures that the trademark. Since the "Paris Convention" was first introduced the concept of well-known trademarks, the well-known trademarks for special protection legislation has become the world trend.Paris Convention stipulates: all of the members were identified as the well-known trade marks, or registered First, the first to ban others, and the other is to prohibit the use of others with identical or similar logo. Trips further provides: 1, the Paris Convention for the special protection and extension of the services of well-known trademarks, 2, the scope of protection does not extend to prohibit similar goods or services with the well-known trademarks for use on the same or similar logo,3, on how to That a well-known trademarks in principle a simple requirement.National legislation on the practice, the well-known trade marks that standards vary, often based on specific trade mark promotion of public awareness of related areas, logo merchandise sales and the scope of national interests, and other factors identified. From an international treaty to protect the well-known trademarks mind, that well-known trade marks and protection of well-known trade marks are closely linked.Second, the well-known trademarks protected modeOn the protection of the main trademarks of relative and absolute protectionism two models.The former refers to ban others with well-known trademarks identical or similar trademark with the trademark owner the same or similar industries in the registration or use of similar goods in non-use of the same or similar trademarks is permitted, "the Paris Convention "That is, relative to protectionism.While the latter refers to ban others in any industry, including the well-known trade mark goods with different or similar to those in the industry to register with the well-known trade marks and the use of the same or similar trademarks, TRIPS agreement that is taken by the expansion of the absolute protectionism.In simple economic form, as specified by the trade mark goods at a single, specific trade mark goods and the link between more closely. With, a valuable well-known trademarks have been more and more use of different types of commodities, which are among the types of goods on the property may be totally different, in a trademark associated with the commodity groups and the relative weakening of trade marks Commodity producers and the relative isolation. Not well-known trademarks such as cross-category protection and allow others to register, even if the goods obvious differences, the public will still be in the new goods and reputable well-known trademarks to establish a link between people that the goods may be well-known trademark, the new commodities , Or the well-known trademarksof goods and people between the existence of a legal, organizational or business association, thus leading to the misuse of consumers purchase. The rapid development of the commodity today, the relative protectionism has not improved the protection of the public and well-known trademark owner's interests.In view of this, in order to effectively prevent the reputation of well-known trademarks, and the identification of significant features and advertising value by the improper use of the damage, many countries on the implementation of a well-known trademarks is protectionism, which prohibits the use of any products on the same or with the well-known trademarks Similar to the trademark.TRIPS Agreement Article 16, paragraph 3 states: Paris Convention 1967 text, in principle, applicable to the well-known trademarks and logos of the commodities or services are not similar goods or services, if not similar goods or services on the use of the trademark will be Suggest that the goods or services with the well-known trademarks on a link exists, so that the interests of all well-known trademarks may be impaired.Third, the well-known trademarks dilutedThe protection of trademark rights, there are mainly two: one for the confusion theory, a theory for desalination.The main traditional trademark protection for trade marks the difference between functional design, and its theoretical basis for the theory of confusion. In summary, which is to ensure that the trademark can be identification, confirmation and different goods or services different from the significant features, to avoid confusion, deception and E Wu, the law gives first use of a person or persons registered with exclusive rights, which prohibits any Without the permission of the rights to use may cause confusion among consumers in the same or similar trademarks. Clearly, the traditional concept of trademark protection, to stop "the possibility of confusion" is the core of trademark protection.With the socio-economic development and commercialization of the continuousimprovement of the degree, well-known trademarks by the enormous implication for the growing commercial value have attracted the attention of people. Compared with ordinary marks, bearing well-known trademarks by the significance and meaning beyond the trademark rights to the general, and further symbol of product quality and credit, contains a more valuable business assets - goodwill. Well-known trade mark rights of people to use its excellent reputation of leading the way in the purchasing power, instead of the use of trademarks to distinguish between different products and producers.When the mark beyond the role of this feature to avoid confusion, then, this factor is obviously confused and can not cover everything, and other factors become as important as or more important. Thus, in theory confusion on the basis of further development of desalination theory.Trademark Dilution (dilution), also known as trademark dilution, is one of trademark infringement theory. "Watered down", according to the U.S. "anti-federal trademark law dilute" means "regardless of well-known trade mark rights and the others between the existence of competition, or existence of confusion, misunderstanding or the possibility of deception, reduce and weaken the well-known trademarks Its goods or services and the identification of significant capacity of the act. " In China, some scholars believe that "refers to dilute or weaken gradually weakened consumer or the public will be trademarks of the commercial sources with a specific link between the ability." Trademark faded and that the main theory is that many market operators have Using well-known trademarks of the desire of others, engage in well-known trademarks should be to prevent others from using its own unique identification of special protection.1927, Frank • Si Kaite in the "Harvard Law reviews" wrote the first trademark dilute theory. He believes that people should not only be trademarks of others prohibit the use of the mark, he will compete in the commodity, and should prohibit the use of non-competitive goods on. He pointed out: the real role of trade marks, not distinguish between goods operators, but satisfied with the degree of differencebetween different commodities, so as to promote the continuous consumer purchase. From the basic function of trademarks, trade mark used in non-competitive goods, their satisfaction with regard to the distinction between the role of different commodities will be weakened and watered down. Trademarks of the more significant or unique, to the public the impression that the more deeply, that is, should be restricted to non-compete others in the use of goods or services.Since then, the Intellectual Property Rights Branch of the American Bar Association Chairman Thomas • E • Si Kaite Smith on the theory made a further elaboration and development. He said: "If the courts allow or laissez-faire 'Rolls Royce' restaurants, 'Rolls-Royce' cafeteria, 'Rolls-Royce' pants, 'Rolls-Royce' the candy, then not 10 years, ' Rolls-Royce 'trademark owners will no longer have the world well-known trademarks. "Si Kaite in accordance with the theory of well-known trade marks have faded because of the effect of non-rights holders with well-known trademarks in the public mind the good image of well-known trademarks will be used in non-competitive goods, so as to gradually weaken or reduce the value of well-known trademarks, That is, by the well-known trademarks have credibility. Trademark tag is more significant or unique characteristics, which in the public mind the impression that the more deep, more is the need for increased protection, to prevent the well-known trade marks and their specific goods was the link between the weakening or disappearance.In practice, trademarks diluted share a wide range of operating methods, such as:A well-known trademarks of others will still use as a trademark, not only in the use of the same, similar to the goods or services. For example, household appliances, "Siemens" trademark as its own production of the furniture's trademark.2. To other people's well-known trademarks as their corporate name of the component. Such as "Haier" trademark for the name of his restaurant.3. To the well-known trademarks of others as the use of domain names. For example, watches trademark "OMEGA" registered the domain name for themselves().4. To the well-known trademarks of others as a commodity and decorating use.5. Will be others as well-known trade marks of goods or services using the common name. For example, "Kodak" interpreted as "film, is a camera with photographic material", or "film, also known as Kodak,……" This interpretation is also the mark of the water down. If the "Kodak" ignored the trademark owner, after a period of time, people will Kodak film is, the film is Kodak. In this way, the Kodak film-related goods has become the common name, it as a trademark by a significant, identifiable on limbo. The public well-known Jeep (Jeep), aspirin (Aspirin), freon (Freon), and so was the registration of foreign goods are due to improper use and management and the protection of poor, evolved into similar products common name, Thus lost its trademark logo features.U.S. "anti-diluted Federal trademark law" before the implementation of the Federal Court of Appeal through the second from 1994 to 1996 case, identified the following violations including the Trademark Dilution: (1) vague, non-means as others in similar goods not on Authorized the use of a trademark so that the sales of goods and reduce the value of trademarks or weakened (2) pale, that is because of violations related to the quality, or negative, to demonize the acts described a trademark goods may be caused to others The negative effects of the situation, (3) to belittle, or improperly changed, or derogatory way to describe a trade mark case.The majority of our scholars believe that the well-known trademarks diluted There are two main forms: watered down and defaced. The so-called dilute the people will have no right to use the same or similar trademark with the well-known trademarks used in different types of commodities, thus making the mark with the goods weakened ties between the specific acts the so-called defaced is that people will have no right to use the same Or similar marks for the well-known trade marks will have to belittle good reputation, tarnished the role of different types of goods on the act.Some scholars believe that the desalination also refers to the three aspects of well-known trademarks damage. First, in a certain way to demonize the relevant well-known trademarks; Second, some way related to well-known trademark dark; Third is the indirect way so that consumers will distort trade mark goods for the general misunderstanding of the name.In general, can be diluted in the form summarized as follows:1, weakeningWeakening is a typical diluted form, also known as dark, is that others will have some visibility in the use of a trademark is not the same, similar to the goods or services, thereby weakening the mark with its original logo of goods or services The link between, weakening the mark was a significant and identifiable, thus bearing the trade mark by the damage caused by acts of goodwill. Weakening the mark of recognition of the significant damage is serious, it can be the recognition of trademark dilution, was significant, or even make it completely disappeared, then to the mark by carrying the reputation of devastating combat.First, the weakening of the identification is the weakening and lower. Any unauthorized person, others will have some visibility in the use of a trademark is not the same, similar to the goods or services, will reduce its recognition of. But consumers were referred to the mark, it may no longer think of first is the original goods or services, not only is the original or goods or services, consumers simply will not even think of goods or services, but the Trademark Dilution of goods Or services. There is no doubt that this marks the recognition of, is a heavy blow.Weakening of the mark is significantly weakened and the lower. Mark is significantly different from other commercial trademark marked characteristics. A certain well-known trademarks, which in itself should be a very significant, very significant and can be quickly and other signs of its own separate. However, the Trademark Dilution of the same or similar trademarks used in different goods or services, so that was the trademark and other commercial marked difference in greatlyreduced, to the detriment of its significant.Of course, regardless of the weakening of the mark was a significant or identifiable, are the ultimate impact of the mark by the bearer of goodwill. Because the trade mark is the carrier of goodwill, the mark of any major damage, the final performance for all bearing the trade mark by the goodwill of the damage.2, tarnishedMeans others will have some well-known trademarks in the use of the good reputation of the trademark will have to belittle, defaced role of the goods or services on the act. Contaminate the trademarks of others, is a distortion of trade marks to others, the use of the damage, not only reduced the value of the mark, even on such values were defaced. As tarnished reputation is a trademark of damage, so tarnished included in the diluted acts, is also relatively accepted view. Moreover, in the field of trademark faded, tarnished than the weakening of the danger of even greater acts, the consequences are more serious.3, degradationDegradation is due to improper use of trademarks, trade mark goods for the evolution of the common name recognition and loss of function. Trademark Dilution degradation is the most serious kind. Degradation of the event, will completely lose their identification marks, no longer has the distinction function as the common name of the commodity.Fourth, protection against diluteBased on the well-known trademarks dilute the understanding, and accompanied by a serious weakening of well-known trademarks, all countries are gradually legislation to provide for the well-known trademarks to protect anti-diluted. There are specific models:1, the development of special anti-dilute the protection of well-known trademarksThe United States is taking this protection on behalf of the typical pattern.1995, in order to prevent lower dilute "the only representative of the public eye, the unique image of the trademark" to protect "the trademark value of advertising," the U.S. Congress passed the National reunification of the "anti-federal trademark law watered down", so as to the well-known trademarks All provide the unified and effective national anti-dilute the protection.U.S. anti-diluted in trademark protection has been added a new basis for litigation, which is different from the traditional basis of trademark infringement litigation. Trademark infringement of the criteria is confusing, the possibility of deception and misleading, and the Trademark Dilution criteria is unauthorized to others well-known trademarks of the public to reduce the use of the trademark instructions for goods and services only and in particular of Feelings. It is clear that the U.S. law is anti-diluted basis, "business reputation damage" and the possibility of well-known trade mark was a significant weakening of the possibility of providing relief. Moreover, anti-faded law does not require the application of competitive relations or the existence of possible confusion, which is more conducive to the exercise of trademark right to appeal.2, through the Anti-Unfair Competition Law ProtectionSome countries apply anti-unfair competition law to protect famous trademarks from being watered down. Such as Greece, "Anti-Unfair Competition Law," the first one: "Prohibition of the Use of well-known trademarks in order to take advantage of different commodities on the well-known trademarks dilute its credibility was significant." Although some countries in the Anti-Unfair Competition Law does not explicitly prohibits trademark faded, but the Trademark Dilution proceedings, the application of unfair competition litigation.3, through or under well-known trademark protection within the scope of trademark protectionMost civil law countries is this way. 1991, "the French Intellectual PropertyCode," Di Qijuan trademark law section L.713-5 of the provisions that: not in similar goods or services on the use of well-known trade marks to the trademark owner or a loss caused by the improper use of trademarks , Against people should bear civil liability.Germany in 1995, "the protection of trademarks and other signs of" Article 14 also stipulates that: without the consent of the trademark rights of third parties should be banned in commercial activities, in and protected by the use of the trademark does not like similar goods or services , And the use of the trademark identical or similar to any signs.4, in the judicial precedents in the application of anti-dilute the protection ofIn some countries there are no clear legislative provisions of the anti-dilute well-known trademarks, but in judicial practice, they are generally applicable civil law on compensation for the infringement of the debt to protect the interests of all well-known trademarks, through judicial precedents to dilute the protection of applicable anti.China's well-known trademarks in the protection of the law did not "water down" the reference, but on the substance of the relevant legal provisions, protection of anti-diluted. 2001 "Trademark Law" amendment to increase the protection of well-known trademarks, in particular, it is important to the well-known trademarks have been registered to conduct cross-category protection. Article 13 stipulates: "The meeting is not the same as or similar to the trademark application for registration of goods is copied, Mofang, translation others have been registered in the well-known trademarks, misleading the public, the standard of the well-known trade mark registration may be the interests of the damage, no registration And can not be used. "But needs to be pointed out that this provision does not mean that China's laws for the well-known trademarks has provided an effective anti-dilute the protection. "Trademark Law" will prohibit only well-known trademarks and trademarks of the same or similar use, without the same or similar goods not on the behavior, but thewell-known trade marks have faded in various forms, such as the well-known trademarks for names, domain names, such acts Detract from the same well-known trademarks destroyed the logo of the ability to make well-known trade mark registration of the interests of damage, this is not a legal norms.It must be pointed out that the trade mark that should be paying attention to downplay acts of the following:1, downplay acts are specifically for the well-known registered trade marks.Perpetrators diluted one of the main purpose is the free-rider, using the credibility of well-known trademarks to sell their products, and general use of trademarks do not have this value. That acts to dilute limited to well-known trademarks, can effectively protect the rights of trademark rights, have not excessively restrict the freedom of choice of logo, is right to resolve the conflict right point of balance. "Trademark Law" will be divided into well-known trademarks have been registered and unregistered, and give different protection. Anti-has been watered down to protect only against the well-known trade marks registration, and for China not only well-known trade marks registered in the same or similar ban on the registration and use of goods. This reflects the "Trademark Law" the principle of protection of registered trademarks.2, faded in the different categories of goods and well-known trademarks for use on the same or similar logo.If this is the same or similar goods with well-known trademarks for use on the same or similar to the logo should be in accordance with the general treatment of trademark infringement. There is also a need to downplay the use of the tags are similar to a well-known trademarks and judgments.3, not all the non-use of similar products on the well-known trade marks and logos of the same or similar circumstances are all faded.When a trademark has not yet become well-known trademarks, perhaps there aresome with the same or similar trademarks used in other types of goods on. In the well-known trademarks, the original has been in existence does not constitute a trademark of those who play down.4, acts that play down the perpetrator does not need to consider the subjective mental state.Regardless of their out of goodwill or malicious, intentional or fault, is not watered down the establishment. But the acts of subjective mental state will assume responsibility for its impact on the manner and scope. Generally speaking, if the perpetrator acts intentionally dilute the responsibility to shoulder much weight, in particular, bear a heavier responsibility for damages, if the fault is the commitment will be less responsibility. If there are no mistakes, just assume the responsibility to stop infringement.5, due to anti-faded to protect well-known trade marks with a specific goods or services linked to well-known trademarks a long time widely used in a variety of goods, will inevitably lead to trademark the logo of a particular commodity producers play down the link, well-known trademarks A unique attraction to consumers will also be greatly reduced. So that should not be watered down to conduct a source of confusion for the conditions of goods, after all, not all the water down will cause consumers confusion. For example, a street shop's name is "Rolls-Royce fruit shop," people at this time there will be no confusion and that the shop and the famous Rolls-Royce trademark or producers of the contact. However, such acts can not be allowed, a large number of similar acts will dilute the Rolls-Royce trademark and its products linked to undermine the uniqueness of the trademark, if things continue this way when the mention of Rolls-Royce trademark, people may think of is not only Automobile, food, clothing, appliances, etc.. That faded as to cause confusion for the conditions, some will not dilute norms and suppression of acts, makes well-known trade marks are not well protected. Therefore, as long as it is a well-known trademark detract from the logo and unique ability to act on the behavior should be identified as diluted.1. Zheng Chengsi: "Intellectual property law", legal publishers 2003 version.2. Wu Handong editor: "Intellectual Property Law," China Politics and Law University Press 2002 edition.3. Susan. Sela De: "The United States Federal trademark law dilute the anti-legislation and practice," Zhang Jin Yi, contained in the "Law on Foreign Translation" 1998 No.4.4. Kong Xiangjun: "Anti-Unfair Competition AFP theory," People's Court Press, 2001 edition.5. Liu Ping, Qi Chang: "On the special protection of famous trademarks", in "law and commercial" 1998 No.6.6. Well-Tao, Lu Zhou Li: "On the well-known trademarks to protect the anti-diluted", in "Law" 1998 No. 5.2 外文参考文献原文浅谈驰名商标之淡化与反淡化一、驰名商标概述驰名商标是指经过长期使用,在市场上享有较高声誉,为相关公众所熟知,并经一定程序认定的商标。

国际会计准则中英文对照外文翻译文献

国际会计准则中英文对照外文翻译文献

中英文对照外文翻译文献(文档含英文原文和中文翻译)译文:译文(一)世界贸易的飞速发展和国际资本的快速流动将世界经济带入了全球化时代。

在这个时代, 任何一个国家要脱离世界贸易市场和资本市场谋求自身发展是非常困难的。

会计作为国际通用的商业语言, 在经济全球化过程中扮演着越来越重要的角色, 市场参与者也对其提出越来越高的要求。

随着市场经济体制的逐步建立和完善,有些国家加入世贸组织后国际化进程的加快,市场开放程度的进一步增强,市场经济发育过程中不可避免的各种财务问题的出现,迫切需要完善的会计准则加以规范。

然而,在会计准则制定过程中,有必要认真思考理清会计准则的概念,使制定的会计准则规范准确、方便操作、经济实用。

由于各国家的历史、环境、经济发展等方面的不同,导致目前世界所使用的会计准则在很多方面都存在着差异,这使得各国家之间的会计信息缺乏可比性,本国信息为外国家信息使用者所理解的成本较高,在很大程度上阻碍了世界国家间资本的自由流动。

近年来,许多国家的会计管理部门和国家性的会计、经济组织都致力于会计准则的思考和研究,力求制定出一套适于各个不同国家和经济环境下的规范一致的会计准则,以增强会计信息的可比性,减少国家各之间经济交往中信息转换的成本。

译文(二)会计准则就是会计管理活动所依据的原则, 会计准则总是以一定的社会经济背景为其存在基础, 也总是反映不同社会经济制度、法律制度以及人们习惯的某些特征, 因而不同国家的会计准则各有不同特点。

但是会计准则毕竟是经济发展对会计规范提出的客观要求。

它与社会经济发展水平和会计管理的基本要求是相适应的,因而,每个国家的会计准则必然具有某些共性:1. 规范性每个企业有着变化多端的经济业务,而不同行业的企业又有各自的特殊性。

而有了会计准则,会计人员在进行会计核算时就有了一个共同遵循的标准,各行各业的会计工作可在同一标准的基础上进行,从而使会计行为达到规范化,使得会计人员提供的会计信息具有广泛的一致性和可比性,大大提高了会计信息的质量。

电动机控制中英文对照外文翻译文献

电动机控制中英文对照外文翻译文献

电动机控制中英文对照外文翻译文献(文档含英文原文和中文翻译)原文:Control of Electric winchFor motor control, we know the best way is to use the style buttons to move the many simple manual console. And this console, in some applications may still be a good choice, as some complex control headache can also be used. This article describes in your design, build or purchase winch controller, you have the motor's basic electrical equipment and you will need to address the user interface command addressed.First, the manual should be a manual control console type, so if you remove your finger buttons, hoist will stop. In addition, each control station equipped with an emergency need to brake, hoist the emergency brake to cut off all power, not just the control circuit. Think about it, if the hoist at the stop, it did not stop, you do need a way to cut off the fault line protection power. Set the table in the control of a key operated switch, is also a very good idea, especially in the line leading to theworkstation can not control, you can use the switch.(in the design of the console, even the simplest manual console, but also consider setting by specialized personnel to operate the safe operation of the keys.) Constant speed motor controlFor a fixed speed winch actual control device is a three-phase starter. Turn the motor is reversed, by a simple switch controlled phase transformation sequence from ABC to CBA. These actions are completed by two three-pole contactor-style, and they are interlocked, so that they can not be simultaneously closed. NEC, required in addition to overload and short circuit protection devices. To protect the motor against overload due to mechanical effects caused by overheating in the heat to be installed inside the starter overload delay device. When the heat overload delay device overheating, it has a long double off the metal motor power. In addition In addition, you can also select a thermistor can be installed in the motor winding way, it can be used to monitor motor temperature changes. For the short-circuit protection, we generally used by motor fuses to achieve.A linear current independent contactors, the contactors are configured should be more than the current main circuit contactor, so as to achieve the purpose of redundancy. This sets the current contactor is controlled by the security circuit, such as: emergency brake and the more-way limits.We can use the limit switches to achieve the above operation. When you reach the end of the normal travel limit position, the hoist will stop, and you can only move the winch in the opposite direction (ie, the direction away from the limit position.) There is also need for a more limited way just in case, due to electrical or mechanical problems, leaving the operation of hoist limit bit more than normal. If you run into more limiter, linear contactor will open, therefore, can not be driven winch will exceed this limit position. If this happens, you need to ask a professional technician to check the lead to meet the more specific reasons limiter. Then, you can use thestarter toggle switch inside the elastic recovery process to deal with more problems, rather than tripping device or a hand-off the current contacts.A necessary condition for speedOf course, the simple fixed speed starter is replaced by variable speed drives. This makes things start to get interesting again! At a minimum, you need to add a speed control dial operation platform. Joystick is a better user interface, because it makes you move parts of a more intuitive control.Unfortunately, you can not just from your local console to send commands to control the old variable speed drives, in addition, you can not want it in the initial stages, will be able to enhance the safe and reliable and decentralized facilities. Most of the variable speed drive can not achieve these requirements, because they are not designed to do upgrading work. Drivers need to be set to release the brake before the motor can generate torque, and when parking, that is, before the revocation of torque, the brake will be the first action.For many years, DC motors and drives provide a number of common solutions, such as when they are in a variety of speeds with good torque characteristics. For most of the hoist of the large demand for DC motor is very expensive, and that the same type of AC motor than the much more expensive. Although the early AC drives are not very useful, as they have a very limited scope of application of the speed, but produced only a small low-speed torque. Now, with the DC drives the development of low cost and a large number of available AC motors has led to a communication-driven revolution.Variable speed AC drives in two series. Frequency converter has been widely known and, indeed, easy to use. These drives convert AC into DC, and then, and then convert it back to exchange, the exchange after the conversion is a different frequency. If the drive produced the exchange of 30Hz, 60Hz a normal motor will run at half speed. Theoretically, this is very good, but in practice, this will have a lot of problems. First of all, a typical linear motor 60Hz frequencies below 2Hz 3Hz area or there will be errors, and start cog (that urgent push, yank), or parking. This will limit your speed range lower than 20:1, almost not adapted to the operational phase of the fine adjustment. Second, many low-cost converter is not able to provide the rated torque at low speeds. Use of these drives, will result in the rapid move to upgrade the components or complete failure, precisely, when you try to upgrade a stable scientific instruments, you do not want to see this situation. Some new inverter is a closed-loop system (to get feedback from the motor to provide a more accurate speed control), and the motor will work quite well.Another series of AC drives is the flow vector type drive. These components require installation of the spindle motor encoder, encoder makes use of these drivescan accurately monitor the rotation of the motor armature. Processor accurately measured magnetic flux vector values that are required to make the armature at a given speed rotation. These drives allow infinite speed, so you actually can produce at zero speed to rated torque. These drives provide precise speed and position control, so these drives in high performance applications to be welcomed.(Based on PLC controllers provide system status and control options. This screen shows the operator full access to the nine-story elevator enhance the control panel.) PLC-based systemsIs the full name of a PLC programmable logic controller. First of all, PLC controller developed to replace the fifties and sixties-based industrial control system relay, they work in harsh industrial indoor environments. These are modular systems that have a large variety of I / O modules. The modular system can easily achieve the semi-custom hardware configuration assembled, and the resulting configuration is also very reasonable price. These modules include: position control module, the counter, A / D and D / A converter, and a variety of physical state or physical contact with closed output module. Large number of different types of I / O components and PLC module property makes it an effective way to assemble custom and semi custom control system.The biggest shortcoming of PLC systems is the lack of the real number of display to tell you what is being done and the PLC on the PLC program to help you.T he first is professional entertainment for the large-scale PLC system is one of the original in Las Vegas, MGM (now Bailey Company) of the riding and carriage system. Many manufacturers offer a standard PLC-based semi-automated acoustic systems and a host of signs, set the location of the command line interpreter, and the upgrading of the control system is also available. Using standard modules to set user-defined system configuration capability is based on the PLC controller of the greatest advantage.High-end controllerFor complex transmission, the controller became complex, more than speed, time and location control. They include complex instructions to write and record the movement contour, and the processing can immediately run the ability to multi-point instructions.Many large opera house is toward the direction of point lift system, where each one is equipped with a rope to enhance independent winches, rope equivalent to those of each dimmer circuit. When more than one hoist is used to enhance the individual part, the hoist must be fully synchronous, or the load to shift, so will lead to a separate winch becomes the risk of overload. Control system must be able to be selected to keep pace winch, or a hoist winch is not able to maintain synchronization with the other, can provide the same high-speed parking capacity. For a typical speed of 240 ft / min and a winch to maintain the rate of error of between 1 / 8 points of equipment, you only have less than three microseconds of time to identify problems and try to correct the error The hoist speed, make sure you fail, you start all the winch stop the group. This will require a large amount of computation, fast I / O interface, and easy to use to write software.For large rope control system has two very different solutions. The first is to use a separate console, the problem in general terms, this console should be installed in the appropriate location of the operator perspective. However, this not only from one angle to another angle, but still can not get an instruction to another instruction from the control. These difficulties have been partially resolved. Installed in different locations through the use of video cameras, and these cameras connected to the three-dimensional display graphics, these graphics enables the operator to observe from the perspective of any of the three coordinates in the expected direction of rope movement. These operators can make from a console for him at the actual angle, or closed circuit camera practical perspective, to observe the movement of the rope on the screen. For the complex interrelated moving parts, makes the implementation of the above observation Failure to control and find out easier.Another solution to the problem is a distributed system that uses multiple light console. This will allow the different operators in the same way the different aspects of control gear, we have improved the manual control device. A vivid example is the flower in a vegetable market in central London, the Royal Opera House, the program uses the above, where the control console 240 with ten motors. Each console has five playback device, and has been open, so that each motor has been assigned to a single console. An operator and a console can control all the devices, however, often may be running a console platform screen upgrade, another console is a console on the transmission device, and the third console is used to the necessary backgroundin the background image down.(edge-type portable console allows the operator many advantages from the start to control the movement of the machine, and provide three-dimensional image display.)ConclusionA huge change in the rope control system, a workstation has been developed from a push-button to complex multi-user computerized control system. When the control system to buy rope, you can always find to meet your needs. Control system performance is the most important security and reliability. These are the true value of the property, and you can expect the price to buy a suitable way of security. With a certain product manufacturers to work, he will make you know how to install it. And he will make contact with you and the users, those users have with similar requests.译文:电动卷扬机的控制对于电动机的控制,我们所知道的最好的方式就是使用由许多点动式按钮组成的简单的手工操作台。

企业风险管理中英文对照外文翻译文献

企业风险管理中英文对照外文翻译文献

企业风险管理中英文对照外文翻译文献(文档含英文原文和中文翻译)原文:Risk ManagementThis chapter reviews and discusses the basic issues and principles of risk management, including: risk acceptability (tolerability); risk reduction and the ALARP principle; cautionary and precautionary principles. And presents a case study showing the importance of these issues and principles in a practical management context. Before we take a closer look, let us briefly address some basic features of risk management.The purpose of risk management is to ensure that adequate measures are taken to protect people, the environment, and assets from possible harmful consequences of the activities being undertaken, as well as to balance different concerns, in particular risks and costs. Risk management includes measures both to avoid the hazards and toreduce their potential harm. Traditionally, in industries such as nuclear, oil, and gas, risk management was based on a prescriptive regulating regime, in which detailed requirements were set with regard to the design and operation of the arrangements. This regime has gradually been replaced by a more goal-oriented regime, putting emphasis on what to achieve rather than on the means of achieving it.Risk management is an integral aspect of a goal-oriented regime. It is acknowledged that risk cannot be eliminated but must be managed. There is nowadays an enormous drive and enthusiasm in various industries and in society as a whole to implement risk management in organizations. There are high expectations that risk management is the proper framework through which to achieve high levels of performance.Risk management involves achieving an appropriate balance between realizing opportunities for gain and minimizing losses. It is an integral part of good management practice and an essential element of good corporate governance. It is an iterative process consisting of steps that, when undertaken in sequence, can lead to a continuous improvement in decision-making and facilitate a continuous improvement in performance.To support decision-making regarding design and operation, risk analyses are carried out. They include the identification of hazards and threats, cause analyses, consequence analyses, and risk descriptions. The results are then evaluated. The totality of the analyses and the evaluations are referred to as risk assessments. Risk assessment is followed by risk treatment, which is a process involving the development and implementation of measures to modify the risk, including measures designed to avoid, reduce (“optimize”), transfer, or retain the risk. Risk transfer means sharing with another party the benefit or loss associated with a risk. It is typically affected through insurance. Risk management covers all coordinated activities in the direction and control of an organization with regard to risk.In many enterprises, the risk management tasks are divided into three main categories: strategic risk, financial risk, and operational risk. Strategic risk includes aspects and factors that are important for the e nterprise’s long-term strategy and plans,for example mergers and acquisitions, technology, competition, political conditions, legislation and regulations, and labor market. Financial risk includes the enterprise’s financial situation, and includes: Market risk, associated with the costs of goods and services, foreign exchange rates and securities (shares, bonds, etc.). Credit risk, associated with a debtor’s failure to meet its obligations in accordance with agreed terms. Liquidity risk, reflecting lack of access to cash; the difficulty of selling an asset in a timely manner. Operational risk is related to conditions affecting the normal operating situation: Accidental events, including failures and defects, quality deviations, natural disasters. Intended acts; sabotage, disgruntled employees, etc. Loss of competence, key personnel. Legal circumstances, associated for instance, with defective contracts and liability insurance.For an enterprise to become successful in its implementation of risk management, top management needs to be involved, and activities must be put into effect on many levels. Some important points to ensure success are: the establishment of a strategy for risk management, i.e., the principles of how the enterprise defines and implements risk management. Should one simply follow the regulatory requirements (minimal requirements), or should one be the “best in the class”? The establishment of a risk management process for the enterprise, i.e. formal processes and routines that the enterprise is to follow. The establishment of management structures, with roles and responsibilities, such that the risk analysis process becomes integrated into the organization. The implementation of analyses and support systems, such as risk analysis tools, recording systems for occurrences of various types of events, etc. The communication, training, and development of a risk management culture, so that the competence, understanding, and motivation level within the organization is enhanced. Given the above fundamentals of risk management, the next step is to develop principles and a methodology that can be used in practical decision-making. This is not, however, straightforward. There are a number of challenges and here we address some of these: establishing an informative risk picture for the various decision alternatives, using this risk picture in a decision-making context. Establishing an informative risk picture means identifying appropriate risk indices and assessments ofuncertainties. Using the risk picture in a decision making context means the definition and application of risk acceptance criteria, cost benefit analyses and the ALARP principle, which states that risk should be reduced to a level which is as low as is reasonably practicable.It is common to define and describe risks in terms of probabilities and expected values. This has, however, been challenged, since the probabilities and expected values can camouflage uncertainties; the assigned probabilities are conditional on a number of assumptions and suppositions, and they depend on the background knowledge. Uncertainties are often hidden in this background knowledge, and restricting attention to the assigned probabilities can camouflage factors that could produce surprising outcomes. By jumping directly into probabilities, important uncertainty aspects are easily truncated, and potential surprises may be left unconsidered.Let us, as an example, consider the risks, seen through the eyes of a risk analyst in the 1970s, associated with future health problems for divers working on offshore petroleum projects. The analyst assigns a value to the probability that a diver would experience health problems (properly defined) during the coming 30 years due to the diving activities. Let us assume that a value of 1 % was assigned, a number based on the knowledge available at that time. There are no strong indications that the divers will experience health problems, but we know today that these probabilities led to poor predictions. Many divers have experienced severe health problems (Avon and Vine, 2007). By restricting risk to the probability assignments alone, important aspects of uncertainty and risk are hidden. There is a lack of understanding about the underlying phenomena, but the probability assignments alone are not able to fully describe this status.Several risk perspectives and definitions have been proposed in line with this realization. For example, Avon (2007a, 2008a) defines risk as the two-dimensional combination of events/consequences and associated uncertainties (will the events occur, what the consequences will be). A closely related perspective is suggested by Avon and Renan (2008a), who define risk associated with an activity as uncertaintyabout and severity of the consequences of the activity, where severity refers to intensity, size, extension, scope and other potential measures of magnitude with respect to something that humans value (lives, the environment, money, etc.). Losses and gains, expressed for example in monetary terms or as the number of fatalities, are ways of defining the severity of the consequences. See also Avon and Christensen (2005).In the case of large uncertainties, risk assessments can support decision-making, but other principles, measures, and instruments are also required, such as the cautionary/precautionary principles as well as robustness and resilience strategies. An informative decision basis is needed, but it should be far more nuanced than can be obtained by a probabilistic analysis alone. This has been stressed by many researchers, e.g. Apostolicism (1990) and Apostolicism and Lemon (2005): qualitative risk analysis (QRA) results are never the sole basis for decision-making. Safety- and security-related decision-making is risk-informed, not risk-based. This conclusion is not, however, justified merely by referring to the need for addressing uncertainties beyond probabilities and expected values. The main issue here is the fact that risks need to be balanced with other concerns.When various solutions and measures are to be compared and a decision is to be made, the analysis and assessments that have been conducted provide a basis for such a decision. In many cases, established design principles and standards provide clear guidance. Compliance with such principles and standards must be among the first reference points when assessing risks. It is common thinking that risk management processes, and especially ALARP processes, require formal guidelines or criteria (e.g., risk acceptance criteria and cost-effectiveness indices) to simplify the decision-making. Care must; however, be shown when using this type of formal decision-making criteria, as they easily result in a mechanization of the decision-making process. Such mechanization is unfortunate because: Decision-making criteria based on risk-related numbers alone (probabilities and expected values) do not capture all the aspects of risk, costs, and benefits, no method has a precision that justifies a mechanical decision based on whether the result is overor below a numerical criterion. It is a managerial responsibility to make decisions under uncertainty, and management should be aware of the relevant risks and uncertainties.Apostolicism and Lemon (2005) adopt a pragmatic approach to risk analysis and risk management, acknowledging the difficulties of determining the probabilities of an attack. Ideally, they would like to implement a risk-informed procedure, based on expected values. However, since such an approach would require the use of probabilities that have not b een “rigorously derived”, they see themselves forced to resort to a more pragmatic approach.This is one possible approach when facing problems of large uncertainties. The risk analyses simply do not provide a sufficiently solid basis for the decision-making process. We argue along the same lines. There is a need for a management review and judgment process. It is necessary to see beyond the computed risk picture in the form of the probabilities and expected values. Traditional quantitative risk analyses fail in this respect. We acknowledge the need for analyzing risk, but question the value added by performing traditional quantitative risk analyses in the case of large uncertainties. The arbitrariness in the numbers produced can be significant, due to the uncertainties in the estimates or as a result of the uncertainty assessments being strongly dependent on the analysts.It should be acknowledged that risk cannot be accurately expressed using probabilities and expected values. A quantitative risk analysis is in many cases better replaced by a more qualitative approach, as shown in the examples above; an approach which may be referred to as a semi-quantitative approach. Quantifying risk using risk indices such as the expected number of fatalities gives an impression that risk can be expressed in a very precise way. However, in most cases, the arbitrariness is large. In a semi-quantitative approach this is acknowledged by providing a more nuanced risk picture, which includes factors that can cause “surprises” r elative to the probabilities and the expected values. Quantification often requires strong simplifications and assumptions and, as a result, important factors could be ignored or given too little (or too much) weight. In a qualitative or semi-quantitative analysis, amore comprehensive risk picture can be established, taking into account underlying factors influencing risk. In contrast to the prevailing use of quantitative risk analyses, the precision level of the risk description is in line with the accuracy of the risk analysis tools. In addition, risk quantification is very resource demanding. One needs to ask whether the resources are used in the best way. We conclude that in many cases more is gained by opening up the way to a broader, more qualitative approach, which allows for considerations beyond the probabilities and expected values.The traditional quantitative risk assessments as seen for example in the nuclear and the oil & gas industries provide a rather narrow risk picture, through calculated probabilities and expected values, and we conclude that this approach should be used with care for problems with large uncertainties. Alternative approaches highlighting the qualitative aspects are more appropriate in such cases. A broad risk description is required. This is also the case in the normative ambiguity situations, as the risk characterizations provide a basis for the risk evaluation processes. The main concern is the value judgments, but they should be supported by solid scientific assessments, showing a broad risk picture. If one tries to demonstrate that it is rational to accept risk, on a scientific basis, too narrow an approach to risk has been adopted. Recognizing uncertainty as a main component of risk is essential to successfully implement risk management, for cases of large uncertainties and normative ambiguity.A risk description should cover computed probabilities and expected values, as well as: Sensitivities showing how the risk indices depend on the background knowledge (assumptions and suppositions); Uncertainty assessments; Description of the background knowledge, including models and data used.The uncertainty assessments should not be restricted to standard probabilistic analysis, as this analysis could hide important uncertainty factors. The search for quantitative, explicit approaches for expressing the uncertainties, even beyond the subjective probabilities, may seem to be a possible way forward. However, such an approach is not recommended. Trying to be precise and to accurately express what is extremely uncertain does not make sense. Instead we recommend a more openqualitative approach to reveal such uncertainties. Some might consider this to be less attractive from a methodological and scientific point of view. Perhaps it is, but it would be more suited for solving the problem at hand, which is about the analysis and management of risk and uncertainties.Source: Terje Aven. 2010. “Risk Management”. Risk in Technological Systems, Oct, p175-198.译文:风险管理本章回顾和讨论风险管理的基本问题和原则,包括:风险可接受性(耐受性)、风险削减和安全风险管理原则、警示和预防原则,并提出了一个研究案例,说明在实际管理环境中这些问题和原则的重要性。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

届本科毕业设计(论文)外文文献翻译学院:专业:姓名:学号:外文出处:Automating Manufacturing Systemswith PLCs附件:1、外文翻译2、外文原文附录1:外文资料翻译译文基于PLC的自动化制造系统15.梯形图逻辑函数主题:•数据处理、数学运算、数据转换、阵列操作、统计、比较、布尔量运算等函数•设计实例宗旨:•理解基本函数,允许计算和比较•了解使用了内存文件的数组函数15.1介绍梯行图逻辑输入触点和输出线圈之间允许简单的逻辑判断。

这些函数把基本的梯形图逻辑延伸到其他控制形式中。

例如,附加的定时器和计数器允许基于事件的控制。

在下图15.1中有一个较长的关于这些函数的表。

这包括了组合逻辑和事件函数。

本章将会研究数据处理和数值的逻辑。

下一章将介绍表、程序控制和一些输入和输出函数。

剩下的函数会在后面的章节中讨论图15.1 基本PLC函数分类大多数的函数会使用PLC 的存储单元获取值、储存值和跟踪函数状态。

一般大部分函数当输入值是“真”时,会被激活。

但是,有些函数,如延时断开定时器,可以在无输入时,保持激活状态。

其它的函数仅当输入由“假”变“真”时,才会被执行,这就是所谓的上升沿触发。

想想,一计数器仅仅是输入由“假”变“真”时才会计数,输入为“真”状态的持续时间并不影响函数动作。

而下降沿触发函数仅当输入由“真”变“假”时才会触发。

多数函数并非边沿触发:除非有规定说明函数不是边沿触发。

15.2数据处理 15.2.1传递函数有两种基本的传递函数;MOV(值,操作数) -把值传递到指定的存储位置。

MVM(值,标号,操作数) -把值传递到指定的存储位置,但是用标号来指定一个传递的位。

这个MOV 函数从一个存储空间取出一个值放置到另外一个存储空间里。

下图15.2给出了MOV 的基本用法。

当A 为“真”,MOV 函数把一个浮点数从原操作数传递到操作数存储位置。

原操作数地址中的数据没有改变。

当B 为“真”时,原操作数中的浮点数将被转换成整数存储在操作数存储区中。

浮点数会被四舍五入成整数。

当C 为“真”时,整数“123”将被存储在整数文件N7:23中。

MOV原操作数F8:07操作数N7:23MOV原操作数123操作数N7:23MOV原操作数F8:07操作数F8:23图15.2 MOV的基本用法下图15.3给出了更多更复杂的MOV函数用法。

当A为“真”时,第一个模块将会把值“123”送入N7:0,同时第二个模块将会把值“-9385”从N7:1 送到N7:2中(这个值之所以为负数,是因为我们使用了2S的compliment)。

对于基本的MOV函数使用中,二进制数值不是必要的;但是在MVM函数中,二进制数值却是必要的。

这个模块中从N7:3 移动二进制数值到N7:5中。

但是这些“位”在N7:4中仍为“ON”,操作数的其他位将不会受到影响。

请注意:N7:5的第一位N7:5/0在指令执行前后仍为“ON”,但是在N7:4中却不同,MVM 函数当应用在个别二进制位的处理中时非常有用,但是处理实数却是用处不大了。

MOV原操作数130dest N7:0MOV原操作数N7:1dest N7:2MVM原操作数N7:3标号N7:4dest N7:5MVM原操作数N7:3标号N7:4dest N7:6图15.3MOV和MVM函数的使用实例15.2.2数学函数数学函数将检索一个或多个值,执行一个操作然后把结果储存在内存中。

图15.4展示的是一个ADD函数从N7:4和F8:35中读取数据操,把他们转换成操作数的地址格式,把两个浮点数相加,结果储存在F8:36中。

该函数有两个原操作数记做“原操作数A”、“原操作数B”。

对于该函数来说原操作数顺序可以改变,但是这对于“减法函数”或“除法函数”等其他操作来说却不一定正确,下面列出了其他一些基本的数学函数。

其中的一些,如“取负”是一元的函数,也就是说它只有一个原操作数。

加原操作数A N7:04原操作数B F8:35操作数F8:36图15.4数学函数图15.5列出了数学函数的用法,多数函数的执行会给出我们期待的结果,第二个ADD函数从N7:3中取了一个值,加1然后送入原操作数,这就是通常所说的“自加”操作。

第一个DIV,执行操作整数25除以整数10,结果四舍五入为最接近的整数,这时,结果被储存在N7:6中。

NEG指令取走了新数“-10”,而不是源数据“0”,从N7:4取出的数据符号被取反,结果存入N7:7。

图15.5 数学函数例子函数、对数函数、取二次方根函数。

最后一个函数CPT能接受表达式并且可以执行一个复杂的运算。

图15.6 高级数学函数图15.7展示的是把表达式转化成梯形图逻辑。

转换的第一步是把表达式的变量存入PLC中没被使用过的存储区中。

接下来拥有很多嵌套运算的方程就可以被转化,例如LN函数。

这时LN函数的运算结果被保存在其他存储空间中,之后会被调用。

其它的一些操作会应用在相似的情况下。

(注意:这些方程可能应用在其他场合中,占用更少的存储空间。

)给定方程指定存储图15.7 用梯形图表示的方程和图15.7中一样的方程被应用于图15.8所示的CPT函数中。

存储区也和上图使用的一样。

该表达式被直接输进了PLC程序中。

图15.8 利用CPT函数计算数学函数可以导致诸如溢出,进位等状态标识位变化,注意要尽量避免出现像“溢出”这样的问题。

但是使用浮点数时这种问题会少一点。

而整数极易出现这样的问题,因为它们受到-32768—32767这样一个数据范围的限制。

15.2.3 转换函数梯形图中的转换函数列在了图15.9中。

例子中的函数将会从D存储区读取一个BCD码数据,然后把它转换为浮点数存储在F8:2中。

其它的函数将把二进制负数转换成BCD码数据,下面的函数也包含了弧度数和角度的转化。

图15.9 转换函数图15.10给出了转换函数的例子。

这些函数读取一个源数据后,开始转换,结束后储存结果。

TOD函数转换成BCD码将会出现“溢出”错误。

图15.10 转换例子15.2.4矩阵函数矩阵可以储存多列数据。

在PLC 中这将是一系列的整数数字,浮点数或者 其它类型的数据。

例如,假定我们测量和保存一块封装芯片的重量时要使用浮点数存储区F8:20。

每十分钟要读取一次重量数据,并且一小时后找出平均重量。

这一节我们将聚焦于矩阵中多组数据的处理技术,也就是说明书中所谓的“块”。

15.2.4.1-统计这些函数也是可以处理统计数据的。

图15.11列出了这些函数,当A 变为“真”A VE 函数的转换操作从存储区F8:0开始,并算出四个数的平均值。

控制字R6:1被用来跟踪运算的进程,并判断运算何时结束。

这些运算还有其它的一些是边沿触发的。

该次运算可能会需要经过多个扫描周期才能完成。

运算结束后,平均值被储存在F8:0中,同时R6:1/DN 位被置ON 。

图15.11统计函数如图15.12给出的统计函数例子,它拥有一个有四个字长从F8:0开始的数组数据。

每次执行平均值运算的结果储存在F8:4中,标准差储存在F8:5中。

一系列数值被存放在从F8:0到F8:3的按升序排列的存储区中。

为防止出现数据覆盖现象,每个函数都应该有自己的控制存储器。

同时触发该函数与其他运算不是一个明智的选择,因为在计算期间该函数会移动数据,这会导致错误的结果。

图15.12 统计运算图15.13给出了最基本的块函数。

这个COP函数将会拷贝从N7:50到N7:40拥有十个数据的数组。

FAL函数将会通过一个表达式执行数学运算。

FSC函数通过使用表达式允许数组之间进行比较。

FLL函数会利用一个数据把块存储区填充起来。

图15.13块操作函数图15.14显示的是拥有不同地址模式的FAL函数使用例子。

第一个FAL函数将会执行下列运算:N7:5=N7:0+5, N7:6=N7:1+5, N7:7=N7:2+5, N8:7=N7:3+5, N7:9=N7:4+5.第二个FAL函数中在表达式值之前缺少“#”标识,因此运算将变为:N7:5=N7:0+5, N7:6=N7:0+5, N7:7=N7:0+5, N8:7=N7:0+5, N7:9=N7:0+5.当B为真,且为模式2时该指令在每次扫描周期到来时执行两个运算。

最后一个FAL运算的结果为:N7:5=N7:0+5, N7:5=N7:1+5,N7:5=N7:2+5, N7:5=N7:3+5, N7:5=N7:4+5.最后一个操作貌似没什么用处,但是请注意,该运算是增值的。

在C上升沿到来时该运算都会执行一次。

每次扫描周期经过时,这几个运算将执行所有的5个操作一次。

用来指示每次扫描运算的编号,而插入一个号码也是有可能的。

由于有较大的数组,运算时间可能会很长,同时尝试每次扫描时执行所有运算也将会导致看门狗超时错误。

图15.14 文本代数函数例子15.3 逻辑函数15.3.1 数值比较图15.15所示为比较函数,先前的函数块是输出,它取代了输入联系。

例子展示的是比较两个浮点数大小的函数EQU。

如果数值相当,则输出位B3:5/1为真,否则为假。

其他形式的相等函数也裂了出来。

图15.16展示了六个基本的比较函数。

图右边是比较函数的操作例子,图15.16比较函数例子图15.16中的梯形图程序在图15.17中又用CMP函数表达了一遍,该函数可以使用文本表达式。

图15.17使用CMP函数的等价表述表达式可以被用来做许多复杂运算,如图15.18所示。

表达式将会判断F8:1是否介于F8:0和F8:2之间。

图15.18一个更加复杂的比较函数LIM和MEQ函数如图15.19所示。

前三个函数将会判断待检测值是否处在范围内。

如果上限值大于下限值且待测值介于限值之间或者等于限值,那么输出为真。

如果下限值大于上限值,则只有待测值在范围之外时输出值才为真。

上限下限下限上限图15.20LIM函数的线段表示图15.20展示的线段可以帮助我们判断待测数值是否在限值内。

在图15.21中使用FSC指令进行文件与文件的比较也是被允许的。

该指令使用了控制字R6:0。

它将解释表达式10次,做两次比较在每次逻辑扫描中(模式2)。

比较为:F8:10<F8:0 , F8:11<F8:0 然后F8:12<F8:0 , F8:13<F8:0 然后F8:14<F8:0 , F8:15<F8:0 然后F8:16<F8:0 , F8:17<F8:0 然后是F8:18<F8:0 , F8:19<F8:0 。

函数将会继续执行除非发现一个错误状态或者完成比较。

如果比较完成没有发现错误状态那么输出A将为“真”。

在一个扫描周期中该模式也会一直执行所有比较。

或者当函数前面的输入为真时就更新增量---在这种情况下输入为一条线,而一直为真。

相关文档
最新文档