5 Foreign Literature and Translation
Graduation Thesis: English References and Translation
Inventory Management: Inventory Control

When people hear "inventory control," many interpret it as "warehouse management," which is a serious distortion. The traditional, narrow view covers only the physical control of materials in the warehouse: counting, record-keeping, storage, and distribution, together with measures such as corrosion prevention and temperature and humidity control that keep the stored goods in optimal condition. That is only one form of inventory control, which might be called physical inventory control. How, then, should inventory control be understood in the broad sense? Inventory control should be tied to the company's financial and operating objectives, in particular operating cash flow. By optimizing the entire demand and supply chain management (DSCM) process, setting reasonable ERP control strategies, and supporting them with appropriate information-processing tools, the aim is to reduce inventory levels as far as possible, and with them the risks of obsolescence and devaluation, while still guaranteeing on-time delivery. In this sense, physical inventory control is merely one means, one necessary part, of achieving the financial goals of overall inventory control. From the perspective of organizational functions, physical inventory control is mainly the responsibility of warehouse management, whereas broad inventory control belongs to demand and supply chain management and to the company as a whole.

Why, even now, do many people equate inventory control with physical inventory control? Two reasons cannot be ignored. First, our enterprises do not attach enough importance to inventory control. In businesses that are doing relatively well, as long as cash is available, few people think about inventory turnover.
Inventory control is simply read as warehouse management; only when money runs short does anyone look at the inventory problem, and the conclusions drawn are often simplistic: purchasing bought too much, or the warehouse department failed. Second, ERP itself has been misleading. Simple invoicing software audaciously calls itself ERP, and companies assume that such so-called ERP can reduce inventory, as if inventory control could be achieved by a small software package. Even world-class ERP vendors such as SAP and BAAN define the warehouse-management functionality of their modules as "inventory management" or "inventory control." This leaves those who never quite understood inventory control even less sure what it is.

Understood broadly, inventory control should include the following. First, the fundamental purpose of inventory control. In so-called world-class manufacturing, two key performance indicators (KPIs) are customer satisfaction and inventory turns, and inventory turns is in fact the fundamental objective of inventory control. Second, the means of inventory control. Increasing inventory turns cannot rely solely on physical inventory control; it is the output of the entire demand and supply chain management process. Besides warehouse management, the more important links in that process include forecasting and order processing, production planning and control, materials planning and purchasing control, inventory planning and forecasting itself, distribution and delivery strategies for finished goods and raw materials, and even customs management. Running through the whole demand and supply chain process is the management of information flow and capital flow.
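A quick way to make the "inventory turns" KPI mentioned above concrete: it is conventionally computed as cost of goods sold divided by average inventory. The sketch below uses invented figures purely for illustration; the source text does not prescribe a formula.

```python
def inventory_turns(cogs: float, beginning_inventory: float,
                    ending_inventory: float) -> float:
    """Annual inventory turnover: COGS / average inventory."""
    average_inventory = (beginning_inventory + ending_inventory) / 2
    return cogs / average_inventory

def days_of_inventory(turns: float) -> float:
    """Average number of days a unit sits in stock."""
    return 365 / turns

# Invented figures for a one-year period:
turns = inventory_turns(cogs=1_200_000,
                        beginning_inventory=260_000,
                        ending_inventory=340_000)
print(round(turns, 1))                      # 4.0 turns per year
print(round(days_of_inventory(turns), 1))   # 91.2 days on hand
```

Raising turns (or equivalently cutting days on hand) is exactly the "fundamental objective" the text describes: the same sales are supported by less capital tied up in stock.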
In other words, inventory spans every link of the demand and supply management process; to achieve the fundamental purpose of inventory control, every link must be controlled, not just the physical inventory on hand. Third, the organizational structure and assessment of inventory control. Since inventory control is the output of the demand and supply chain management process, achieving its fundamental purpose requires an organizational structure compatible with that process. Even now, many companies have only a purchasing department, with the warehouse reporting to it. This falls far short of what inventory control requires. Analysis of the demand and supply chain process shows that purchasing and warehouse management are typical executive arms, while inventory control should focus on prevention. It is very difficult for the executive branch to "prevent inventory," for the simple reason that its assessment indicators are largely about ensuring supply (to production and to customers). How to design reasonable demand and supply chain management processes in light of the actual situation, and then set up a corresponding rational organizational structure, is a question many of our enterprises still need to explore.

The Role of Inventory Control

Inventory management is an important part of business management. In production and operating activities, inventory management must ensure the plant's demand for raw materials and spare parts, while also directly affecting purchasing and sales activities. Keeping inventory liquid accelerates cash flow; minimizing the funds tied up in stock while still securing supply directly affects operational efficiency.
On the premise of meeting production and operating needs, inventories should be kept at a reasonable level; inventory should be controlled dynamically, with orders proposed at the right time and in the right quantity to avoid overstocking or stockouts; the warehouse footprint and the total cost of inventory should be reduced; and the funds tied up in stock should be controlled to accelerate cash flow.

Problems arising from excessive inventory: it increases warehouse space and storage costs, and thereby product costs; it ties up large amounts of working capital, leaving capital sluggish, which not only increases the interest burden but also sacrifices the time value and opportunity income of that money; it causes physical and intangible losses of finished products and raw materials; it leaves large amounts of enterprise resources idle, hindering their rational allocation and optimization; and it covers up the contradictions and problems of the whole production and operating process, which is not conducive to raising the level of management.

Problems arising from insufficient inventory: service levels decline, hurting sales profit and corporate reputation; the production system suffers from inadequate supply of raw materials or other materials, disrupting the normal production process; shortening lead times requires placing orders more often, raising ordering (production) costs; and the balance of production and the assembly of complete sets are affected.

Notes

Inventory management should particularly consider the following two questions. First, according to the sales plan and the planned circulation of goods in the market, where should stock be held, and how much? Second, starting from service level and economic benefit, how should inventories be secured and replenished? Both questions relate to the functions of inventory in the logistics process. In general, the functions of inventory are: (1) to prevent interruption.
Shortening the time from order receipt to delivery guarantees service quality while preventing stockouts. (2) To keep inventory at a proper level, saving inventory costs. (3) To reduce logistics costs, by replenishing reasonable quantities of goods at appropriate intervals, thereby eliminating or smoothing sales fluctuations. (4) To keep production planning smooth, again eliminating or smoothing sales fluctuations. (5) A display function. (6) A reserve function: buying in bulk when prices fall reduces losses and prepares for disasters and other contingencies.

Regarding warehouses (stock locations), both number and location must be considered. A distribution center should, as far as possible, be set at a place suited to customer needs; a central store that mainly replenishes distribution centers has no particular locational requirements. Once the stocking base is established, it must be decided which commodities are stored at each location.
Foreign References: Translation and Original
1 Introduction
This chapter provides an introduction to NS2. In particular, information on installing NS2 is given in Chapter 2. Chapter 3 introduces the NS2 directories and conventions. Chapter 4 describes the main steps in an NS2 simulation. A simple simulation example is given in Chapter 5. Finally, a summary is given in Chapter .8.
2 Installation
The component-wise approach is to obtain the pieces above and install them individually. This option saves downloading time and a great deal of memory space. However, it may be troublesome for beginners, and is therefore recommended only for experienced users.
Contents

1 Introduction
2 Installation
    Installing an All-In-One NS2 Suite on Unix-Based Systems
    Installing an All-In-One NS2 Suite on Windows-Based Systems
3 Directories and Convention
    Directories
    Convention
4 Running NS2 Simulation
    NS2 Program Invocation
    Main NS2 Simulation Steps
5 A Simulation Example
6 Summary

1 Introduction

The network simulator, commonly called NS2, is an event-driven simulation tool that has proved useful in studying the dynamic nature of communication networks.
Simulation of wired as well as wireless network functions and protocols (e.g., routing algorithms, TCP, UDP) can be done using NS2. In general, NS2 provides users with a way of specifying such network protocols and simulating their corresponding behaviors.
College Students' Mental Health: Latest Translation of Foreign Literature
As evidenced by the high-profile cases of XXX students at Virginia Tech and Northern XXX, these incidents are not representative of the broader public health XXX students, as they are among same-aged non-students, and the number and XXX; they are not XXX illness.

One of the major XXX: lack of knowledge about available resources, and XXX must work to reduce these barriers XXX and support for mental health.

XXX students is the lack of resources available on XXX form of counseling or mental health services; these resources are often overburdened and underfunded. This can lead to long wait times for appointments and limited access to specialized care. To address this issue, colleges and XXX and services.

It is also XXX college students, such as those from XXX or those with pre-existing mental health conditions.
Foreign Original and Translation
I. Foreign Original

Subject: Financial Analysis with the DuPont Ratio: A Useful Compass
Source: Steven C. Isberg, Ph.D.

Financial Analysis and the Changing Role of Credit Professionals
In today's dynamic business environment, it is important for credit professionals to be prepared to apply their skills both within and outside the specific credit management function. Credit executives may be called upon to provide insights regarding issues such as strategic financial planning, measuring the success of a business strategy or determining the viability of an acquisition candidate. Even so, the normal duties involved in credit assessment and management call for the credit manager to be equipped to conduct financial analysis in a rapid and meaningful way.

Financial statement analysis is employed for a variety of reasons. Outside investors are seeking information as to the long run viability of a business and its prospects for providing an adequate return in consideration of the risks being taken. Creditors desire to know whether a potential borrower or customer can service loans being made. Internal analysts and management utilize financial statement analysis as a means to monitor the outcome of policy decisions, predict future performance targets, develop investment strategies, and assess capital needs. As the role of the credit manager is expanded cross-functionally, he or she may be required to answer the call to conduct financial statement analysis under any of these circumstances. The DuPont ratio is a useful tool in providing both an overview and a focus for such analysis.

A comprehensive financial statement analysis will provide insights as to a firm's performance and/or standing in the areas of liquidity, leverage, operating efficiency and profitability. A complete analysis will involve both time series and cross-sectional perspectives. Time series analysis will examine trends using the firm's own performance as a benchmark.
Cross sectional analysis will augment the process by using external performance benchmarks for comparison purposes. Every meaningful analysis will begin with a qualitative inquiry as to the strategy and policies of the subject company, creating a context for the investigation. Next, goals and objectives of the analysis will be established, providing a basis for interpreting the results. The DuPont ratio can be used as a compass in this process by directing the analyst toward significant areas of strength and weakness evident in the financial statements.

The DuPont ratio is calculated as follows:

ROE = (Net Income/Sales) x (Sales/Average Assets) x (Average Assets/Average Equity)

The ratio provides measures in three of the four key areas of analysis, each representing a compass bearing, pointing the way to the next stage of the investigation.

The DuPont Ratio Decomposition
The DuPont ratio is a good place to begin a financial statement analysis because it measures the return on equity (ROE). A for-profit business exists to create wealth for its owner(s). ROE is, therefore, arguably the most important of the key ratios, since it indicates the rate at which owner wealth is increasing. While the DuPont analysis is not an adequate replacement for detailed financial analysis, it provides an excellent snapshot and starting point, as will be seen below.

The three components of the DuPont ratio, as represented in the equation, cover the areas of profitability, operating efficiency and leverage. In the following paragraphs, we examine the meaning of each of these components by calculating and comparing the DuPont ratio using the financial statements and industry standards for Atlantic Aquatic Equipment, Inc. (Exhibits 1, 2, and 3), a retailer of water sporting goods.

Profitability: Net Profit Margin (NPM: Net Income/Sales)
Profitability ratios measure the rate at which either sales or capital is converted into profits at different levels of the operation.
The most common are gross, operating and net profitability, which describe performance at different activity levels. Of the three, net profitability is the most comprehensive since it uses the bottom line net income in its measure.

A proper analysis of this ratio would include at least three to five years of trend and cross-sectional comparison data. The cross sectional comparison can be drawn from a variety of sources. Most common are the Dun & Bradstreet Index of Key Financial Ratios and the Robert Morris Associates (RMA) Annual Statement Studies. Each of these volumes provides key ratios estimated for business establishments grouped according to industry (i.e., SIC codes). More will be discussed in regard to comparisons as our example is continued below. As is, over the two years, the subject company has become less profitable.

Leverage: The Leverage Multiplier (Average Assets/Average Equity)
Leverage ratios measure the extent to which a company relies on debt financing in its capital structure. Debt is both beneficial and costly to a firm. The cost of debt is lower than the cost of equity, an effect which is enhanced by the tax deductibility of interest payments in contrast to taxable dividend payments and stock repurchases. If debt proceeds are invested in projects which return more than the cost of debt, owners keep the residual, and hence, the return on equity is "leveraged up." The debt sword, however, cuts both ways. Adding debt creates a fixed payment required of the firm whether or not it is earning an operating profit, and therefore, payments may cut into the equity base. Further, the risk of the equity position is increased by the presence of debt holders having a superior claim to the assets of the firm.

II. Translation
Title: The DuPont Analysis System. Source: Steven C. Isberg, master's thesis, Institute of Transportation.
The DuPont Analysis System: Financial Analysis and the Changing Role of Credit Professionals
In today's dynamic business environment, it is very important for credit professionals to be able to apply their skills both within and outside the specific credit management function.
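The three-factor identity above can be checked numerically. The figures below are invented for illustration only (the article's own example uses the Atlantic Aquatic exhibits, which are not reproduced here):

```python
def dupont(net_income: float, sales: float,
           avg_assets: float, avg_equity: float):
    """ROE = net profit margin x asset turnover x leverage multiplier."""
    npm = net_income / sales          # profitability
    turnover = sales / avg_assets     # operating efficiency
    leverage = avg_assets / avg_equity  # financial leverage
    return npm, turnover, leverage, npm * turnover * leverage

# Invented statement figures:
npm, turnover, leverage, roe = dupont(net_income=50_000, sales=1_000_000,
                                      avg_assets=500_000, avg_equity=250_000)

# The product collapses to net_income / avg_equity, which is ROE by definition:
assert abs(roe - 50_000 / 250_000) < 1e-12
print(f"NPM={npm:.1%}  turnover={turnover:.1f}  leverage={leverage:.1f}  ROE={roe:.1%}")
```

The decomposition works as the "compass" the author describes: if ROE moves, the three printed factors show whether profitability, efficiency, or leverage moved it.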
Graduation Design (Thesis): Foreign Original and Translation
I. Foreign Original

MCU
A microcontroller (or MCU) is a computer-on-a-chip. It is a type of microprocessor emphasizing self-sufficiency and cost-effectiveness, in contrast to a general-purpose microprocessor (the kind used in a PC).

As control systems have spread into a wide range of applications and equipment has grown smaller and more intelligent, the single-chip microcontroller, with its small size, strong capability, low cost, and flexible use, has shown strong vitality. It generally has better anti-interference ability than comparable integrated circuits, adapts better to ambient temperature and humidity, and can run stably under industrial conditions. Single-chip microcontrollers are widely used in all kinds of instruments and meters, making instrumentation intelligent and improving measurement speed and accuracy while strengthening control functions. At the same time, with the advent of the information age, the structural weaknesses inherent in the traditional single-chip design have exposed many drawbacks: its speed, scale, and performance indicators are increasingly unable to meet users' needs, and the development and upgrading of single-chip chipsets face new challenges.

The Description of the AT89S52
The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of In-System Programmable Flash memory. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard 80C51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer.
By combining a versatile 8-bit CPU with In-System Programmable Flash on a monolithic chip, the Atmel AT89S52 is a powerful microcontroller which provides a highly-flexible and cost-effective solution to many embedded control applications.

The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, Watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full duplex serial port, on-chip oscillator, and clock circuitry. In addition, the AT89S52 is designed with static logic for operation down to zero frequency and supports two software selectable power saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue functioning. The Power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next interrupt or hardware reset.

Features
• Compatible with MCS-51® Products
• 8K Bytes of In-System Programmable (ISP) Flash Memory
  – Endurance: 1000 Write/Erase Cycles
• 4.0V to 5.5V Operating Range
• Fully Static Operation: 0 Hz to 33 MHz
• Three-level Program Memory Lock
• 256 x 8-bit Internal RAM
• 32 Programmable I/O Lines
• Three 16-bit Timer/Counters
• Eight Interrupt Sources
• Full Duplex UART Serial Channel
• Low-power Idle and Power-down Modes
• Interrupt Recovery from Power-down Mode
• Watchdog Timer
• Dual Data Pointer
• Power-off Flag

Pin Description

VCC
Supply voltage.

GND
Ground.

Port 0
Port 0 is an 8-bit open drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to port 0 pins, the pins can be used as high-impedance inputs. Port 0 can also be configured to be the multiplexed low-order address/data bus during accesses to external program and data memory. In this mode, P0 has internal pullups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification.
External pullups are required during program verification.

Port 1
Port 1 is an 8-bit bidirectional I/O port with internal pullups. The Port 1 output buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pullups. In addition, P1.0 and P1.1 can be configured to be the timer/counter 2 external count input (P1.0/T2) and the timer/counter 2 trigger input (P1.1/T2EX), respectively. Port 1 also receives the low-order address bytes during Flash programming and verification.

Port 2
Port 2 is an 8-bit bidirectional I/O port with internal pullups. The Port 2 output buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pullups. Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses (MOVX @ DPTR). In this application, Port 2 uses strong internal pull-ups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @ RI), Port 2 emits the contents of the P2 Special Function Register. Port 2 also receives the high-order address bits and some control signals during Flash programming and verification.

Port 3
Port 3 is an 8-bit bidirectional I/O port with internal pullups. The Port 3 output buffers can sink/source four TTL inputs. When 1s are written to Port 3 pins, they are pulled high by the internal pullups and can be used as inputs.
As inputs, Port 3 pins that are externally being pulled low will source current (IIL) because of the pullups. Port 3 also serves the functions of various special features of the AT89S52, as shown in the following table. Port 3 also receives some control signals for Flash programming and verification.

RST
Reset input. A high on this pin for two machine cycles while the oscillator is running resets the device. This pin drives high for 96 oscillator periods after the Watchdog times out. The DISRTO bit in SFR AUXR (address 8EH) can be used to disable this feature. In the default state of bit DISRTO, the RESET HIGH out feature is enabled.

ALE/PROG
Address Latch Enable (ALE) is an output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming. In normal operation, ALE is emitted at a constant rate of 1/6 the oscillator frequency and may be used for external timing or clocking purposes. Note, however, that one ALE pulse is skipped during each access to external data memory. If desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction. Otherwise, the pin is weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in external execution mode.

PSEN
Program Store Enable (PSEN) is the read strobe to external program memory. When the AT89S52 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.

EA/VPP
External Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset.
EA should be strapped to VCC for internal program executions. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming.

XTAL1
Input to the inverting oscillator amplifier and input to the internal clock operating circuit.

XTAL2
Output from the inverting oscillator amplifier.

Special Function Registers
Note that not all of the addresses are occupied, and unoccupied addresses may not be implemented on the chip. Read accesses to these addresses will in general return random data, and write accesses will have an indeterminate effect. User software should not write 1s to these unlisted locations, since they may be used in future products to invoke new features. In that case, the reset or inactive values of the new bits will always be 0.

Timer 2 Registers: Control and status bits are contained in registers T2CON and T2MOD for Timer 2. The register pair (RCAP2H, RCAP2L) are the Capture/Reload registers for Timer 2 in 16-bit capture mode or 16-bit auto-reload mode.

Interrupt Registers: The individual interrupt enable bits are in the IE register. Two priorities can be set for each of the six interrupt sources in the IP register.

Dual Data Pointer Registers: To facilitate accessing both internal and external data memory, two banks of 16-bit Data Pointer Registers are provided: DP0 at SFR address locations 82H-83H and DP1 at 84H-85H. Bit DPS = 0 in SFR AUXR1 selects DP0 and DPS = 1 selects DP1. The user should always initialize the DPS bit to the appropriate value before accessing the respective Data Pointer Register.

Power Off Flag: The Power Off Flag (POF) is located at bit 4 (PCON.4) in the PCON SFR. POF is set to "1" during power up. It can be set and reset under software control and is not affected by reset.

Memory Organization
MCS-51 devices have a separate address space for Program and Data Memory.
Up to 64K bytes each of external Program and Data Memory can be addressed.

Program Memory
If the EA pin is connected to GND, all program fetches are directed to external memory. On the AT89S52, if EA is connected to VCC, program fetches to addresses 0000H through 1FFFH are directed to internal memory and fetches to addresses 2000H through FFFFH are to external memory.

Data Memory
The AT89S52 implements 256 bytes of on-chip RAM. The upper 128 bytes occupy a parallel address space to the Special Function Registers. This means that the upper 128 bytes have the same addresses as the SFR space but are physically separate from SFR space. When an instruction accesses an internal location above address 7FH, the address mode used in the instruction specifies whether the CPU accesses the upper 128 bytes of RAM or the SFR space. Instructions that use direct addressing access the SFR space. For example, the following direct addressing instruction accesses the SFR at location 0A0H (which is P2).

MOV 0A0H, #data

Instructions that use indirect addressing access the upper 128 bytes of RAM. For example, the following indirect addressing instruction, where R0 contains 0A0H, accesses the data byte at address 0A0H, rather than P2 (whose address is 0A0H).

MOV @R0, #data

Note that stack operations are examples of indirect addressing, so the upper 128 bytes of data RAM are available as stack space.

Timer 0 and 1
Timer 0 and Timer 1 in the AT89S52 operate the same way as Timer 0 and Timer 1 in the AT89C51 and AT89C52.

Timer 2
Timer 2 is a 16-bit Timer/Counter that can operate as either a timer or an event counter. The type of operation is selected by bit C/T2 in the SFR T2CON (shown in Table 2). Timer 2 has three operating modes: capture, auto-reload (up or down counting), and baud rate generator. The modes are selected by bits in T2CON. Timer 2 consists of two 8-bit registers, TH2 and TL2. In the Timer function, the TL2 register is incremented every machine cycle.
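The overlapping address spaces described above can be pictured with a toy model: two separate stores share the 80H-0FFH address range, and the addressing mode, not the address itself, selects between them. This is only an illustration of the datasheet's rule, not an 8051 simulator:

```python
# Toy model of the AT89S52's overlapping 80H-0FFH address spaces:
# direct addressing reaches the SFRs, indirect addressing reaches upper RAM.
sfr = {}        # Special Function Register space
upper_ram = {}  # upper 128 bytes of on-chip RAM

def mov_direct(addr: int, data: int):
    """Models 'MOV addr, #data': direct addressing above 7FH hits SFR space."""
    sfr[addr] = data

def mov_indirect(r0: int, data: int):
    """Models 'MOV @R0, #data': indirect addressing hits upper RAM."""
    upper_ram[r0] = data

P2 = 0xA0                  # P2's SFR address, per the datasheet example
mov_direct(P2, 0x55)       # writes the P2 port latch
mov_indirect(0xA0, 0xAA)   # writes the RAM byte at 0A0H, *not* P2

# Same address, two physically separate locations:
assert sfr[P2] == 0x55 and upper_ram[0xA0] == 0xAA
```

This also explains the stack remark: PUSH/POP use indirect addressing, so the stack can live in the upper 128 bytes of RAM without ever touching the SFRs.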
Since a machine cycle consists of 12 oscillator periods, the count rate is 1/12 of the oscillator frequency. In the Counter function, the register is incremented in response to a 1-to-0 transition at its corresponding external input pin, T2. In this function, the external input is sampled during S5P2 of every machine cycle. When the samples show a high in one cycle and a low in the next cycle, the count is incremented. The new count value appears in the register during S3P1 of the cycle following the one in which the transition was detected. Since two machine cycles (24 oscillator periods) are required to recognize a 1-to-0 transition, the maximum count rate is 1/24 of the oscillator frequency. To ensure that a given level is sampled at least once before it changes, the level should be held for at least one full machine cycle.

Interrupts
The AT89S52 has a total of six interrupt vectors: two external interrupts (INT0 and INT1), three timer interrupts (Timers 0, 1, and 2), and the serial port interrupt. These interrupts are all shown in Figure 10. Each of these interrupt sources can be individually enabled or disabled by setting or clearing a bit in Special Function Register IE. IE also contains a global disable bit, EA, which disables all interrupts at once. Note that Table 5 shows that bit position IE.6 is unimplemented. In the AT89S52, bit position IE.5 is also unimplemented. User software should not write 1s to these bit positions, since they may be used in future AT89 products. The Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and that bit will have to be cleared in software. The Timer 0 and Timer 1 flags, TF0 and TF1, are set at S5P2 of the cycle in which the timers overflow. The values are then polled by the circuitry in the next cycle.
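The count-rate arithmetic above (12 oscillator periods per machine cycle; two machine cycles to recognize a 1-to-0 transition) is easy to sanity-check:

```python
OSC_PERIODS_PER_MACHINE_CYCLE = 12  # per the datasheet

def timer_rate(fosc_hz: float) -> float:
    """Timer mode: TL2 increments once per machine cycle -> fosc/12."""
    return fosc_hz / OSC_PERIODS_PER_MACHINE_CYCLE

def max_counter_rate(fosc_hz: float) -> float:
    """Counter mode: a 1-to-0 transition needs two machine cycles -> fosc/24."""
    return fosc_hz / (2 * OSC_PERIODS_PER_MACHINE_CYCLE)

fosc = 12_000_000  # a common 12 MHz crystal, chosen here only as an example
print(timer_rate(fosc))        # 1000000.0 increments per second
print(max_counter_rate(fosc))  # 500000.0 counts per second
```

So with a 12 MHz crystal the timer ticks once per microsecond, while an external event stream faster than 500 kHz cannot be counted reliably.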
However, the Timer 2 flag, TF2, is set at S2P2 and is polled in the same cycle in which the timer overflows.

II. Translation

The Microcontroller
A single-chip microcomputer (microcontroller) is a microcomputer that integrates the central processing unit, memory, timer/counters, and input/output interfaces on a single integrated-circuit chip.
Foreign Literature: Original and Translation
Original

Introduction
The "jumping off" point for this paper is Reengineering the Corporation, by Michael Hammer and James Champy. The paper goes on to review the literature on BPR. It explores the principles and assumptions behind reengineering, looks for common factors behind its successes or failures, examines case studies, and presents alternatives to "classical" reengineering theory. The paper pays particular attention to the role of information technology in BPR. In conclusion, the paper offers some specific recommendations regarding reengineering.

Old Wine in New Bottles
The concept of reengineering traces its origins back to management theories developed as early as the nineteenth century. The purpose of reengineering is to "make all your processes the best-in-class." Frederick Taylor suggested in the 1880s that managers use process reengineering methods to discover the best processes for performing work, and that these processes be reengineered to optimize productivity. BPR echoes the classical belief that there is one best way to conduct tasks. In Taylor's time, technology did not allow large companies to design processes in a cross-functional or cross-departmental manner. Specialization was the state-of-the-art method to improve efficiency given the technology of the time. (Remainder omitted.)
Foreign References: Translation and Original
Guangdong University of Technology, Huali College
Undergraduate Graduation Design (Thesis): Foreign Reference Translation and Original
Department: Urban Construction   Major: Civil Engineering   Year: 2011   Class: Civil Engineering Class 9 of 2011   Student ID: 23031109000   Student: 刘林 (Liu Lin)   Supervisor: 卢集富 (Lu Jifu)   May 2015

Contents
I. Project Budget Monitoring and Control (translation)
II. Project Budget Monitor and Control
III. The Contractor's Role in Building Cost Reduction After Design (translation)
IV. The Contractor's Role in Building Cost Reduction After Design

I. Translation

(1) Project Budget Monitoring and Control
As market competition becomes ever fiercer, cost control grows more and more important in every project.
This paper discusses how, during the construction phase, the project manager can successfully control the project's budgeted cost. Many methods are discussed. It shows that, to succeed, the project manager must pay close attention to these methods.

1. Introduction
Surveys show that most projects run into the problem of exceeding the budget ……successfully control budgeted cost.

2. The Concept and Purpose of Project Control and Monitoring
Erel and Raz (2000) point out that the project control cycle includes measuring ……causes, deciding on corrective measures, and taking action. The purpose of monitoring is that corrective measures ……within the target range.

3. Establishing an Effective Control System
To achieve the budgeted-cost target, the project manager needs to establish a ……it is very helpful for ……to be monitored and controlled. Project success is closely tied to good communication ……(Diallo and Thuillier, 2005).

4. Monitoring and Control of Costs
4.1 Setting monitoring priorities
During the construction phase, many construction activities are based on the original ……used up. Fourth, the project manager should monitor high-risk activities; high-risk activities are the most ……important (Cotterell and Hughes, 1995).

4.2 Methods of cost control
A project's main costs include staff costs, material costs, and the cost of schedule delays. To control these costs, the project manager should first establish a cost control system: a) assign responsible staff for the management and analysis of financial data; b) ensure that all ……are allocated reasonably according to the project structure ……its changes -- accurately record all ……on the cost baseline ……combined with (scope, changes, schedule, quality). Since a construction project ……the result after allowing for the time value of money.
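The monitoring described in sections 2-4 amounts to comparing planned and actual cost per activity and flagging deviations early, so corrective action can be taken within the control cycle. Below is a minimal sketch using the standard earned-value cost variance; the cited papers do not prescribe this exact method, and the activity names and figures are invented:

```python
def cost_variance(earned_value: float, actual_cost: float) -> float:
    """CV = BCWP - ACWP: a negative value means the activity is over budget."""
    return earned_value - actual_cost

activities = [
    # (name, budgeted cost of work performed, actual cost of work performed)
    # Figures are invented for illustration.
    ("foundations", 120_000, 110_000),
    ("frame",       200_000, 235_000),
    ("services",     80_000,  79_000),
]

# Flag every activity whose actual cost has outrun the value of work done:
over_budget = [name for name, ev, ac in activities
               if cost_variance(ev, ac) < 0]
print(over_budget)  # ['frame'] -- candidates for corrective action
```

Run periodically against the cost baseline mentioned in item b), this kind of check is what turns record-keeping into control: the deviation is caught while corrective action is still possible.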
Graduation Design (Thesis) Foreign Material Translation (student copy)
Graduation Design Foreign Material Translation
School: School of Information Science and Engineering
Major: Software Engineering
Name: XXXXX
Student ID: XXXXXXXXX
Source of the foreign material: Think in Java
Attachments: 1. Translation of the foreign material; 2. The foreign original.
附件1:外文资料翻译译文网络编程历史上的网络编程都倾向于困难、复杂,而且极易出错。
程序员必须掌握与网络有关的大量细节,有时甚至要对硬件有深刻的认识。
一般地,我们需要理解连网协议中不同的“层”(Layer)。
而且对于每个连网库,一般都包含了数量众多的函数,分别涉及信息块的连接、打包和拆包;这些块的来回运输;以及握手等等。
这是一项令人痛苦的工作。
但是,连网本身的概念并不是很难。
我们想获得位于其他地方某台机器上的信息,并把它们移到这儿;或者相反。
这与读写文件非常相似,只是文件存在于远程机器上,而且远程机器有权决定如何处理我们请求或者发送的数据。
One of Java's great strengths is its "painless networking". The low-level details of networking have been abstracted away as far as possible and are handled by the JVM and Java's native installation. The programming model is that of a file; in fact, a network connection (a "socket") is wrapped in a system object, so it can be used with the same method calls as any other data stream. On top of that, Java's built-in multithreading is very convenient for handling the other networking problem of controlling many connections at once. This chapter explains Java's networking support through a series of easy-to-follow examples.
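The file-like socket model and the multithreading point above can be sketched in a few lines. This loopback echo example is not taken from the book itself; the class name and the port-0 idiom (let the OS pick any free port) are this sketch's own choices:

```java
import java.io.*;
import java.net.*;

public class LoopbackEcho {
    // Opens a server on a free local port, sends one line from a client,
    // and returns what the server echoed back. The server runs on its own
    // thread, illustrating one-thread-per-connection handling.
    static String echoOnce(String message) throws IOException, InterruptedException {
        try (ServerSocket server = new ServerSocket(0)) {   // port 0: any free port
            Thread echoer = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine());             // echo one line back
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            echoer.start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(message);       // writing to the socket, file-style
                String reply = in.readLine();
                echoer.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("hello"));
    }
}
```

Note how both ends read and write through ordinary stream classes, exactly as they would with a file.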
15.1 Identifying a machine

Of course, to tell machines apart, and to make sure you are connected to the particular machine you want, there must be some way of uniquely identifying each machine on the network. Early networks solved only the problem of giving machines unique names within the local network. Java, however, works within the whole Internet, which requires a way to uniquely identify a machine anywhere in the world. For this purpose the concept of the IP (Internet Protocol) address was adopted. IP exists in two forms: (1) the familiar DNS (Domain Name Service) form. My own domain name is …. So if I had a computer called Opus inside my domain, its domain name could be ….
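As a sketch of how Java exposes machine identification: java.net.InetAddress is the standard class, and getByName accepts either a DNS name or a dotted-quad IP string. The class name AddressLookup and the use of the loopback address (so the example needs no Internet access) are this example's own choices:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AddressLookup {
    public static void main(String[] args) throws UnknownHostException {
        // getByName resolves either a DNS name or a numeric IP string
        // into an InetAddress object identifying the machine.
        InetAddress addr = InetAddress.getByName("127.0.0.1");
        System.out.println(addr.getHostAddress());
    }
}
```

Passing a real DNS name such as a machine's domain name would work the same way, but would require a working name server.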
Foreign Literature: Original and Translation
Nantong University, School of Law, Politics and Management, June 2009

HOW DO THE CHINESE PERCEIVE HARMONIOUS CORPORATE CULTURE: An Empirical Study on Dimensions of Harmonious Corporate Culture

Lianke SONG, Hao YANG, Lan YANG

ABSTRACT The Sixth Plenary Session of the 16th Central Committee of the Communist Party of China points out that creating a harmonious culture is an important task in building a socialist harmonious society. Building a harmonious culture requires all companies to create harmonious cultures, because a company is a basic social unit. Hence, many Chinese companies advocate building a harmonious corporate culture, and scholars must study its basic theory. This study tried to answer two questions: what is harmonious corporate culture in the Chinese mind, and how do different Chinese perceive harmonious corporate culture? First, this paper analyzed the background of harmonious corporate culture in Chinese traditional culture and the needs of the era. Second, the authors designed an open-ended questionnaire and sent it to employees in Jiangsu and Shanghai; 329 questionnaires were collected and 291 were valid, a valid rate of 88.45%. Third, the study explored the dimensions of harmonious corporate culture and identified the differing viewpoints of different groups. Finally, the paper discussed the results and pointed out the limitations of this study and directions for future research. The results provide a basis for defining, measuring, analyzing, and creating harmonious corporate culture.

1. THEORETICAL BACKGROUND AND QUESTIONS

The Fourth Plenary Session of the 16th Central Committee of the Communist Party of China put forward building a socialist harmonious society, and the Sixth Plenary Session pointed out that creating a harmonious culture is an important task in building a socialist harmonious society.
Building a harmonious culture requires all companies to create harmonious cultures, because a company is a basic social unit [1]. Why do Chinese corporations advocate harmonious corporate culture? Chinese traditional culture and the needs of the era are likely responsible. Chinese philosophy has a history of several thousand years. Its origins are often traced back to the Book of Changes (yi jing), which introduced some of the most fundamental terms of Chinese philosophy. Its first flowering is generally considered to have been in about the 6th century BC, but it draws on an oral tradition that goes back to Neolithic times. The Tao Te Ching (dao de jing) of Lao Tzu (lao zi) and the Analects (lun yu) of Confucius (kong zi) both appeared around the 6th century BC, around the time of early Buddhist philosophy. Confucianism focuses on the fields of ethics and politics, emphasizing personal and governmental morality, correctness of social relationships, justice, traditionalism, and sincerity. Confucianism and legalism are responsible for creating the world's first meritocracy. Confucianism was and continues to be a major influence on Chinese culture. Harmonious culture respects the tradition of established virtue under Confucius's "harmony with differences" while extensively exploring our cultural resources and cultural ideas or beliefs. The Chinese schools of philosophy, except during the Qin Dynasty, could be both critical and tolerant of one another. Despite the debates and competition, they generally cooperated and shared ideas, which they would usually incorporate into their own. Harmony was a central concept in ancient Chinese philosophy. The Confucian, Taoist, Buddhist and Legalist schools, the major Chinese traditions, all prize "harmony" as an ultimate value, but they disagree on how to achieve it. Confucians in particular emphasize the single-character term for "harmony" (he), which appears in all of Confucianism's "Four Books and Five Classics" (si shu wu jing).
The most forceful articulation of the identification of personal and communal harmony comes from the Doctrine of the Mean (zhong yong), which defines harmony as a state of equilibrium where pleasure, anger, sorrow and joy are moderated and restrained, claiming that through it "all things in the universe attain the way". During the Industrial and Modern Ages, Chinese philosophy began to integrate concepts of Western philosophy, attempting to incorporate democracy, republicanism and industrialism. Mao Zedong added Marxism, Stalinism and other communist thought. The government of the People's Republic of China initiates Socialism with Chinese Characteristics. The theoretical bases of the harmonious socialist society are Marxism-Leninism, Mao Zedong Thought, Deng Xiaoping Theory, and the important thought of the "Three Represents" (that is, the CPC must always represent the development trend of China's advanced productive forces, the orientation of China's advanced culture, and the fundamental interests of the overwhelming majority of the people in China). Six main characteristics of a harmonious society are democracy and the rule of law, fairness and justice, integrity and fraternity, vitality, stability and order, and harmony between man and nature.
The principles observed in building a harmonious socialist society are as follows: people oriented; development in a scientific way; in-depth reform and opening up; democracy and the rule of law; properly handling the relationships between reform, development and stability; and the participation of the whole society under the leadership of the Party. The authors attempted a definition: harmonious corporate culture is the corporate culture that adheres to the people-oriented principle and takes harmony as its core concept, using good-faith management and scientific administration to achieve harmony among the enterprise, society and nature, and ultimately to make the enterprise develop harmoniously and healthily. Chinese traditional culture is the basis of harmonious corporate culture; the needs of the era set its direction. "Harmonious corporate culture" is a new concept, different from any existing one. What is harmonious corporate culture? This study answers the question by analyzing Chinese viewpoints collected through open-ended questionnaires.

Question 1: What is harmonious corporate culture in the Chinese mind?

Harmonious corporate culture is a new and special conception for the Chinese. General views can be found by searching for the dimensions of harmonious corporate culture. In fact, different people have different ideas; there may be differences among groups classified by sex, age, education and position. This study will find and explain those differences.

Question 2: How do different Chinese perceive harmonious corporate culture?

Today, many Chinese companies advocate building a harmonious corporate culture. Understanding its conception and characteristics is very important. This paper answers the two questions, which are the basis of this field.

2. METHODS

2.1 Sample and Procedure

The empirical analysis was carried out in Jiangsu and Shanghai.
Jiangsu's economic and social development has always been taking the lead in China. Shanghai is China's chief industrial and commercial centre and one of its leading centres of higher education and scientific research. Both lie at the centre of China's east coast. We can learn what modern Chinese are thinking and hoping by studying employees in Jiangsu and Shanghai. The number of questionnaires distributed could not be counted, because both paper and electronic versions were used. From January 2007 to January 2008, the authors sent questionnaires to employees working in Jiangsu and Shanghai; 329 questionnaires were returned and 291 were valid, a valid rate of 88.45%. Table 1 summarizes the key statistics for the sample used in the study.

Table 1: Characteristics of the sample

2.2 Measures

The authors designed an open-ended questionnaire based on the purpose of the study. The scale used a single question to collect information for question 1 of this study: "Please use ten words or ten sentences to describe harmonious corporate culture".

3. RESULTS

This research found a number of similar viewpoints on harmonious corporate culture in the collected questionnaires. The authors classified these viewpoints into 15 dimensions after holding 10 study group meetings. Some dimensions were identified on the basis of China's traditional culture and present policies. Table 2 lists the 15 dimensions in English and Chinese, because some dimensions have Chinese characteristics.

Table 2: Dimensions and frequencies of harmonious corporate culture

This study calculated the dimensions' frequencies for different groups to learn different people's ideal harmonious corporate culture. Table 3 shows statistics for male and female viewpoints on harmonious corporate culture.

Table 3: Frequency and order of harmonious corporate culture dimensions for females and males

4. DISCUSSION AND CONCLUSION

4.1 Results

Some companies advocate building harmonious corporate culture, and some boast that they already possess one, now that the central government has called on all of society to create a harmonious culture. But what is harmonious corporate culture? Some scholars have tried to explain it, but nobody had answered the question through empirical study. The authors answered question 1 by analyzing the collected data. Many standpoints were found, but some could be merged because they carry the same meaning in different words. The study group held 10 meetings to discuss the dimensions of harmonious corporate culture on the basis of the questionnaires. Finally, 15 dimensions were identified: people oriented; steady development; scientific administration; vitality; stability and order; fraternity and concord; unity and cooperation; fairness and impartiality; democratic participation; managing in good faith; pursuing excellence; social responsibility; energy conservation and environmental protection; incorporating things of diverse nature; and common development and win-win situation. This answers question 1: what is harmonious corporate culture in the Chinese mind? The dimensions were ranked by frequency, and people oriented ranked first. People orientation in China has three sources: Marx's study of humanity; the "people first" idea descending from Chinese history; and the new anthropocentrism [2]. The Chinese like to speak of being "people oriented" in connection with Chinese traditional culture. The genesis of people orientation is traceable to the Western Zhou Dynasty, and it became the core thought of Confucianism, which influenced the Chinese deeply.
Many archaic sayings concern people orientation, such as "The people are the most important element in a state; next are the gods of land and grain; least is the ruler himself [3]" (min wei gui, she ji ci zhi, jun wei qing). Many scholars also consider people orientation the core and basis of harmonious corporate culture [4][5]. This paper compared different groups' viewpoints to answer question 2: how do different Chinese perceive harmonious corporate culture? People oriented, unity and cooperation, vitality, and fraternity and concord were ranked 1 to 4 by both females and males. The identical results surprised the authors, but the fifth dimension differs: for females it is democratic participation, for males stability and order. Female status was lower than male status in ancient China; women had to comply with the three obediences and the four virtues (san cong si de). The three obediences (obey her father before marriage, her husband when married, and her sons in widowhood) and the four virtues (morality, proper speech, modest manner and diligent work) were spiritual fetters of wifely submission and virtue imposed on women in feudal society. Female status has been improving since female deputies attended the first National Congress of the Communist Party of China. Today, Chinese women think much of women's rights, so democratic participation is their fifth dimension. The ancient belief that "men's work centers around the outside, women's work centers around the home [6]" (nü zheng wei hu nei, nan zheng wei hu wai) came from the Book of Changes (yi jing). A man had to work hard in society to earn money and win honour for his family. Today both men and women work in government, companies, schools, hospitals and so on, but by traditional culture men still play the major role and assume primary responsibility in society and at home.
Change is fast and competition fierce in modern society, so men face great pressure. This is why men hope to live and work in a more stable environment, and why stability and order is their fifth dimension. People oriented, unity and cooperation, and vitality were ranked 1 to 3 by both managerial and nonmanagerial employees. Scientific administration and democratic participation were ranked fourth by managerial employees. Managerial employees look deeper and think further than nonmanagerial employees, because they stand at a higher level and hold more responsibility in the organization; they care about management questions. Fraternity and concord was ranked fourth by nonmanagerial employees, who are less concerned with the enterprise's overall operation and management than managerial employees are. They understand harmonious corporate culture through their own specific work and life. Nonmanagerial employees do specific tasks and need direct cooperation. They believe that staff members' civilized language and behaviour, mutual understanding, and a warm atmosphere of interpersonal relationships in the enterprise are very important aspects of harmonious corporate culture; they care about good relationships. Generally speaking, the differences between managerial and nonmanagerial employees in understanding the dimensions of harmonious corporate culture are closely related to their location in the organizational structure and their working content in the enterprise. People oriented was ranked first and unity and cooperation second by all persons whatever their educational background. Vitality was ranked third by all respondents except those holding a master's or doctoral degree.
Respondents whose highest qualification is a master's degree or above also ranked scientific administration second. Holders of advanced academic degrees have more opportunity to be promoted to managerial positions, so they think scientific administration is very important in a harmonious environment. Compared with other groups, the relatively well-educated group holding undergraduate degrees is more interested in the stability and order and fairness and impartiality dimensions. People in this group are the middle and high-level managers in the enterprise: they are not only familiar with the overall state of the enterprise but also understand deeply the living conditions of the internal staff. Therefore they pay more attention to the stability and order and fairness and impartiality dimensions. All groups ranked people oriented, unity and cooperation, and vitality as the three most important dimensions. These identical results show what the core contents of harmonious corporate culture are.

4.2 Limitations and Future Research

This study was only exploratory. The authors searched for the dimensions of harmonious corporate culture with an open-ended questionnaire, but the validity of the results needs to be proved by further studies. The authors will design a closed-ended questionnaire based on this study and collect new data; the dimensions of harmonious corporate culture will then be confirmed by exploratory and confirmatory factor analysis. This paper discussed only what harmonious corporate culture is; in the future, how to create it should be studied. The authors compared viewpoints across sex, position and education, but age, birthplace, nationality and work experience influence individual thought too; different opinions from different groups should be identified in future study. China should act not only as the defender of Chinese culture but also as an explorer and promoter of the new harmonious culture.
Harmony is the social theme of present-day China. Studying the basic theory of harmonious corporate culture will contribute to our society.

REFERENCES
[1] Lianke SONG, Dongtao YANG, Hao YANG. Why do companies create harmonious cultures? Comparing the influence of different corporate cultures on employees. In: Enterprise Management and Change in a Transitional Economy. 2008, pp. 595-603.
[2] Wanglin LU. On the theoretic source of "human oriented": analyzing the scientific factor of the "scientific development view" from one point of view. Hebei Academic Journal, 26 (5), 2006, pp. 228-230.
[3] Mencius. The Mencius. Warring States period.
[4] Liangbo CHENG, Lincheng JING. A search on creating harmonious corporate culture. Group Economy, (17), 2007, pp. 294-295.
[5] Xiangkui GENG. Extracting the kernel of Confucianism to create harmonious corporate culture. Theoretical Research, (3), 2007, pp. 47-48.
[6] The Book of Changes.
Accounting: Foreign Literature Translation on Financial Statements (Chinese-English)
Translation: Differences between Chinese and American Financial Statements

(1) Differences in the content of financial reports

1) An American financial report includes three basic financial statements. Beyond these, the financial report of a typical large American company also includes: the statement of shareholders' equity, income and comprehensive income, the management report, the independent auditor's report, management's discussion and analysis with selected 5-10 year data, and selected quarterly data.

2) Chinese financial reports pay little attention to interpretation, whereas American reports are fuller in content, method, and variety. The Chinese disclosure part comprises the accounting statements and the notes, the notes being the principal vehicle: they cover descriptions of items that differ from the book records, accounting policies and their changes, changes in accounting estimates, accounting errors, the balance-sheet date, related-party relationships and transactions, and so on, disclosed as footnotes and side notes. American disclosure is richer in content than the statements themselves, covering accounting policies, techniques, and reports on specific added items, with report formats reflecting the content and the business environment; departures from the consistency and comparability principles must also be disclosed, and disclosure takes many forms, such as side notes, footnotes, parenthetical notes, supplementary statements, schedules, and analytical reports.
(2) Comparison of financial statement formats

1) As to balance-sheet format, the American balance sheet may be presented in either account form or report form, whereas China prescribes a fixed account form. Moreover, our balance-sheet items are overly standardized: they cannot properly reflect special business items, nor do they suit special types of enterprises. American balance-sheet items, by contrast, are diversified; furthermore, the American financial accounting standards also rest on balance-sheet elements covering owners' investments and distributions, which Chinese accounting standards lack.

2) As to income-statement format, the United States uses the multi-step form: income-statement items fall into operating profit and non-operating profit, though the meanings differ from ours. China's operating profit is narrower in scope than America's; investment income, for example, counts as operating profit in the United States but not in China. In addition, Chinese income-statement items are more standardized and rigid than American ones; the American income statement is organized simply by category and item. Reported revenue may be tied to sales revenue and other income, or to interest income, rental income, and individual investment gains; on the cost side, costs are not strictly divided into administrative, financial, and selling costs, and recurring selling expenses, general administrative expenses, interest expense, and net interest income are each presented separately.
Foreign Literature Translation
Environmental Management Accounting (EMA) as the Trend of Management Accounting
Christine Jasch

Abstract: Why should organizations and accountants care about environmental issues? Pressure over environmental performance and its disclosure from supply chains, finance providers, regulators, and other stakeholders keeps raising organizations' environment-related costs. At the same time, the view that improving environmental performance can bring potential monetary benefits is gaining acceptance, yet traditional accounting practice cannot adequately supply the information needed for environmental management and the strategic decisions connected with it. EMA has been promoted and advanced through the establishment of the EMA working group under the United Nations Division for Sustainable Development and the publications it has sponsored. Recently, the International Federation of Accountants issued a guidance document on EMA, which will further encourage its adoption among accountants. This special issue of the Journal of Cleaner Production on EMA focuses on its methodological background and on case-study experience from Australia, Austria, Argentina, Canada, Japan, and Lithuania.

Body text: Environmental issues, with their associated costs, revenues and benefits, are receiving growing attention from citizens, governments and corporate leaders in most countries of the world. But there is an increasingly broad consensus that conventional accounting cannot provide accurate information to properly support decision-making on environmental management responsibilities. To fill this gap, the emerging field of EMA has been receiving steadily growing attention. In the early 1990s, the US EPA was the first national agency to set up a formal program to promote the adoption of EMA. Since then, organizations in 30 countries have begun promoting and implementing many different types of environment-related EMA initiatives. The broad attention to EMA owes much to its advocacy by the United Nations Division for Sustainable Development and the publication of the EMA books it commissioned. The International Federation of Accountants decided to commission a guidance document on EMA, building on the first two EMA publications issued by the UN Division's EMA working group, in order to integrate the best available information on EMA while making the necessary updates and additions. The document is neither a standard with prescriptive requirements nor a descriptive research report; it is intended as guidance, a middle ground between regulatory requirements, standards and pure information. Its goal is to provide an overall framework and a fairly comprehensive definition of EMA, consistent as far as possible with other existing, widely used environmental accounting frameworks with which EMA must work, so as to reduce some of the international confusion over this important topic.
Architecture: Foreign Literature with Translation
Architecture in a Climate of Change, pages 52-62

Low energy techniques for housing

It would appear that, for the industrialised countries, the best chance of rescue lies with the built environment, because buildings in use or in the course of erection are the biggest single indirect source of carbon emissions generated by burning fossil fuels, accounting for over 50 per cent of total emissions. If you add the transport costs generated by buildings, the UK government estimate is 75 per cent. It is the built environment which is the sector that can most easily accommodate fairly rapid change without pain. In fact, upgrading buildings, especially the lower end of the housing stock, creates a cluster of interlocking virtuous circles.

Construction systems

Having considered the challenge presented by global warming and the opportunities to generate fossil-free energy, it is now time to consider how the demand side of the energy equation can respond to that challenge. The built environment is the greatest sectoral consumer of energy and, within that sector, housing is in pole position, accounting for 28 per cent of all UK carbon dioxide (CO2) emissions. In the UK housing has traditionally been of masonry and since the early 1920s this has largely been of cavity construction. The purpose was to ensure that a saturated external leaf would have no physical contact with the inner leaf apart from wall ties, and that water would be discharged through weep holes at the damp-proof course level. Since the introduction of thermal regulations, initially deemed necessary to conserve energy rather than the planet, it has been common practice to introduce insulation into the cavity. For a long time it was mandatory to preserve a space within the cavity, and a long rearguard battle was fought by the traditionalists to preserve this 'sacred space'. Defeat was finally conceded when some extensive research by the Building Research Establishment found that there was no greater risk of damp penetration with filled
cavities, and in fact damp through condensation was reduced. Solid masonry walls with external insulation are common practice in continental Europe and are beginning to make an appearance in the UK. In Cornwall the Penwith Housing Association has built apartments of this construction on the sea front, perhaps the most challenging of situations. The advantages of masonry construction are:
● It is a tried and tested technology familiar to house building companies of all sizes.
● It is durable and generally risk free as regards catastrophic failure, though not entirely. A few years ago the entire outer leaf of a university building in Plymouth collapsed due to the fact that the wall ties had corroded.
● Exposed brickwork is a low maintenance system; maintenance demands rise considerably if it receives a rendered finish.
● From the energy efficiency point of view, masonry homes have a relatively high thermal mass, which is considerably improved if there are high density masonry internal walls and concrete floors.

Framed construction

Volume house builders are increasingly resorting to timber-framed construction with a brick outer skin, making them appear identical to full masonry construction. The attraction is the speed of erection, especially when elements are fabricated off site. However, there is an unfortunate history behind this system due to shortcomings in quality control. This can apply to timber which has not been adequately cured or seasoned. Framed buildings need to have a vapour barrier to walls as well as roofs.
With timber framing it is difficult to avoid piercing the barrier. There can also be problems achieving internal fixings. For the purist, the ultimate criticism is that it is illogical to have a framed building clad in masonry when it cries out for a panel, boarded, slate or tile hung external finish. Pressed steel frames for homes are now being vigorously promoted by the steel industry. The selling point is again speed of erection, but with the added benefit of a guaranteed quality in terms of strength and durability of the material. From the energy point of view, framed buildings can accommodate high levels of insulation but have relatively poor thermal mass unless this is provided by floors and internal walls.

Innovative techniques

Permanent Insulation Formwork Systems (PIFS) are beginning to make an appearance in Britain. The principle behind PIFS is the use of precision moulded interlocking hollow blocks made from an insulation material, usually expanded polystyrene. They can be rapidly assembled on site and then filled with pump grade concrete. When the concrete has set, the result is a highly insulated wall ready for the installation of services and internal and exterior finishes. They can achieve a U-value as low as 0.11 W/m2K. Above three storeys the addition of steel reinforcement is necessary. The advantages of this system are:
● Design flexibility; almost any plan shape is possible.
● Ease and speed of erection; skill requirements are modest, which is why it has proved popular with the self-build sector. Experienced erectors can achieve 5 m2 per man hour for erection and placement of concrete.
● The finished product has high structural strength together with considerable thermal mass and high insulation value.

Solar design

Passive solar design

Since the sun drives every aspect of the climate it is logical to describe the techniques adopted in buildings to take advantage of this fact as 'solar design'.
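To put U-values such as the 0.11 W/m2K quoted above into perspective, steady-state fabric heat loss through a wall follows Q = U x A x deltaT. A minimal sketch: the wall area, the temperature difference, and the comparison U-value for an insulated cavity wall are all assumptions of this example, not figures from the text:

```java
public class FabricHeatLoss {
    // Steady-state fabric heat loss through an element: Q = U * A * deltaT (watts).
    static double heatLossW(double uValue, double areaM2, double deltaT) {
        return uValue * areaM2 * deltaT;
    }

    public static void main(String[] args) {
        // Assumed: a 20 m2 wall, 20 degC inside, 0 degC outside.
        System.out.printf("PIFS wall (U = 0.11):           %.1f W%n",
                heatLossW(0.11, 20.0, 20.0));
        // Assumed comparison U-value for an insulated cavity wall.
        System.out.printf("Insulated cavity wall (U = 0.45): %.1f W%n",
                heatLossW(0.45, 20.0, 20.0));
    }
}
```

The linear dependence on U is why driving the U-value down is the most direct route to cutting a dwelling's fabric losses.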
The most basic response is referred to as 'passive solar design'. In this case buildings are designed to take full advantage of solar gain without any intermediate operations. Access to solar radiation is determined by a number of conditions:
● the sun's position relative to the principal facades of the building (solar altitude and azimuth);
● site orientation and slope;
● existing obstructions on the site;
● potential for overshadowing from obstructions outside the site boundary.
One of the methods by which solar access can be evaluated is the use of some form of sun chart. Most often used is the stereographic sun chart, in which a series of radiating lines and concentric circles allow the position of nearby obstructions to insolation, such as other buildings, to be plotted. On the same chart a series of sun path trajectories are also drawn (usually one arc for the 21st day of each month); also marked are the times of the day. The intersection of the obstructions' outlines and the solar trajectories indicates times of transition between sunlight and shade. Normally a different chart is constructed for use at different latitudes (at about two degree intervals). Sunlight and shade patterns cast by the proposed building itself should also be considered. Graphical and computer prediction techniques may be employed, as well as techniques such as the testing of physical models with a heliodon. Computer modelling of shadows cast by the sun from any position is offered by Integrated Environmental Solutions (IES) with its 'Suncast' program. This is a user-friendly program which should be well within normal undergraduate competence.
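The solar altitude plotted on such charts can be computed from latitude, solar declination and hour angle with the standard relation sin(alt) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(h). A sketch (not part of the 'Suncast' program; the class name is this example's own):

```java
public class SolarAltitude {
    // Solar altitude in degrees from latitude, solar declination and
    // hour angle (all in degrees; hour angle is 0 at solar noon):
    // sin(alt) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(h)
    static double altitudeDeg(double latDeg, double decDeg, double hourAngleDeg) {
        double lat = Math.toRadians(latDeg);
        double dec = Math.toRadians(decDeg);
        double ha  = Math.toRadians(hourAngleDeg);
        double sinAlt = Math.sin(lat) * Math.sin(dec)
                      + Math.cos(lat) * Math.cos(dec) * Math.cos(ha);
        return Math.toDegrees(Math.asin(sinAlt));
    }

    public static void main(String[] args) {
        // At latitude 50 N, solar noon at an equinox (declination 0):
        // the sun stands 90 - 50 = 40 degrees above the horizon.
        System.out.printf("%.1f%n", altitudeDeg(50.0, 0.0, 0.0));
    }
}
```

Evaluating this across the day for the 21st of each month reproduces the family of arcs drawn on a stereographic sun chart.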
The spacing between buildings is important if overshading is to be avoided during winter months, when the benefit of solar heat gain reaches its peak. On sloping sites there is a critical relationship between the angle of slope and the level of overshading. For example, if overshading is to be avoided at a latitude of 50°N, rows of houses on a 10° north-facing slope must be more than twice as far apart as on a 10° south-facing slope. Trees can obviously obstruct sunlight. However, if they are deciduous, they perform the dual function of permitting solar penetration during the winter whilst providing a degree of shading in the summer. Again, spacing between trees and buildings is critical. Passive solar design can be divided into three broad categories:
● direct gain;
● indirect gain;
● attached sunspace or conservatory.
Each of the three categories relies in a different way on the 'greenhouse effect' as a means of absorbing and retaining heat. The greenhouse effect in buildings is that process which is mimicked by global environmental warming. In buildings, the incident solar radiation is transmitted by facade glazing to the interior, where it is absorbed by the internal surfaces, causing warming. However, re-emission of heat back through the glazing is blocked by the fact that the radiation is of a much longer wavelength than the incoming radiation. This is because the re-emission is from surfaces at a much lower temperature, and the glazing reflects back such radiation to the interior.

Direct gain

Direct gain is the design technique in which one attempts to concentrate the majority of the building's glazing on the sun-facing facade. Solar radiation is admitted directly into the space concerned. Two examples 30 years apart are the author's house in Sheffield, designed in 1967, and the Hockerton Project of 1998 by Robert and Brenda Vale. The main design characteristics are:
● Apertures through which sunlight is admitted should be on the solar side of the building, within about 30° of south for the
northern hemisphere.
● Windows facing west may pose a summer overheating risk.
● Windows should be at least double glazed with low emissivity glass (Low E), as now required by the UK Building Regulations.
● The main occupied living spaces should be located on the solar side of the building.
● The floor should be of a high thermal mass to absorb the heat and provide thermal inertia, which reduces temperature fluctuations inside the building.
● As regards the benefits of thermal mass, for the normal daily cycle of heat absorption and emission, it is only about the first 100 mm of thickness which is involved in the storage process. Thickness greater than this provides marginal improvements in performance but can be useful in some longer-term storage options.
● In the case of solid floors, insulation should be beneath the slab.
● A vapour barrier should always be on the warm side of any insulation.
● Thick carpets should be avoided over the main sunlit and heat-absorbing portion of the floor if it serves as a thermal store. However, with suspended timber floors a carpet is an advantage in excluding draughts from a ventilated underfloor zone.
During the day and into the evening the warmed floor should slowly release its heat, and the time period over which it happens makes it a very suitable match to domestic circumstances, when the main demand for heat is in the early evening. As far as the glazing is concerned, the following features are recommended:
● Use of external shutters and/or internal insulating panels might be considered to reduce night-time heat loss.
● To reduce the potential of overheating in the summer, shading may be provided by designing deep eaves or external louvres. Internal blinds are the most common technique but have the disadvantage of absorbing radiant heat, thus adding to the internal temperature.
● Heat reflecting or absorbing glass may be used to limit overheating. The downside is that it also reduces heat gain at times of the year when it is beneficial.
● Light shelves can help reduce summer overheating whilst improving daylight distribution.
Direct gain is also possible through the glazing located between the building interior and an attached sunspace or conservatory; it also takes place through upper level windows of clerestory designs. In each of these cases some consideration is required concerning the nature and position of the absorbing surfaces. In the UK climate and latitude, as a general rule of thumb, room depth should not be more than two and a half times the window head height, and the glazing area should be between about 25 and 35 per cent of the floor area.

Indirect gain

In this form of design a heat absorbing element is inserted between the incident solar radiation and the space to be heated; thus the heat is transferred in an indirect way. This often consists of a wall placed behind glazing facing towards the sun, and this thermal storage wall controls the flow of heat into the building. The main elements:
● A high thermal mass element is positioned between sun and internal spaces; the heat absorbed slowly conducts across the wall and is liberated to the interior some time later.
● Materials and thickness of the wall are chosen to modify the heat flow. In homes the flow can be delayed so that it arrives in the evening, matched to occupancy periods. Typical thicknesses of the thermal wall are 20–30 cm.
● Glazing on the outer side of the thermal wall is used to provide some insulation against heat loss and help retain the solar gain by making use of the greenhouse effect.
● The area of the thermal storage wall element should be about 15–20 per cent of the floor area of the space into which it emits heat.
● In order to derive more immediate heat benefit, air can be circulated from the building through the air gap between wall and glazing and back into the room. In this modified form this element is usually referred to as a Trombe wall.
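The rules of thumb quoted in this section (direct-gain glazing at about 25-35 per cent of floor area, room depth up to two and a half times the window head height, thermal storage wall at about 15-20 per cent of floor area) lend themselves to quick design checks. A sketch; the room dimensions in main are assumed for illustration:

```java
public class PassiveSolarSizing {
    // Direct gain: glazing area should be about 25-35% of floor area.
    static boolean glazingRatioOk(double glazingM2, double floorM2) {
        double r = glazingM2 / floorM2;
        return r >= 0.25 && r <= 0.35;
    }

    // Direct gain: room depth no more than 2.5 x window head height.
    static boolean roomDepthOk(double depthM, double windowHeadM) {
        return depthM <= 2.5 * windowHeadM;
    }

    // Indirect gain: thermal wall area about 15-20% of the heated floor area.
    static boolean thermalWallOk(double wallM2, double floorM2) {
        double r = wallM2 / floorM2;
        return r >= 0.15 && r <= 0.20;
    }

    public static void main(String[] args) {
        // Assumed room: 20 m2 floor, 6 m2 glazing, 5 m deep,
        // 2.1 m window head, 3.5 m2 thermal storage wall.
        System.out.println(glazingRatioOk(6.0, 20.0));
        System.out.println(roomDepthOk(5.0, 2.1));
        System.out.println(thermalWallOk(3.5, 20.0));
    }
}
```

Such checks are no substitute for the sun-chart and shading analysis described earlier; they only flag proportions that fall outside the quoted ranges.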
Heat-reflecting blinds should be inserted between the glazing and the thermal wall to limit heat build-up in summer. In countries which receive inconsistent levels of solar radiation throughout the day because of climatic factors (such as the UK), the option to circulate air is likely to be of greater benefit than awaiting its arrival after passage through the thermal storage wall. At times of excess heat gain the system can provide alternative benefits, with the air circulation vented directly to the exterior to carry away its heat, at the same time drawing outside air into the building from cooler external spaces.
Indirect gain options are often viewed as the least aesthetically pleasing of the passive solar options, partly because of the restrictions on the position of, and view out from, the remaining windows, and partly as a result of the implied dark surface finishes of the absorbing surfaces. As a result, this category of the three prime solar design technologies is not as widely used as its efficiency and effectiveness would suggest.
Attached sunspace/conservatory
This has become a popular feature both in new housing and as an addition to existing homes. It can function as an extension of living space, a solar heat store, a preheater for ventilation air or simply an adjunct greenhouse for plants. On balance, conservatories are considered a net contributor to global warming, since they are often heated. Ideally the sunspace should be capable of being isolated from the main building to reduce heat loss in winter and excessive gain in summer. The area of glazing in the sunspace should be 20–30 per cent of the area of the room to which it is attached. The most adventurous sunspace so far encountered is in the Hockerton housing development, which will feature later. Ideally the summer heat gain should be used to charge a seasonal thermal storage element to provide background warmth in winter. At the very least, air flow paths between the conservatory and the main building should
be carefully controlled.
Active solar thermal systems
A distinction must be drawn between passive means of utilising the thermal heat of the sun, discussed earlier, and those of a more 'active' nature. Active systems take solar gain a step further than passive solar: they convert direct solar radiation into another form of energy. Solar collectors preheat water using a closed-circuit calorifier. The emergence of Legionella has highlighted the need to store hot water at a temperature above 60°C, which means that for most of the year in temperate climes active solar heating must be supplemented by some other form of heating. Active systems are able to deliver high-quality energy. However, a penalty is incurred, since energy is required to control and operate the system; this is known as the 'parasitic energy requirement'. A further distinction is the difference between systems using the thermal heat of the sun and systems, such as photovoltaic cells, which convert solar energy directly into electrical power.
For solar energy to realise its full potential it needs to be installed on a district basis and coupled with seasonal storage. One of the largest projects is at Friedrichshafen. The heat from 5600 m2 of solar collectors on the roofs of eight housing blocks containing 570 apartments is transported to a central heating unit or substation, and is then distributed to the apartments as required. The heated living area amounts to 39,500 m2. Surplus summer heat is directed to the seasonal heat store which, in this case, is of the hot water variety, capable of storing 12,000 m3. The scale of this storage facility is indicated by Figure 5.9. The heat delivery of the system amounts to 1915 MWh/year and the solar fraction is 47 per cent. The month-by-month ratio between solar and fossil-based energy indicates that from April to November inclusive, solar energy accounts for almost total demand, being principally domestic hot water. In places with high average temperatures and generous sunlight, active solar has
considerable potential not just for heating water but also for electricity generation. This has particular relevance to the less and least developed countries.
Architecture in a Changing Environment
Low-energy techniques in housing design
It is evident that in the industrialised countries the best opportunity for rescue lies in the built environment, since buildings, whether in use or under construction, are the largest single indirect source of the carbon emissions caused by the burning of fossil fuels, accounting for 50 per cent of all emissions.
Foreign-Language Source Material and Translation
Appendix C: Translated Foreign Material
Article Source: Business & Commercial Aviation, Nov 20, 2000. 5-87-88
Interactive Electronic Technical Manuals
Electronic publications can increase the efficiency of your digital aircraft and analog technicians.
Benoff, Dave
Computerized technical manuals are silently revolutionizing the aircraft maintenance industry by helping the technician isolate problems quickly, and in the process reducing downtime and costs by more than 10 percent. These electronic publications can reduce the numerous volumes of maintenance manuals, microfiche and work cards that are used to maintain engines, airframes, avionics and their associated components. "As compared with the paper manuals, electronic publications give us greater detail and reduced research times," said Chuck Fredrickson, general manager of Mercury Air Center in Fort Wayne, Ind.
With all the advances in computer hardware and software technologies, such as high-quality digital multimedia, hypertext and the capability to store and transmit digital multimedia via CD-ROMs and networks, technical publication companies have found an effective, cost-efficient method to disseminate data to technicians. The solution for many operators and OEMs is to take advantage of today's technology in the form of Electronic Technical Manuals (ETM) or Interactive Electronic Technical Manuals (IETM). An ETM is any technical manual prepared in digital format that can be displayed using any electronic hardware media. The difference between the types of ETM/IETMs is the embedded functionality and implementation of the data. "The only drawback we had to using ETMs was getting enough computers to meet our technicians' demand," said Walter Berchtold, vice president of maintenance at Jet Aviation's West Palm Beach, Fla., facility.
A growing concern is the cost to print paper publications. In an effort to reduce costs, some aircraft manufacturers are offering incentives for owners to switch from paper to electronic publications.
With an average printing cost of around 10 cents per page, a typical volume of a paper technical manual can cost the manufacturer over $800 for each copy. When a publication is produced electronically, average production costs for a complete set of aircraft manuals are approximately $20 per copy. It is not hard to see the cost advantages of electronic publications.
Another advantage of ETMs is the ease of updating information. With a paper copy, the manufacturer has to reprint the revised pages and mail copies to all the owners. When updates are necessary for an electronic manual, changes can either be e-mailed to the owners or downloaded from the manufacturer's Web site.
So why haven't more flight departments converted their publications to ETM/IETMs? The answer lies in convincing technicians that electronic publications can increase their efficiency. "We had an initial learning curve when the technicians switched over, but now that they are familiar with the software they never want to go back to paper," said Fredrickson. A large majority of corporate technicians also said that while they like the concept of having a tool that aids the troubleshooting process, they are fearful of giving up all of their marked-up paper manuals.
In 1987, a human factors study was conducted by the U.S. government to compare technician troubleshooting effectiveness between paper and electronic methodology; it included expert troubleshooting procedures with guidance through the events. Results of the project indicated that technicians using electronic media took less than half the time to complete their tasks compared with those using the paper method, and technicians using the electronic method accomplished 65 percent more in that reduced time. The report also noted that new technicians using the electronic technical manuals were 12 percent more efficient than the older, more experienced technicians.
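The per-copy figures quoted above imply a simple arithmetic comparison. The short sketch below works through it; only the $0.10 per page, $800 and $20 figures come from the text, while the fleet size of 250 copies is an invented illustration.

```python
# Per-copy costs quoted in the article.
PAPER_COST_PER_PAGE = 0.10       # dollars per printed page
PAPER_COST_PER_COPY = 800.00     # dollars, full paper manual set
ELECTRONIC_COST_PER_COPY = 20.00  # dollars, full electronic set

# Implied page count of a full paper manual set (roughly 8,000 pages).
implied_pages = PAPER_COST_PER_COPY / PAPER_COST_PER_PAGE
print(implied_pages)

# Hypothetical fleet of 250 operators receiving one copy each.
copies = 250
paper_total = copies * PAPER_COST_PER_COPY
electronic_total = copies * ELECTRONIC_COST_PER_COPY
print(paper_total - electronic_total)  # 195000.0 saved, before revision mailing costs
```

The saving grows further with every revision cycle, since reprinting and mailing paper updates recurs while electronic updates are distributed at near-zero marginal cost.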
(Novices using paper took 15 percent longer than the experts.) It is interesting that 90 percent of the technicians who used the electronic manuals said they preferred them to the paper versions. This proved to the industry that, with proper training, the older technicians could easily transition from paper to electronic media.
Electronic publications are not a new concept, although how they are applied today is. "Research over the last 20 years has provided a solid foundation for today's IETM implementation," said Joseph Fuller of the U.S. Naval Surface Warfare Center. "IETMs such as those for the Apache, Comanche, F-22, JSTAR and V-22 have progressed from concept to military and commercial implementation."
In the late 1970s, the U.S. military investigated the feasibility of converting existing paper and microfilm. The Navy Technical Information Presentation System (NTIPS) and the Air Force Computer-based Maintenance Aid System (CMAS) were implemented with significant cost savings. The report stated that the transition to electronic publications resulted in reductions in corrective maintenance time, fewer false removals of good components, more accurate and complete maintenance data collection reports, reduced training requirements and reduced system downtime.
The problem the military encountered was that ETMs were created at multiple levels of complexity with little to no standardization. Options for publications range from simple page-turning programs to full-functioning automated databases. This resulted in the classification of ETMs so that the best type of electronic publication could be selected for the proper application.
Choosing a Level
With all of the OEM and second- and third-party electronic publications that are available, it is important that you choose the application level that is appropriate for your operation. John J.
Miller, BAE Systems' manager of electronic publications, told B/CA that "When choosing the level of an ETM/IETM, things like complexity of the aircraft and its systems, ease of use, currency of data and commonality of data should be the deciding factors; and, of course, price. If operational and support costs are reduced when you purchase a full-functioning IETM, then you should purchase the better system." Miller is an expert on the production, sustainment and emerging technologies associated with electronic publications, and was previously the manager of publications for Boeing in Philadelphia.
Electronic publications are classified in one of five categories. A Class 1 publication is a basic electronic "page turner" that allows you to view the maintenance manual as it was printed. With a Class 2 publication, all the original text of the manual is viewed as one continuous page with no page breaks. In Class 3, 4 and 5 publications the maintenance manual is viewed on a computer in a frame-based environment, with increasing options as the class changes. (See sidebar.)
Choosing the appropriate ETM for your operation is typically limited to whatever is being offered on the market, but human factors reports state that demand has increased since 1991 and, therefore, more options are expected to follow.
ETM/IETM Providers
Companies that create ETM/IETMs are classified as either OEM or second-party providers. Class 1, 3 and 4 ETM/IETMs are the most commonly used electronic publications for business and commercial operators, and costs can range anywhere from $100 to $3,000 for each ETM/IETM. The following are just a few examples of ETM/IETMs that are available on the market. Dassault Falcon Jet offers operators of the Falcon 50/50EX, 900/900EX and 2000 a Class 4 IETM called the Falcon Integrated Electronic Library by Dassault (FIELD).
Produced in conjunction with Sogitec Industries in Suresnes Cedex, France, the electronic publication contains service documentation, basic wiring, recommended maintenance and TBO schedules, the maintenance manual, tools manual, service bulletins, maintenance and repair manual, and avionics manual. The FIELD software allows the user to view the procedures and hot-link directly to the illustrated parts catalog. The software also enables the user to generate discrepancy forms, quotation sheets, annotations in the manual and specific preferences for each user.
BAE's Miller said most of the IETM presentation systems have a feature called "Technical Notes." If a user of the electronic publication notices a discrepancy or needs to annotate the manual for future troubleshooting, the user can add a Tech Note (an electronic mark-up) to the step or procedure and save it to the base document. The next time that user or another user is in the procedure, clicking on the tech note icon launches a pop-up screen displaying the previous technician's comments. The same electronic transfer of tech notes can be sent to other devices by using either a docking station or a network server. In addition, systems can also use "personal notes," similar to technical notes, that are assigned ID codes so that only the authoring technician can access them.
Requirements for the FIELD software include a minimum of a 16X CD-ROM drive, a Pentium II 200 MHz computer, Windows 95, Internet Explorer 4 SP1 and Database Access V3.5 or higher.
Raytheon offers owners of Beech and Hawker aircraft a Class 4 IETM called Raytheon Electronic Publication Systems (REPS). The REPS software links the frame-based procedures with the parts catalog using a single CD-ROM. Raytheon Aircraft Technical Publications said other in-production Raytheon aircraft manual sets will be converted to the REPS format, with the goal of having all of them available by 2001. In addition, Raytheon offers select Component Maintenance Manuals (CMM).
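The "Tech Note" and "personal note" behaviour described above can be sketched as a minimal data model: notes keyed to a procedure step, with personal notes visible only to their author. The class and field names below are our own assumptions for illustration, not the actual FIELD or REPS implementation.

```python
# Minimal sketch of IETM "Tech Notes" and "personal notes" as described above.
# A note is attached to a procedure step; tech notes are shared, while
# personal notes are returned only to the technician who wrote them.

from dataclasses import dataclass

@dataclass
class Note:
    step_id: str            # procedure step the note is attached to
    author_id: str          # authoring technician's ID code
    text: str
    personal: bool = False  # personal notes are private to their author

class NoteStore:
    def __init__(self) -> None:
        self._notes: list[Note] = []

    def add(self, note: Note) -> None:
        self._notes.append(note)

    def for_step(self, step_id: str, viewer_id: str) -> list[Note]:
        """Tech notes are visible to everyone; personal notes only to their author."""
        return [n for n in self._notes
                if n.step_id == step_id
                and (not n.personal or n.author_id == viewer_id)]

store = NoteStore()
store.add(Note("32-41-00/step3", "tech_a", "Check connector pins first"))
store.add(Note("32-41-00/step3", "tech_a", "My own shortcut", personal=True))
print(len(store.for_step("32-41-00/step3", "tech_b")))  # 1: the personal note is hidden
```

In a real system the store would live in the base document or on a network server, so that a docking station or LAN sync carries the notes between devices, as the article describes.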
The Class 1 ETM is a stand-alone "page-turner" electronic manual that utilizes the PDF format of Adobe Acrobat. Other manufacturers, including Bombardier, Cessna and Gulfstream, offer operators similar online and PDF documentation using a customer-accessed Web account.
Boeing is one manufacturer that has developed an onboard Class 5 IETM. Called the Computerized Fault Reporting System (CFRS), it has replaced the F-15 U.S. Air Force Fault Reporting Manuals. Technologies that are currently being applied to Boeing's military system are expected to eventually become part of the corporate environment. The CFRS system determines reportable faults by analyzing information entered during a comprehensive aircrew debrief along with electronically recovered maintenance data from the Data Transfer Module (DTM). After the debrief, technicians can review aircraft faults and schedule maintenance work to be performed. The maintenance task is assigned a Job Control Number (JCN) and is forwarded electronically to the correct work center or shop. Appropriate information is provided to the Air Force's Core Automated Maintenance System (CAMS). When a fault is reported by pilot debrief, certain aircraft systems have the fault isolation procedural data on a Portable Maintenance Aid (PMA). The JCN is selected on a hardened laptop with a wireless Local Area Network (LAN) connection to the CFRS LAN infrastructure. The Digital Wiring Data System (DWDS) displays aircraft wiring diagrams to the maintenance technician for wiring fault isolation. On completion of maintenance, the data collected is provided to the Air Force, Boeing and vendors for system analysis.
Third-party IETM developers such as BAE Systems and Dayton T. Brown offer OEMs the ability to subcontract out the development of Class 1 through 5 ETM/IETMs. For example, Advantext, Inc. offers PDF and IPDF Class 1 ETMs for manufacturers such as Piper and Bell Helicopters.
Technical publications that are available include maintenance manuals, parts catalogs, service bulletins, wiring diagrams, service letters and interactive parts-ordering forms. The difference between the PDF and IPDF versions is that the IPDF version has the ability to search for text and includes hyperlinks. A Class 1 ETM, when printed, is an exact reproduction of the OEM manuals, including any misspellings or errors. Minimum requirements for the Advantext technical publications are a 486 processor, 16 MB RAM with 14 MB of free hard disk space and a 4X CD-ROM or better.
Aircraft Technical Publishers (ATP) offers Class 1, 2 and 3 ETM/IETMs for the Beechjet 400/400A; King Air 300/350, 200 and 90; Learjet 23/24/25/28/29/35/36/55; Socata TB9/10/20/21 and TBM 700A; Sabreliner 265-65, -70 and -80; and Beech 1900. The libraries can include maintenance manuals, illustrated parts bulletins, wiring manuals, Airworthiness Directives, Service Bulletins, component maintenance manuals and structural maintenance manuals. System minimum requirements are a Pentium 133 MHz, Windows 95 with 16 MB RAM, 25 MB free hard disk space and a 4X CD-ROM or better.
Additional providers such as Galaxy Scientific are providing ETM/IETMs to the FAA. This Class 2, 3 and 4 publication browser is used to store, display and edit documentation for the Human Factors Section of the administration. "Clearly IETMs have moved from research to reality," said Fuller, and the future looks to hold more promise.
The Future of Tech Pubs
The use of ETM/IETMs on laptop and desktop computers has led research and development corporations to investigate the human interface options to the computer. Elements that affect how a technician can interface with a computer are the work environment, economics and ease of use.
Organizations such as the Office of Naval Research have focused their efforts on the following needs of technicians:
-- Adaptability to the environment.
-- Ease of use.
-- Improved presentation of complex system relationships.
-- Maximum reuse and distribution of engineering data.
-- Intelligent data access.
With these factors in mind, exploratory development has begun in the areas of computer vision, augmented reality displays and speech recognition. Computer vision can be created using visual feedback from a head-mounted camera. The camera identifies the relative position and orientation of an object in an observed scene, and this is used to correlate the object with a three-dimensional model. In order for a computer vision scenario to work, engineering data has to be provided through visually compatible software. When systems such as Sogitec's View Tech electronic publication browser and Dassault Systemes SA's Enovia are combined, a virtual 3D model is generated. The digital mockup allows the engineering information to directly update the technical publication information. If a system such as CATIA could be integrated into a Video Reference System (VRS), then it could be possible for a technician to point the camera at an aircraft component, have the digital model identify the component, and have the IETM automatically display the appropriate information. This example of artificial intelligence is already under development at companies like Boeing and Dassault.
An augmented reality display is a concept where visual cues are presented to users on a head-mounted, see-through display system. The cues are presented to the technician based on the identification of components on a 3D model and correlation with the observed scene. The cues are then presented as stereoscopic images projected onto the object in the observed scene. In addition, a "Private Eye" system could provide a miniature display of the maintenance procedure from a palm-size computer.
Limited success has currently been seen in similar systems for the disabled. The user of a Private Eye system can look at the object selected and navigate without ever having to touch the computer. Drawbacks of this type of system are mental and eye fatigue, and spatial disorientation.
Of all the technologies, speech recognition has developed into an almost usable and effective system. The progression through maintenance procedures is driven by speaker-independent recognition. A state engine controls navigation, and launches audio responses and visual cues to the user. Voice recognition software is available, although set-up and use have not been extremely successful.
Looking at other industries, industrial manufacturing has already started using "Palm Pilot" personal digital assistants (PDAs) to aid technicians in troubleshooting. These devices allow the technician to have the complete publication beside them when they are in tight spaces. "It would be nice to take the electronic publications into the aircraft, so we are not constantly going back to the work station to print out additional information," said Jet Aviation's Berchtold.
With all the advantages that an ETM/IETM offers, it should be noted that electronic publications are not the right solution all of the time, just as CBT is not the right solution for training in every situation. Only you can determine if electronic publications meet your needs, and most technical publication providers offer demo copies for your review. B/CA
Illustration
Photo: Photograph: BAE Systems' Christine Gill prepares a maintenance manual for SGML conversion (BAE Systems); Photograph: Galaxy Scientific provides the FAA's human factors group with online IETM support; Photograph: Raytheon's Class 4 IETM "REPS" allows a user to see text and diagrams simultaneously, with hotlinks to illustrated parts catalogs.
Translated Text
Article source: Business & Commercial Aviation, Nov 20, 2000, 5-87-88. Interactive Electronic Technical Manuals: electronic publications can increase the efficiency of your digital aircraft and analog technicians.
5. Translation of Foreign Literature (original attached): Industrial Cluster, Regional Brand
Translation of Foreign Literature (original attached)
Translation I: The Competitive Advantage of an Industrial Cluster – The Case of Dalian Software Park in China
Weilin Zhao, Chihiro Watanabe, Charla Griffy-Brown [J]. Marketing Science, 2009(2): 123-125.
Abstract: With the original aim of promoting industrial development, this paper explores the competitive advantage of software parks in China.
Industrial clusters are deeply embedded in local institutional systems and therefore possess distinctive competitive advantages. Based on Porter's "diamond" model and the results of a SWOT analysis, the case of Dalian Software Park in China is analysed qualitatively. An industrial cluster consists of a set of companies agglomerated in a particular geographic location; rooted in a local institutional system of local government, industry and academia, it draws on substantial resources and thereby gains competitive advantage for industrial economic development. To successfully steer the transition of China's economic paradigm from mass production to new-product development, it is essential to continuously strengthen the competitive advantage of industrial clusters and to promote industrial and regional economic development.
Keywords: competitive advantage; industrial cluster; local institutional system; Dalian Software Park; China; science park; innovation; regional development
Industrial clusters
The industrial cluster is a frontier concept in economic development that Porter [1] did much to popularise. As a recognised expert in global economic strategy, he pointed out the role that industrial clusters play in promoting regional economic development. He wrote that the concept of clusters, "or geographic concentrations of interconnected companies, suppliers and institutions in a particular field, has become a new element in how companies and governments think about and assess local competitive advantage and set public policy". However, he never gave a precise definition of the industrial cluster.
More recently, progress has been made in the literature examined by Doeringer and Terkla [2] and Levy [3], which reviews industrial clusters and identifies them as "geographic concentrations of industries that gain advantages".
"Geographic concentration" defines a key and distinctive fundamental property of industrial clusters. A cluster is formed by the agglomeration of many companies in a specific region; they usually share common markets, common suppliers, trading partners, educational institutions and intangibles such as knowledge and information, and likewise they face similar opportunities and threats. Around the world, industrial clusters have followed many different development models. For example, Silicon Valley in California and Route 128 in Massachusetts are both well-known industrial clusters. The former is famous for microelectronics, biotechnology and its venture capital market, while the latter is renowned worldwide for software, computers and communications hardware [4].
Original Foreign-Language Text and Accompanying Translation
Business process re-engineering – saviour or just another fad? One UK health care perspective
Anjali Patwardhan, Health Service Management Centre, Birmingham, UK, and Dhruv Patwardhan, University of Newcastle, Newcastle upon Tyne, UK
Abstract
Purpose – Pressure to change is politically driven owing to escalating healthcare costs and an emphasis on efficiency gains, value for money and improved performance proof in terms of productivity, and recently, to some extent, by demands from less satisfied patients and stakeholders. Against a background of newly emerging expensive techniques and drugs, there is an increasing consumer expectation of quality services. At the same time, health system managers and practitioners are finding it difficult to cope with demand and quality expectations. Clinicians are frustrated because they are not recognised for their contribution. Managers are frustrated because meaningful dialogue with clinicians is lacking, which has intensified the need for change to a more efficient system that satisfies all arguments about cost-effectiveness and sustainable quality services. Various strategies, originally developed by quality management "gurus" for the engineering industries, have been applied to the health industries with variable success, which largely depends on the type of health care system to which they are applied.
Design/methodology/approach – Business process re-engineering is examined as a quality management tool using past and recent publications.
Findings – The paper finds that applying business process re-engineering in the right circumstances and selected settings is critical to its success in quality improvement. It is certainly "not for everybody".
Originality/value – The paper provides a critical appraisal of business process re-engineering experiences in UK healthcare. Lessons learned regarding selecting organisations and agreeing realistic expectations are addressed. Business process re-engineering has been evaluated and reviewed since 1987 in US managed health care, with
no clear lessons learned, possibly because unit selection and simultaneous comparison between two units performing at virtually opposite ends has never been done before. Two UK pilot studies, however, add useful insights.
Keywords: Business process re-engineering, Total quality management, Continuous improvement, Medical management, Health services, United Kingdom
Paper type: Viewpoint
International Journal of Health Care Quality Assurance, Vol. 21 No. 3, 2008, pp. 289-296. Emerald Group Publishing Limited, 0952-6862. DOI 10.1108/09526860810868229. Received 29 November 2006; revised 10 February 2007; accepted 25 May 2007.
History of quality management in health care
To know how health care organisations became interested in industrial quality development tools, and how business process re-engineering (BPR) emerged as an option, we have to go back to 1987, when the Quality Improvement in Health Care National Demonstration Project (NDP) was launched as an experiment (Godfrey, n.d.). A total of 21 health-care organisations participated and promised to support this eight-month study. The aim was to look at the applicability of industrial quality-improvement methods to health care. Support included free consulting, materials, access to training courses and reviews. The funding companies included many of the USA's leading organisations, such as Corning, Ford, Hewlett-Packard, IBM and Xerox. At the final stages of the project evaluations it was clear that, of the 21 organisations, 15 health care organisations had made significant progress – mainly financial and patient satisfaction gains, target and project time keeping, and investment in research and development. The NDP was extended for three years, and eventually evolved into the Institute for Healthcare Improvement, a not-for-profit organisation dedicated to providing health-care quality management. Later, BPR emerged as an alternative for managers in organisations
frustrated with slow improvements not encompassing the whole organisation when experiencing total quality management (TQM). The TQM key target was to convert an organisation's structure, culture and services to patient/consumer-focused rather than organisation-focused goals (Harvey and Millett, 1999).
Why change?
Traditionally, health care systems were mostly "governed" by clinicians (Shutt, 2003), because patient outcomes, that is, recovery from illness, were the sole responsibility of all professionals directly involved in patient care. Complexity and variance in health care studies reveal that outcome has many determinants, i.e. pharmacy, pathology, technical support and information technology. It was also realised that cost containment and good quality care needed teamwork, communication, time management, etc. (Shutt, 2003). Sir Roy Griffiths, in the early 1980s, developed hospital general management and the greater involvement of clinicians in resource management initiatives (DHSS, 1984). Today, apart from political motives, change is driven by escalating health care costs, increased demands for quality care, value-for-money services, patient expectation and third-party payers in managed health care systems. These have intensified the need for change to more efficient health care systems.
What is BPR?
BPR, also known as business transformation and process change management, was introduced to the business world by Frederick Taylor in his article The Principles of Scientific Management in the 1900s (Wikipedia, 2006). In the 1990s, Hammer and Champy (1993) introduced Reengineering the Corporation, which gave birth to BPR. BPR is "the analysis and design of workflows and processes within and between organizations" (Davenport and Short, 1990, p. 11). Teng et al. (1994), on the other hand, defined BPR as the critical analysis and radical redesign of existing business processes to achieve breakthrough improvements in performance measures. Hammer and Champy (1993), similarly, defined BPR as fundamentally rethinking and radically redesigning business
processes to achieve dramatic improvements in critical contemporary performance measures such as cost, quality, service and speed. From a health care viewpoint, BPR is a management approach that rethinks present practices and processes in business and its interactions. It attempts to improve underlying process efficiency by applying fundamental and radical approaches, either modifying or eliminating non-value-adding activities and redeveloping the process/structure/culture (McNulty and Ferlie, 2000). However, in the health sector, a wide variety of patient groups make the health care service a complex project to redesign along these lines, thereby rendering changes context- and time-sensitive.
BPR key features
Health care's BPR approach means starting with a clean slate and rethinking services using a patient-focused approach. With the benefit of hindsight, BPR identifies delays caused by unnecessary steps or potential errors that are built into processes. It is presumed that redesigning processes by removing these errors dramatically improves care quality. The BPR approach, therefore, raises expectations of dramatic results.
Consequently, high returns on investment are anticipated. The process, planned strategically, is explained in Taylor's BPR framework (Wikipedia, 2006):
. defining BPR's purpose and goal;
. identifying requirements that meet clients' needs;
. defining project scope, including appropriate activities such as process mapping;
. assessing the environment using, for example, force-field analyses;
. re-engineering business processes and activities;
. implementing redesigned processes; and
. monitoring redesign success and failure.
BPR vs TQM
Comparing BPR with other popular quality management methods helps us to appreciate and highlight key features in a health care context (Harvey and Millett, 1999). TQM, or continuous quality improvement (CQI), refers to programmes and initiatives that emphasise incremental improvement in work processes and outputs over an open-ended time period. In contrast, BPR refers to discrete initiatives intended to radically redesign and improve work processes within a time frame. Some people think TQM is best suited to quality improvement in health care, though it is an incremental, stepwise, slow but holistic approach. In practice, TQM and BPR are both customer-oriented, and both encourage managers and practitioners to take a customer viewpoint. Both are team approaches that involve process control. The TQM protagonists assume that existing health care practices and systems are principally right but that improvements are needed. The BPR supporters, on the other hand, assume that health care systems and practices are flawed and need replacing. Those using TQM expect and believe in stepwise increments in performance, as opposed to BPR experts, who look for dramatic results. TQM aims to improve all levels for all stakeholders and at all steps, while BPR aims at specified areas only. Standardisation and supporting documentation are TQM key points. Believing in consistent and cost-effective performance and minimising process or system defects, TQM prevents rather than corrects problems (Field and Swift, 1996). Those
that use the BPR approach, on the other hand, are flexible and assume that standardisation increases process complexity (Harvey and Millett, 1999). Nevertheless, BPR is a drastic change, leading to staff resistance. Moreover, it is a top-down approach, so management support and commitment are vital to success. Innovation, therefore, is a risky process when used for "sick organisations". The TQM incremental method, on the other hand, follows a gradual approach that is mostly bottom-up. It involves employees and is often based on Deming's principles, which direct improvements through the plan-do-study-act (PDSA) cycle. TQM, therefore, is suitable for improving quality in any organisation, although some amendments to suit the context may be needed. Application in managed health care generated different results when dissimilar processes were applied in different scenarios. Business process re-engineering, therefore, may not suit everyone, because it works better when applied to sick organisations or in fundamentally defective systems (Bashein et al., 1994). The TQM approach is about a cultural change, as it is built into practices hooked on daily routines. The BPR method is a target-oriented process that is time-sensitive, because if it is not completed as planned then success may be jeopardised. The TQM primary enabler is statistical process control, while in BPR it is information and technology (Harvey and Millett, 1999).
Advantages of applying BPR to health service quality improvement
Using BPR in the health sector was a response to frustration amongst managers in organisations who perceived that TQM's incrementalism and ability to achieve organisation-wide change had failed. King's College Hospital experience (Grimes, 2000; Harrison et al., 1992) suggests that BPR is best tried to achieve previously unachieved levels of efficiency in scenarios where other efforts or methods have been unsuccessful. The driving forces for change were aspirations to develop a more efficient system that satisfies
consumers' demands for service quality and value for money (Bowns and McNulty, 1999). At the same time, BPR makes it possible to sustain such quality without necessarily costing more, even though we know that health care costs are rising steeply. The third and most important aspiration in the King's project was to improve professionals' job satisfaction, which they felt they had always deserved. The aim was to orient health care towards, and focus on, patients rather than organisation needs. The BPR approach focuses on rethinking and redesigning processes from scratch, giving staff opportunities to revisit services in detail, thereby pointing out improvement areas. It strips all non-value-adding and unnecessary steps from the process to make services more efficient. Although it is managed top-down and dominated by managers and leaders, decision making is done at the coal-face, thereby empowering the team. The BPR approach provides a flexible work environment, culture and work practices. It can be valuable for organisations in deep difficulties and performing poorly. In such a crisis, re-engineering may be the only way organisations can survive (Harvey and Millett, 1999). Where major structural and cultural deficiencies are identified, or are obvious as a cause of poor performance, BPR is the best way to handle that scenario, as is evident from King's College Hospital experiences (Bowns and McNulty, 1999).

BPR limitations in health care quality improvement
We know that BPR is a top-down approach that staff may resist. It is cited by autonomous clinical professionals as "a brutal and inappropriate technique" (Jones, 1996, p. 4284). Implementing BPR in health care scenarios, where clinicians are key players, therefore, is not only difficult but also unsafe (McNulty and Ferlie, 2000). Thus, BPR may lead to loss of ownership and employee de-motivation, because staff are not involved in planning and change management. Generally, change processes are less well understood by employees (Jones, 1996, p. 4284): Quality would seem unlikely to
be forthcoming if re-engineering is imposed from the top down in a rigid and mechanistic fashion ... If organizational change is to be effective and sustainable, this will also require the active engagement of, and learning by, employees rather than grudging compliance with management diktat.

Quality improvement in European public services elaborated health care TQM and BPR as quality improvement tools. It was acknowledged that many business approaches to quality improvement, including TQM and re-engineering, failed to take account of health care's complexity and the nature of professionalised knowledge. The language and values used in most of these projects were alien to clinicians and so were rejected as management fads. It seems that BPR requires massive culture and structure change if it is to improve quality of the same magnitude. Radical overnight transformation may sound impressive but is unrealistic. Structural and cultural change needs time to develop, and to be accepted and absorbed at all levels, particularly in health care settings. In short, BPR is a high-cost and high-risk project.
Seventy per cent of all industries could not achieve their targets, a BPR success rate of around 30 per cent. In the health care sector, on the other hand, from the literature we reviewed, there is no success figure available. BPR carries an unrealistic scope and expectations most of the time, which may be a reason for its 70 per cent failure rate. It is top-down by nature, and its success depends on sustained management commitment and inspirational leadership, which are not easily measured and may not be available up to the threshold needed. BPR may make only a unit change at a time. To be meaningful, it needs to be followed by a CQI exercise. Once changes are brought about, BPR-based change needs CQI projects to have sustained results. It is always contested that BPR does not take account of human processes, as is evident from Jones' (1996, p. 4284) quotation: Such a perspective is seen as promoting the idea that you can design a perfect process, implement it exactly as you planned and the organizational machine will carry it out faultlessly. Setting on one side questions about the reliability of this whole process, it is evident that BPR neglects the important role of human creativity in making processes work. As we raised earlier, BPR usually innovates one process at a time rather than taking a whole-organisation approach. The process that is changed, therefore, might not have a measurable effect on overall organisational performance, especially as perceived by consumers. In other words, BPR may have a drastic effect on one specific process but none or very little on total organisational performance. A simple illustration for improving inpatient admissions shows that BPR alone cannot improve services. There will also be a need to improve day care, outpatient, primary care and emergency services.
All have an effect on an organisation's inpatient services because they are interlinked and interdependent. Moreover, BPR's effect can be difficult to assess in this context, since NHS organisations lack specific measures (Bowns and McNulty, 1999).

The extent to which BPR is applicable to health care systems
The UK BPR health care experience comes from two centrally funded pilot studies: (1) King's College Hospital, London (KCH); and (2) Leicester Royal Infirmary (LRI). The KCH project was evaluated by a Brunel team (Packwood et al., 1998; Grimes, 2000) and the LRI scheme by Sheffield and Warwick (Bowns and McNulty, 1999). Employees in these organisations shared their BPR experiences during evaluation. Consequently, both studies generated interesting and valuable findings, as they highlighted to what extent BPR could be applied to health care systems. However, the two hospitals were extremes, i.e. KCH was a "sick" unit at the time of the study, while LRI was one of the best teaching hospitals (McNulty and Ferlie, 2000) with little scope to improve. At the end of the pilot studies it was evident from reports that both hospitals could not reach expectations, especially the drastic changes and improvements anticipated at the beginning of the BPR projects. Both reduced waiting times and length of stay, along with faster diagnostic processes. King's, over and above these improvements, also made £1 million savings (Grimes, 2000), attributed to "waste reduction" by process mapping followed by removing non-value-adding activities and by increasing efficiency in the renewed system (Packwood et al., 1998). This suggests that BPR is not for everybody and that selecting units to which BPR can be applied is important to achieve the desired results. When the two trusts ran the pilot, they also continued to work on their generic and core improvement initiatives at different levels in the process, so it was difficult to attribute success to BPR alone or to assess its relative contribution to
overall improvements. One approach to identifying suitable sub-processes for applying BPR is process mapping from "door to door", which helps capture all the process components, and applying a lean approach (Jones and Mitchell, 2006, p. 23). Identifying value-added activities highlights the non-value-added ones. Each non-value-added activity can be measured and analysed to assess its impact and ways to eliminate it. Resource availability, deadlines, cost, generic skills and, above all, urgency to change help users select the right improvement tool. Also, as raised earlier, change management success is closely related to team morale, ownership and motivation. To achieve quality in health care services, therefore, two key staff groups, managers and clinicians, who come from different cultural backgrounds and are knowledgeable in different ways, need to work as a team. Understanding and cooperation are crucial if difficult tasks are to be accomplished (Shortell et al., 1998). However, BPR's failure to consider the human aspects of processes may make it difficult to integrate BPR into health care services. The BPR approach sounds impressive but unrealistic, because soft structural and cultural change needs time to develop, particularly in health care settings. We believe that BPR can help to improve health services if it is meticulously planned and applied diligently. In short, even with all BPR's limitations, it is still capable of delivering dramatic results, not least because it forces staff to think from outside the scenario or process as a whole and to work to deadlines (Bowns and McNulty, 1999).

Conclusions and recommendations
Health care is a more complex system than any manufacturing industry. As a service provider with a major human component, there are safety and efficiency issues rather than cost and efficacy, which separates health care from industry. BPR, like other single approaches to improving service quality, is likely to be unsuitable for health care (Shortell and Ferlie, 2001), which is comprised of
a number of sub-processes. It has many stakeholders at different levels, and there is wide variation in its internal customer (e.g. fellow professionals) and external customer (i.e. patients) needs. We accept that BPR can be used as a tool for improving some sub-process or sub-unit activity. An example could be what happened in the LRI, where BPR was tried as a quality improvement tool in bed management, pathology and OPD service innovation, etc., but not applied in areas where clinicians' precision was paramount or where BPR was accepted less well. In these areas, therefore, views on the methods' suitability for quality improvements were mixed. That is, TQM and BPR ideally should always be followed by CQI methods for service improvement to be sustainable and effective. In short, quality management tools designed for industry should be applied to health services with proper selection, caution and care.

References
Bashein, B.J., Markus, M.L. and Riley, P. (1994), "Preconditions for BPR success: and how to prevent failures", Information Systems Management, Vol. 11 No. 2, pp. 7-13.
Bowns, I.R. and McNulty, T. (1999), Re-engineering Leicester Royal Infirmary – Executive Summary, School of Health and Related Research, University of Sheffield, Sheffield.
Davenport, T. and Short, J. (1990), "The new industrial engineering: information technology and BPR", Sloan Management Review, Summer, pp. 11-27.
DHSS (1984), Health Services Management: Implementation of the NHS Management Inquiry, DHSS Circular HC(84)13, DHSS, London.
Field, S.W. and Swift, K.G. (1996), Effecting a Quality Change: An Engineering Approach, Arnold, London.
Godfrey, B. (n.d.), "Quality health care", Quality Digest, available at: /sep96/health.htm (accessed 15 October 2006).
Grimes, K. (2000), Changing the Change Team, King's College Hospital, London.
Hammer, M. and Champy, J. (1993), Reengineering the Corporation: A Manifesto for Business Revolution, Harper Business Books, New York, NY.
Harrison, S., Hunter, D., Marnoch, G. and Pollitt, C. (1992), Just Managing: Power and Culture in the NHS, Macmillan, Basingstoke.
Harvey, S. and Millett, B. (1999), "OD, TQM and BPR: a comparative approach", Australian Journal of Management and Organizational Behavior, Vol. 2 No. 3, pp. 30-42.
Jones, D. and Mitchell, A. (2006), Lean Thinking for the NHS: A Report Commissioned by the NHS Confederation, pamphlet RA395.G7, NHS Confederation, London.
Jones, M. (1996), "Re-engineering", in Warner, M. (Ed.), International Encyclopedia of Business and Management, Routledge, London.
McNulty, T. and Ferlie, E. (2000), Reengineering Health Care: The Complexities of Organisational Transformation, Oxford University Press, Oxford.
Packwood, T., Pollitt, C. and Roberts, S. (1998), "Good medicine? A case study of business process reengineering in a hospital", Policy and Politics, Vol. 26 No. 4, pp. 401-15.
Shortell, S. and Ferlie, E. (2001), "Improving quality of healthcare in the United Kingdom and the United States: a framework for change", The Milbank Quarterly, Vol. 79 No. 2, May, pp. 281-315.
Shortell, S.M., Waters, T.M. and Clarke, K.W.B. (1998), "Physicians as double agents: maintaining trust in an era of multiple accountabilities", Journal of the American Medical Association, Vol. 280 No. 12, pp. 1102-8.
Shutt, J.A. (2003), "Balancing the health care scorecard", Managed Care, September, pp. 42-6.
Teng, J.T.C., Grover, V. and Fiedler, K. (1994), "Business process reengineering: charting a strategic path for the information age", California Management Review, Vol. 36 No. 3, pp. 9-31.
Wikipedia (2006), "Frederick Winslow Taylor", available at: /wiki/Frederick_Winslow_Taylor (accessed 2 December 2006).

Further reading
Davies, H.T.O. (2000), "Organizational culture and quality of health care", Quality in Health Care, Vol. 9 No. 2, pp. 111-9.
Malhotra, Y. (1998), "Business process redesign: an overview", IEEE Engineering Management Review, Vol. 26 No. 3, pp. 214-25.
Pollitt, C. (1996), "Business approaches to quality improvement: why are they hard for the NHS to swallow?", Quality in Health Care, Vol. 5 No. 2, pp. 104-10.
Raymond, L., Bergeron, F. and Rivard, S. (1980), "Determinants of business process reengineering success in small and large enterprises: an empirical study in the Canadian context", Journal of Small Business Management, Vol. 36, pp. 72-85.

Corresponding author
Anjali Patwardhan can be contacted at: doctoranjali@
Foreign Literature Translation: Original Text and Translation
Foreign Literature Original Text

Analysis of Continuous Prestressed Concrete Beams
Chris Burgoyne
March 26, 2005

1. Introduction
This conference is devoted to the development of structural analysis rather than the strength of materials, but the effective use of prestressed concrete relies on an appropriate combination of structural analysis techniques with knowledge of the material behaviour. Design of prestressed concrete structures is usually left to specialists; the unwary will either make mistakes or spend inordinate time trying to extract a solution from the various equations.
There are a number of fundamental differences between the behaviour of prestressed concrete and that of other materials. Structures are not unstressed when unloaded; the design space of feasible solutions is totally bounded; in hyperstatic structures, various states of self-stress can be induced by altering the cable profile; and all of these factors are influenced by creep and thermal effects. How were these problems recognised and how have they been tackled?
Ever since the development of reinforced concrete by Hennebique at the end of the 19th century (Cusack 1984), it was recognised that steel and concrete could be more effectively combined if the steel was pretensioned, putting the concrete into compression. Cracking could be reduced, if not prevented altogether, which would increase stiffness and improve durability. Early attempts all failed because the initial prestress soon vanished, leaving the structure to behave as though it was reinforced; good descriptions of these attempts are given by Leonhardt (1964) and Abeles (1964).
It was Freyssinet's observations of the sagging of the shallow arches on three bridges that he had just completed in 1927 over the River Allier near Vichy which led directly to prestressed concrete (Freyssinet 1956). Only the bridge at Boutiron survived WWII (Fig. 1).
Hitherto, it had been assumed that concrete had a Young's modulus which remained fixed, but he recognised that the deferred strains due to creep explained why the prestress had been lost in the early trials. Freyssinet (Fig. 2) also correctly reasoned that high tensile steel had to be used, so that some prestress would remain after the creep had occurred, and also that high quality concrete should be used, since this minimised the total amount of creep. The history of Freyssinet's early prestressed concrete work is written elsewhere.
Figure 1: Boutiron Bridge, Vichy
Figure 2: Eugen Freyssinet
At about the same time work was underway on creep at the BRE laboratory in England ((Glanville 1930) and (1933)). It is debatable which man should be given credit for the discovery of creep, but Freyssinet clearly gets the credit for successfully using the knowledge to prestress concrete.
There are still problems associated with understanding how prestressed concrete works, partly because there is more than one way of thinking about it. These different philosophies are to some extent contradictory, and certainly confusing to the young engineer. This is also reflected, to a certain extent, in the various codes of practice.
Permissible stress design philosophy sees prestressed concrete as a way of avoiding cracking by eliminating tensile stresses; the objective is for sufficient compression to remain after creep losses. Untensioned reinforcement, which attracts prestress due to creep, is anathema. This philosophy derives directly from Freyssinet's logic and is primarily a working stress concept.
Ultimate strength philosophy sees prestressing as a way of utilising high tensile steel as reinforcement. High strength steels have high elastic strain capacity, which could not be utilised when used as reinforcement; if the steel is pretensioned, much of that strain capacity is taken out before bonding the steel to the concrete.
Structures designed this way are normally designed to be in compression everywhere under permanent loads, but allowed to crack under high live load. The idea derives directly from the work of Dischinger (1936) and his work on the bridge at Aue in 1939 (Schonberg and Fichter 1939), as well as that of Finsterwalder (1939). It is primarily an ultimate load concept. The idea of partial prestressing derives from these ideas.
The load-balancing philosophy, introduced by T.Y. Lin, uses prestressing to counter the effect of the permanent loads (Lin 1963). The sag of the cables causes an upward force on the beam, which counteracts the load on the beam. Clearly, only one load can be balanced, but if this is taken as the total dead weight, then under that load the beam will perceive only the net axial prestress and will have no tendency to creep up or down.
These three philosophies all have their champions, and heated debates take place between them as to which is the most fundamental.

2. Section design
From the outset it was recognised that prestressed concrete has to be checked at both the working load and the ultimate load. For steel structures, and those made from reinforced concrete, there is a fairly direct relationship between the load capacity under an allowable stress design and that at the ultimate load under an ultimate strength design. Older codes were based on permissible stresses at the working load; new codes use moment capacities at the ultimate load. Different load factors are used in the two codes, but a structure which passes one code is likely to be acceptable under the other.
For prestressed concrete, those ideas do not hold, since the structure is highly stressed even when unloaded. A small increase of load can cause some stress limits to be breached, while a large increase in load might be needed to cross other limits.
The designer has considerable freedom to vary both the working load and ultimate load capacities independently; both need to be checked.
A designer normally has to check the tensile and compressive stresses, in both the top and bottom fibre of the section, for every load case. The critical sections are normally, but not always, the mid-span and the sections over piers, but other sections may become critical when the cable profile has to be determined.
The stresses at any position are made up of three components, one of which normally has a different sign from the other two; consistency of sign convention is essential.
If P is the prestressing force and e its eccentricity, A and Z are the area of the cross-section and its elastic section modulus, while M is the applied moment, then

f_t ≤ P/A - Pe/Z + M/Z ≤ f_c

where f_t and f_c are the permissible stresses in tension and compression.
Thus, for any combination of P and M, the designer already has four inequalities to deal with.
The prestressing force differs over time, due to creep losses, and a designer is usually faced with at least three combinations of prestressing force and moment:
- the applied moment at the time the prestress is first applied, before creep losses occur;
- the maximum applied moment after creep losses; and
- the minimum applied moment after creep losses.
Figure 4: Gustave Magnel
Other combinations may be needed in more complex cases. There are at least twelve inequalities that have to be satisfied at any cross-section, but since an I-section can be defined by six variables, and two are needed to define the prestress, the problem is over-specified and it is not immediately obvious which conditions are superfluous. In the hands of inexperienced engineers, the design process can be very long-winded. However, it is possible to separate out the design of the cross-section from the design of the prestress.
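As an illustrative aside (our own sketch, not from the paper), the working-stress check for one fibre, f_t ≤ P/A - Pe/Z + M/Z ≤ f_c, can be evaluated for each load case. All numerical values below are assumed example quantities, not design values.

```python
# Hedged sketch: checking f_t <= P/A - P*e/Z + M/Z <= f_c over several
# load cases.  Symbols follow the text; all numbers are illustrative.

def fibre_stress(P, e, A, Z, M):
    """Stress at one fibre (compression positive) for the stated convention."""
    return P / A - P * e / Z + M / Z

def satisfies_limits(P, e, A, Z, moments, f_t, f_c):
    """True if every load case keeps the fibre stress within [f_t, f_c]."""
    return all(f_t <= fibre_stress(P, e, A, Z, M) <= f_c for M in moments)

# Illustrative numbers (kN, m, kPa): a 0.36 m^2 section with Z = 0.048 m^3.
P, e, A, Z = 2000.0, 0.25, 0.36, 0.048
moments = [300.0, 900.0]        # minimum and maximum applied moments, kN*m
f_t, f_c = -2000.0, 20000.0     # permissible tension (negative) and compression
print(satisfies_limits(P, e, A, Z, moments, f_t, f_c))  # -> True
```

In a full check, the same test is repeated for the other fibre's section modulus and for the reduced prestressing force after creep losses, which is how the twelve inequalities mentioned in the text arise.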
By considering pairs of stress limits on the same fibre, but for different load cases, the effects of the prestress can be eliminated, leaving expressions of the form

Z ≥ Moment range / Permissible stress range

These inequalities, which can be evaluated exhaustively with little difficulty, allow the minimum size of the cross-section to be determined. Once a suitable cross-section has been found, the prestress can be designed using a construction due to Magnel (Fig. 4). The stress limits can all be rearranged into the form

e ≤ -Z/A + (1/P)(fZ + M)

By plotting these on a diagram of eccentricity versus the reciprocal of the prestressing force, a series of bound lines will be formed. Provided the inequalities (2) are satisfied, these bound lines will always leave a zone showing all feasible combinations of P and e. The most economical design, using the minimum prestress, usually lies on the right hand side of the diagram, where the design is limited by the permissible tensile stresses.
Plotting the eccentricity on the vertical axis allows direct comparison with the cross-section, as shown in Fig. 5. Inequalities (3) make no reference to the physical dimensions of the structure, but these practical cover limits can be shown as well.
A good designer knows how changes to the design and the loadings alter the Magnel diagram. Changing both the maximum and minimum bending moments, but keeping the range the same, raises and lowers the feasible region. If the moments become more sagging, the feasible region gets lower in the beam.
In general, as spans increase, the dead load moments increase in proportion to the live load. A stage will be reached where the economic point (A on Fig. 5) moves outside the physical limits of the beam; Guyon (1951a) denoted the limiting condition as the critical span. Shorter spans will be governed by tensile stresses in the two extreme fibres, while longer spans will be governed by the limiting eccentricity and tensile stresses in the bottom fibre.
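The minimum-section inequality and Magnel's bound lines described above can be evaluated numerically. This is our own hedged illustration: the symbols follow the text, but the numbers and the pairing of stress limits with moments are assumed examples, since the exact signs depend on which fibre and load case is considered.

```python
def min_section_modulus(M_max, M_min, f_c, f_t):
    """Minimum Z: moment range divided by the permissible stress range."""
    return (M_max - M_min) / (f_c - f_t)

def magnel_bound(f, Z, A, M, P):
    """Eccentricity bound line e = -Z/A + (f*Z + M)/P from one stress limit."""
    return -Z / A + (f * Z + M) / P

# Illustrative values (kN, m, kPa); pairings of limits with moments are
# assumptions for demonstration only.
Z, A, P = 0.048, 0.36, 2000.0
f_t, f_c = -2000.0, 20000.0
print(min_section_modulus(900.0, 300.0, f_c, f_t) <= Z)  # section large enough
e_upper = magnel_bound(f_c, Z, A, 900.0, P)
e_lower = magnel_bound(f_t, Z, A, 300.0, P)
print(e_lower <= 0.25 <= e_upper)  # e = 0.25 m lies in the feasible zone here
```

Plotting e_lower and e_upper against 1/P for a range of prestressing forces reproduces the wedge-shaped feasible region of the Magnel diagram.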
However, it does not take a large increase in moment before compressive stresses will govern in the bottom fibre under maximum moment. Only when much longer spans are required, and the feasible region moves as far down as possible, does the structure become governed by compressive stresses in both fibres.

3. Continuous beams
The design of statically determinate beams is relatively straightforward; the engineer can work on the basis of the design of individual cross-sections, as outlined above. A number of complications arise when the structure is indeterminate, which means that the designer has to consider not only a critical section but also the behaviour of the beam as a whole. These are due to the interaction of a number of factors, such as creep, temperature effects and construction sequence effects. It is the development of these ideas which forms the core of this paper. The problems of continuity were addressed at a conference in London (Andrew and Witt 1951). The basic principles, and nomenclature, were already in use, but to modern eyes the concentration on hand analysis techniques was unusual, and one of the principal concerns seems to have been the difficulty of estimating losses of prestressing force.

3.1 Secondary moments
A prestressing cable in a beam causes the structure to deflect. Unlike the statically determinate beam, where this motion is unrestrained, the movement causes a redistribution of the support reactions, which in turn induces additional moments. These are often termed secondary moments, but they are not always small, or parasitic moments, but they are not always bad.
Freyssinet's bridge across the Marne at Luzancy, started in 1941 but not completed until 1946, is often thought of as a simply supported beam, but it was actually built as a two-hinged arch (Harris 1986), with support reactions adjusted by means of flat jacks and wedges which were later grouted in (Fig. 6).
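The point that secondary moments "are not always small" can be illustrated with a classic textbook case (our own example, not from the paper): a two-span continuous beam with equal spans L and a straight tendon at constant eccentricity e. Releasing the central support and applying the flexibility method recovers the closed-form result that the secondary moment at that support is 1.5Pe.

```python
def secondary_moment(P, e, L, EI):
    """Secondary moment at the central support of a two-span beam (equal
    spans L) with a straight tendon at constant eccentricity e, found by
    releasing the central support and using the flexibility method."""
    M1 = P * e                          # primary moment, constant along beam
    span = 2 * L                        # released structure: simple beam of 2L
    camber = M1 * span ** 2 / (8 * EI)  # midspan deflection under constant M1
    flex = span ** 3 / (48 * EI)        # midspan deflection per unit point load
    R = camber / flex                   # redundant central reaction magnitude
    return R * span / 4                 # midspan moment produced by R

# Illustrative values: P = 2000 kN, e = 0.25 m, L = 20 m, EI = 5e6 kN*m^2.
P, e, L, EI = 2000.0, 0.25, 20.0, 5.0e6
print(round(secondary_moment(P, e, L, EI), 6), 1.5 * P * e)  # -> 750.0 750.0
```

Here the secondary moment is 1.5 times the primary moment Pe, echoing the text's warning; real tendon profiles are curved, so the factor differs, but the mechanism is the same.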
The same principles were applied in the later and larger beams built over the same river.
Magnel built the first indeterminate beam bridge at Sclayn, in Belgium (Fig. 7), in 1946. The cables are virtually straight, but he adjusted the deck profile so that the cables were close to the soffit near mid-span. Even with straight cables the sagging secondary moments are large; about 50% of the hogging moment at the central support caused by dead and live load.
The secondary moments cannot be found until the profile is known, but the cable cannot be designed until the secondary moments are known. Guyon (1951b) introduced the concept of the concordant profile, which is a profile that causes no secondary moments; es and ep thus coincide. Any line of thrust is itself a concordant profile.
The designer is then faced with a slightly simpler problem; a cable profile has to be chosen which not only satisfies the eccentricity limits (3) but is also concordant. That in itself is not a trivial operation, but it is helped by the fact that the bending moment diagram that results from any load applied to a beam will itself be a concordant profile for a cable of constant force. Such loads are termed notional loads to distinguish them from the real loads on the structure. Superposition can be used to progressively build up a set of notional loads whose bending moment diagram gives the desired concordant profile.

3.2 Temperature effects
Temperature variations apply to all structures, but the effect on prestressed concrete beams can be more pronounced than in other structures. The temperature profile through the depth of a beam (Emerson 1973) can be split into three components for the purposes of calculation (Hambly 1991).
The first causes a longitudinal expansion, which is normally released by the articulation of the structure; the second causes curvature, which leads to deflection in all beams and reactant moments in continuous beams; while the third causes a set of self-equilibrating stresses across the cross-section.
The reactant moments can be calculated and allowed for, but it is the self-equilibrating stresses that cause the main problems for prestressed concrete beams. These beams normally have high thermal mass, which means that daily temperature variations do not penetrate to the core of the structure. The result is a very non-uniform temperature distribution across the depth, which in turn leads to significant self-equilibrating stresses. If the core of the structure is warm while the surface is cool, such as at night, then quite large tensile stresses can be developed on the top and bottom surfaces. However, they only penetrate a very short distance into the concrete and the potential crack width is very small. It can be very expensive to overcome the tensile stress by changing the section or the prestress.
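The three-way split of the temperature profile described above can be sketched numerically. This is our own construction, assuming a rectangular section of unit width: the uniform part is the area-average, the linear part is fitted through the first moment of the profile, and the remainder is the self-equilibrating component, which by construction carries no net axial force or moment.

```python
import math

def integrate(vals, y):
    """Trapezoidal integration of sampled values vals over the grid y."""
    return sum((vals[i] + vals[i + 1]) * (y[i + 1] - y[i]) / 2
               for i in range(len(y) - 1))

def decompose(T, y):
    """Split a sampled profile T(y) into uniform + linear + residual parts."""
    depth = y[-1] - y[0]
    yc = (y[0] + y[-1]) / 2                      # mid-depth of the section
    lever = [yi - yc for yi in y]
    T_uniform = integrate(T, y) / depth          # area-average (expansion) part
    grad = (integrate([Ti * li for Ti, li in zip(T, lever)], y)
            / integrate([li * li for li in lever], y))  # curvature part
    T_linear = [grad * li for li in lever]
    residual = [Ti - T_uniform - Tl for Ti, Tl in zip(T, T_linear)]
    return T_uniform, T_linear, residual

n = 200
y = [i / n for i in range(n + 1)]                # depth coordinate, 0 to 1 m
T = [5.0 * math.exp(-5.0 * yi) for yi in y]      # assumed surface-heating profile
Tu, Tl, Tr = decompose(T, y)
force = integrate(Tr, y)                         # net axial effect of residual
moment = integrate([Ti * (yi - 0.5) for Ti, yi in zip(Tr, y)], y)
print(abs(force) < 1e-9, abs(moment) < 1e-9)     # residual is self-equilibrating
```

Multiplying the residual by the coefficient of thermal expansion and Young's modulus would give the self-equilibrating stress distribution the text refers to.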
Warehouse Logistics Foreign Literature Translation (Chinese and English, Original and Translation), 2023-2023
Original Text 1: The Current Trends in Warehouse Management and Logistics
Warehouse management is an essential component of any supply chain and plays a crucial role in the overall efficiency and effectiveness of logistics operations. With the rapid advancement of technology and changing customer demands, the field of warehouse management and logistics has seen several trends emerge in recent years.
One significant trend is the increasing adoption of automation and robotics in warehouse operations. Automated systems such as conveyor belts, robotic pickers, and driverless vehicles have revolutionized the way warehouses function. These technologies not only improve accuracy and speed but also reduce labor costs and increase safety.
Another trend is the implementation of real-time tracking and visibility systems. Through the use of RFID (radio-frequency identification) tags and GPS (global positioning system) technology, warehouse managers can monitor the movement of goods throughout the entire supply chain. This level of visibility enables better inventory management, reduces stockouts, and improves customer satisfaction.
Additionally, there is a growing focus on sustainability in warehouse management and logistics. Many companies are implementing environmentally friendly practices such as energy-efficient lighting, recycling programs, and alternative transportation methods. These initiatives not only contribute to reducing carbon emissions but also result in cost savings and improved brand image.
Furthermore, artificial intelligence (AI) and machine learning have become integral parts of warehouse management. AI-powered systems can analyze large volumes of data to optimize inventory levels, forecast demand accurately, and improve operational efficiency.
Machine learning algorithms can also identify patterns and anomalies, enabling proactive maintenance and minimizing downtime.
In conclusion, warehouse management and logistics are continuously evolving fields, driven by technological advancements and changing market demands. The trends discussed in this article highlight the importance of adopting innovative solutions to enhance efficiency, visibility, sustainability, and overall performance in warehouse operations.

Translation 1: The Current Trends in Warehouse Management and Logistics
Warehouse management is an essential component of any supply chain and plays a crucial role in the overall efficiency and effectiveness of logistics operations.
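As a minimal illustration of the demand-forecasting idea in the article above (our own sketch; real AI-driven inventory systems are far more elaborate), simple exponential smoothing produces a one-step-ahead demand forecast from a sales history:

```python
def exp_smooth_forecast(demand, alpha=0.3):
    """One-step-ahead forecast by simple exponential smoothing."""
    level = demand[0]
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level  # blend new data with history
    return level

# Assumed weekly demand history for one stock-keeping unit:
weekly_demand = [120, 132, 101, 134, 190, 130, 159, 114, 163, 152]
print(round(exp_smooth_forecast(weekly_demand), 1))  # -> 146.2
```

A reorder decision could then compare this forecast, scaled by lead time plus a safety stock, against the on-hand quantity; production systems layer much richer models on the same basic idea.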
Foreign Literature Translation for the Hydraulics Graduation Project (original text and translation)
Original Text: The Analysis of Cavitation Problems in the Axial Piston Pump
Shu Wang
Eaton Corporation, 14615 Lone Oak Road, Eden Prairie, MN 55344

This paper discusses and analyzes the control volume of a piston bore constrained by the valve plate in axial piston pumps. The vacuum within the piston bore caused by the rising volume needs to be compensated by the flow; otherwise, the low pressure may cause cavitation and aeration. In the research, the valve plate geometry can be optimized by some analytical limitations to prevent the piston pressure falling below the vapor pressure. The limitations provide a design guide for the timings and overlap areas between valve plate ports and barrel kidneys that takes cavitation and aeration into account. [DOI: 10.1115/1.4002058]
Keywords: cavitation, optimization, valve plate, pressure undershoots

1 Introduction
In hydrostatic machines, cavitation means that cavities or bubbles form in the hydraulic liquid at low pressure and collapse in the high pressure region, which causes noise, vibration, and lower efficiency. Cavitation is undesirable in the pump, since the shock waves formed by collapse may be strong enough to damage components. The hydraulic fluid will vaporize when its pressure becomes too low or when the temperature is too high. In practice, a number of approaches are mostly used to deal with the problems: (1) raise the liquid level in the tank, (2) pressurize the tank, (3) boost the inlet pressure of the pump, (4) lower the pumping fluid temperature, and (5) design the pump itself deliberately.
Many research efforts have been made on cavitation phenomena in hydraulic machine designs. The cavitation is classified into two types in piston pumps: the one related to the trapping phenomenon (which can be prevented by the proper design of the valve plate) and the one observed on the layers after the contraction or enlargement of flow passages (caused by rotating group designs) in Ref. (1).
The relationship between the cavitation and the measured cylinder pressure is addressed in this study. Edge and Darling (2) reported an experimental study of the cylinder pressure within an axial piston pump. The inclusion of fluid momentum effects and cavitation within the cylinder bore is predicted at both high speed and high load conditions. Another study in Ref. (3) provides an overview of hydraulic fluid impacting on the inlet condition and cavitation potential. It indicates that physical properties (such as vapor pressure, viscosity, density, and bulk modulus) are vital to properly evaluate the effects on lubrication and cavitation. A homogeneous cavitation model based on the thermodynamic properties of the liquid and steam is used to understand the basic physical phenomena of mass flow reduction and wave motion influences in hydraulic tools and injection systems (4). Dular et al. (5, 6) developed an expert system for monitoring and control of cavitation in hydraulic machines and investigated the possibility of cavitation erosion by using computational fluid dynamics (CFD) tools. The erosion effects of cavitation have been measured and validated by a simple single hydrofoil configuration in a cavitation tunnel. It is assumed that the severe erosion is often due to the repeated collapse of the traveling vortex generated by a leading edge cavity in Ref. (7). Then, the cavitation erosion intensity may be scaled by a simple set of flow parameters: the upstream velocity, the Strouhal number, the cavity length, and the pressure. A new cavitation erosion device, called the vortex cavitation generator, is introduced to comparatively study various erosion situations (8).
More previous research has been concentrated on the valve plate designs, piston, and pump pressure dynamics that can be associated with cavitation in axial piston pumps. The control volume approach and instantaneous flows (leakage) are profoundly studied in Ref. (9). Berta et al.
[10] used the finite volume concept to develop a mathematical model in which the effects of port plate relief grooves have been modeled and the gaseous cavitation is considered in a simplified manner. An improved model is proposed in Ref. [11] and validated by experimental results. The model may analyze the cylinder pressure and flow ripples influenced by the port plate and relief groove design. Manring compared the principal advantages of various valve plate slots (i.e., slots with constant, linearly varying, and quadratically varying areas) in axial piston pumps [12]. Four different numerical models focus on the characteristics of the hydraulic fluid, and cavitation is taken into account in different ways to assist the reduction in flow oscillations [13]. The experience of piston pump development shows that the optimization of cavitation/aeration shall include the following issues: occurring cavitation and air release, pump acoustics caused by the induced noises, maximal amplitudes of pressure fluctuations, rotational torque progression, etc. However, the aim of this study is to modify the valve plate design to prevent cavitation erosion caused by collapsing steam or air bubbles on the walls of axial pump components. In contrast to the literature, this research focuses on the development of an analytical relationship between the valve plate geometry and cavitation. The optimization method is applied to analyze the pressure undershoots compared with the saturated vapor pressure within the piston bore. The appropriate design of the instantaneous flow areas between the valve plate and barrel kidney can be decided consequently.

2 The Axial Piston Pump and Valve Plate

The typical schematic of the axial piston pump design is shown in Fig. 1. The shaft offset e is designed in this case to generate stroking containment moments for cost-reduction purposes. The distance between the pivot center of the slipper and the swash plate rotating center is shown as a.
The swash angle α is the variable that determines the amount of fluid pumped per shaft revolution. In Fig. 1, the nth piston-slipper assembly is located at the angle θ_n. The displacement of the nth piston-slipper assembly along the x-axis can be written as

x_n = R tan(α) sin(θ_n) + a sec(α) + e tan(α)   (1)

where R is the pitch radius of the rotating group. Then, the instantaneous velocity of the nth piston is

ẋ_n = R sec²(α) sin(θ_n) α̇ + R tan(α) cos(θ_n) ω + a sec(α) tan(α) α̇ + e sec²(α) α̇   (2)

where ω = dθ_n/dt is the shaft rotating speed of the pump.

The valve plate is the most significant device to constrain the flow in piston pumps. The geometry of the intake/discharge ports on the valve plate and their instantaneous relative positions with respect to the barrel kidneys are usually referred to as the valve plate timing. The ports of the valve plate overlap with each barrel kidney to construct a flow area or passage, which confines the fluid dynamics of the pump. In Fig. 2, the timing angles of the discharge and intake ports on the valve plate are listed as δ_T(i,d) and δ_B(i,d). The opening angle of the barrel kidney is referred to as φ. In some designs, there exists a simultaneous overlap between the barrel kidney and the intake/discharge slots at the locations of the top dead center (TDC) or bottom dead center (BDC) on the valve plate; this shared overlap area is referred to as "cross-porting" in pump design engineering. The cross-porting connects the discharge and intake ports, which usually lowers the volumetric efficiency. Compared with the cross-porting design, the trapped-volume design can achieve better efficiency [14]. However, cross-porting is commonly used in practice to benefit the noise issue and pump stability.

Fig. 1 The typical axial piston pump

3 The Control Volume of a Piston Bore

In the piston pump, the fluid within one piston is enclosed by the piston bore, cylinder barrel, slipper, valve plate, and swash plate, as shown in Fig. 3.
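As a numerical sanity check, the piston kinematics of Eqs. (1) and (2) can be evaluated directly. The pump parameters below are hypothetical, chosen only for illustration; at θ = 0 with a fixed swash angle (α̇ = 0), the velocity reduces to R tan(α) ω.

```python
import math

def piston_displacement(R, a, e, alpha, theta):
    """Eq. (1): displacement of the nth piston-slipper assembly along the x-axis."""
    return R * math.tan(alpha) * math.sin(theta) + a / math.cos(alpha) + e * math.tan(alpha)

def piston_velocity(R, a, e, alpha, theta, omega, alpha_dot=0.0):
    """Eq. (2): time derivative of Eq. (1); alpha_dot = 0 for a fixed swash angle."""
    sec = 1.0 / math.cos(alpha)
    return (R * sec**2 * math.sin(theta) * alpha_dot
            + R * math.tan(alpha) * math.cos(theta) * omega
            + a * sec * math.tan(alpha) * alpha_dot
            + e * sec**2 * alpha_dot)

# Hypothetical pump: 40 mm pitch radius, 15 deg swash angle, 1500 rpm shaft speed
R, a, e = 0.040, 0.005, 0.003                 # m
alpha = math.radians(15.0)
omega = 1500.0 * 2.0 * math.pi / 60.0         # rad/s
v_max = piston_velocity(R, a, e, alpha, theta=0.0, omega=omega)
# With alpha_dot = 0, v_max equals R*tan(alpha)*omega, the peak piston speed
```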
There exist some types of slip flow by virtue of relative motions and clearances between those components.

Fig. 2 Timing of the valve plate

Within the control volume of each piston bore, the instantaneous mass is calculated as

M_n = ρ V_n   (3)

where ρ and V_n are the instantaneous density and volume, such that the mass time rate of change can be given as

dM_n/dt = ρ dV_n/dt + V_n dρ/dt   (4)

where dV_n is the variation of the volume.

Fig. 3 The control volume of the piston bore

Based on the conservation equation, the mass rate in the control volume is

dM_n/dt = ρ q_n   (5)

where q_n is the instantaneous flow rate into and out of one piston. From the definition of the bulk modulus,

dρ/dt = (ρ/β) dP_n/dt   (6)

where P_n is the instantaneous pressure within the piston bore. Substituting Eqs. (5) and (6) into Eq. (4) yields

dP_n/dθ_n = (β/V_n) (q_n/ω − dV_n/dθ_n)   (7)

where ω = dθ_n/dt is the shaft speed of the pump. The instantaneous volume of one piston bore can be calculated by using Eq. (1) as

V_n = V_0 + A_P [R tan(α) sin(θ_n) + a sec(α) + e tan(α)]   (8)

where A_P is the piston sectional area and V_0 is the volume of each piston at zero displacement along the x-axis (when θ_n = 0, π). The volume rate of change can be calculated at a fixed swash angle, i.e., α̇ = 0, such that

dV_n/dθ_n = A_P R tan(α) cos(θ_n)   (9)

in which it is noted that the piston bore volume increases or decreases with respect to the rotating angle θ_n. Substituting Eqs. (8) and (9) into Eq. (7) yields

dP_n/dθ_n = β [q_n − ω A_P R tan(α) cos(θ_n)] / {ω [V_0 + A_P (R tan(α) sin(θ_n) + a sec(α) + e tan(α))]}   (10)

4 Optimal Designs

To find the extrema of pressure overshoots and undershoots in the control volume of piston bores, the optimization method can be used on Eq. (10). In a nonlinear function, reaching global maxima and minima is usually the goal of optimization. If the function is continuous on a closed interval, global maxima and minima exist.
Furthermore, the global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain or must lie on the boundary of the domain. So, the method of finding a global maximum (or minimum) is to detect all the local maxima (or minima) in the interior, evaluate the maxima (or minima) points on the boundary, and select the biggest (or smallest) one. A local maximum or minimum can be searched by using the first derivative test: the potential extrema of a function f(·), with derivative f′(·), solve the equation f′ = 0 at the critical points [15]. The pressure of the control volume in the piston bore reaches a minimum or maximum value when dP/dt = 0. Thus, letting the left side of Eq. (10) be equal to zero yields

q_n − A_P R ω tan(α) cos(θ_n) = 0   (11)

In a piston bore, the flow q_n offsets the volume variation and thereby decreases the overshoots and undershoots of the piston pressure. In this study, the most interesting are the undershoots of the pressure, which may fall below the vapor pressure or gas desorption pressure and cause cavitation. The term A_P R ω tan(α) cos(θ_n) in Eq. (11) has a positive value in the range of the intake port (−π/2 ≤ θ ≤ π/2), shown in Fig. 2, which means that the piston volume rises. Therefore, the piston needs sufficient flow in; otherwise, the pressure may drop. In the piston, the flow q_n may pass through in a few scenarios shown in Fig. 3: (I) the clearance between the valve plate and cylinder barrel, (II) the clearance between the cylinder bore and piston, (III) the clearance between the piston and slipper, (IV) the clearance between the slipper and swash plate, and (V) the overlap area between the barrel kidney and valve plate ports.
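The first-derivative test applied to Eq. (10) can be sketched numerically: setting the numerator of the pressure gradient to zero (Eq. (11)) gives the barrel angles at which the bore pressure reaches an extremum. All numbers below are illustrative assumptions, with a constant inflow q_n for simplicity.

```python
import math

def pressure_gradient_numerator(theta, q_n, A_p, R, omega, alpha):
    """Left side of Eq. (11); its zeros mark pressure extrema in the bore."""
    return q_n - A_p * R * omega * math.tan(alpha) * math.cos(theta)

def critical_angles(q_n, A_p, R, omega, alpha):
    """Angles in the intake range (-pi/2, pi/2) where dP/dtheta = 0."""
    c = q_n / (A_p * R * omega * math.tan(alpha))
    if not 0.0 <= c <= 1.0:
        return []          # no interior extremum; evaluate the interval boundary
    t = math.acos(c)
    return [-t, t]

# Hypothetical values: the inflow q_n covers half of the peak volume-rise rate
A_p, R, omega, alpha = 4.9e-4, 0.040, 157.0, math.radians(15.0)
q_n = 0.5 * A_p * R * omega * math.tan(alpha)
angles = critical_angles(q_n, A_p, R, omega, alpha)   # -> [-pi/3, pi/3]
```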
As pumps operate stably, the flows in scenarios I–IV mostly have low Reynolds numbers and can be regarded as laminar flows, which can be calculated as [16]

q_Ln = Σ_{k=I..IV} [w_k h_k³ / (12 μ L_k)] Δp   (12)

where h_k is the height of the clearance, L_k is the passage length, w_k is the width of the clearance (note that in scenario II, w_k = 2πr, in which r is the piston radius), and Δp is the pressure drop defined in the intake ports as

Δp = p_c − p_n   (13)

where p_c is the case pressure of the pump. The fluid films through the above clearances were extensively investigated in previous research. The effects of the main related dimensions of the pump and the operating conditions on the film are numerically clarified in Refs. [17,18]. The dynamic behavior of slipper pads and the clearance between the slipper and swash plate can be referred to in Refs. [19,20]. Manring et al. [21,22] investigated the flow rate and load carrying capacity of the slipper bearing by theoretical and experimental methods under different deformation conditions. A simulation tool called CASPAR is used by Huang and Ivantysynova [23] to estimate the nonisothermal gap flow between the cylinder barrel and the valve plate. The simulation program also considers the surface deformations to predict gap heights, frictions, etc., between the piston and barrel and between the swash plate and slipper. All these clearance geometries in Eq. (12) are nonlinear and operation dependent, which is a complicated issue. In this study, the experimental measurements of the gap flows are preferred.
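The laminar leakage sum of Eq. (12) can be sketched as a short function. The clearance dimensions below are hypothetical placeholders (real gap heights are operation dependent, as the text notes); scenario II uses the bore circumference 2πr as its width.

```python
import math

def laminar_leakage(clearances, mu, delta_p):
    """Eq. (12): sum of w*h**3 / (12*mu*L) over leakage paths I-IV, times delta_p.
    clearances: list of (w, h, L) tuples in SI units; mu in Pa*s; delta_p in Pa."""
    return sum(w * h**3 / (12.0 * mu * L) for (w, h, L) in clearances) * delta_p

# Hypothetical clearance geometry (m); r is the piston radius
r = 0.010
paths = [
    (0.020, 10e-6, 0.015),            # I:   valve plate / cylinder barrel
    (2.0 * math.pi * r, 8e-6, 0.030), # II:  cylinder bore / piston (w = 2*pi*r)
    (0.012, 12e-6, 0.010),            # III: piston / slipper
    (0.015, 15e-6, 0.012),            # IV:  slipper / swash plate
]
q_L = laminar_leakage(paths, mu=0.03, delta_p=2.0e5)   # m^3/s, case-to-bore flow
```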
If that is not possible, the worst cases of the geometries or tolerances with empirical adjustments may be used to consider the cavitation issue, i.e., minimum gap flows. For scenario V, the flow is mostly at high velocity and can be described by using the turbulent orifice equation as

q_Tn = c_d A_i(θ) √[2(p_i − p_n)/ρ] + c_d A_d(θ) √[2(p_d − p_n)/ρ]   (14)

where p_i and p_d are the intake and discharge pressures of the pump and A_i(θ) and A_d(θ) are the instantaneous overlap areas between the barrel kidneys and the inlet/discharge ports of the valve plate, respectively. The areas are nonlinear functions of the rotating angle, which are defined by the geometries of the barrel kidney, valve plate ports, silencing grooves, decompression holes, and so forth. Combining Eqs. (11)–(14), the area can be obtained as

A(θ) = {A_P R ω tan(α) cos(θ) − Σ_{k=I..IV} [w_k h_k³ / (12 μ L_k)] (p_c − p_n)} / {c_d √[2(p_i − p_n)/ρ]}   (15)

where A(θ) is the total overlap area, A(θ) = A_i(θ) + λ A_d(θ), and λ = √[(p_d − p_n)/(p_i − p_n)]. In the piston bore, the pressure varies from low to high while passing over the intake and discharge ports of the valve plate. It is possible that the instantaneous pressure reaches extremely low values during the intake region (−π/2 ≤ θ ≤ π/2 shown in Fig. 2) that may lie below the vapor pressure p_vp, i.e., p_n ≤ p_vp; then cavitation can happen. To prevent the phenomenon, the total overlap area A(θ) might be designed to satisfy

A(θ) ≥ A_0(θ) = {A_P R ω tan(α) cos(θ) − Σ_{k=I..IV} [w_k h_k³ / (12 μ L_k)] (p_c − p_vp)} / {c_d √[2(p_i − p_vp)/ρ]}   (16)

where A_0(θ) is the minimum area, A_0(θ) = A_i(θ) + λ_0 A_d(θ), and λ_0 = √[(p_d − p_vp)/(p_i − p_vp)] is a constant. The vapor pressure is the pressure at which a liquid is in equilibrium with its gaseous form. The vapor pressure of any substance increases nonlinearly with temperature according to the Clausius–Clapeyron relation. With an incremental increase in temperature, the vapor pressure becomes sufficient to overcome particle attraction and make the liquid form bubbles inside the substance.
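A sketch of the orifice relation (Eq. (14)) and the minimum-overlap-area bound (Eq. (16)). The operating point is a hypothetical assumption; the flows are defined positive into the bore, and the sign convention in `orifice_flow` handles reverse flow.

```python
import math

def orifice_flow(cd, area, dp, rho):
    """One term of the turbulent orifice equation (Eq. (14)); signed for reverse flow."""
    return cd * area * math.copysign(math.sqrt(2.0 * abs(dp) / rho), dp)

def min_overlap_area(theta, A_p, R, omega, alpha, q_L, cd, p_i, p_vp, rho):
    """Eq. (16): smallest overlap area keeping the bore pressure at or above p_vp.
    q_L is the laminar slip flow evaluated at delta_p = p_c - p_vp."""
    demand = A_p * R * omega * math.tan(alpha) * math.cos(theta) - q_L
    return max(demand, 0.0) / (cd * math.sqrt(2.0 * (p_i - p_vp) / rho))

# Hypothetical operating point (SI units): intake at 1 bar, vapor pressure 4 kPa
A0 = min_overlap_area(theta=0.0, A_p=4.9e-4, R=0.040, omega=157.0,
                      alpha=math.radians(15.0), q_L=1.0e-5,
                      cd=0.7, p_i=1.0e5, p_vp=4.0e3, rho=870.0)
```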
For pure components, the vapor pressure can be determined from the temperature by using the Antoine equation, p_vp = 10^[A − B/(C + T)], where T is the temperature and A, B, and C are constants [24].

As a piston traverses the intake port, the pressure varies according to the cosine function in Eq. (10). It is noted that there are some typical positions of the piston with respect to the intake port: the beginning and ending of the overlap, i.e., TDC and BDC (θ = −π/2, π/2), and the zero displacement position (θ = 0). The two situations are discussed as follows:

(1) When θ = −π/2, π/2, it is not always necessary to maintain the overlap area A_0(θ), because slip flows may fill up the vacuum. From Eq. (16), letting A_0(θ) = 0, the timing angles at the TDC and BDC may be designed as

δ ≤ φ/2 + π/2 − cos⁻¹{Σ_{k=I..IV} [w_k h_k³ (p_c − p_vp) / (12 μ L_k)] / [A_P R ω tan(α)]}   (17)

in which φ is the open angle of the barrel kidney. There is no cross-porting flow with this timing in the intake port.

(2) When θ = 0, the function cos(θ) has its maximum value, which provides another limitation on the overlap area to prevent low pressure undershoots, such that

A_0(0) ≥ {A_P R ω tan(α) − Σ_{k=I..IV} [w_k h_k³ / (12 μ L_k)] (p_c − p_vp)} / {c_d √[2(p_i − p_vp)/ρ]}   (18)

where A_0(0) is the minimum overlap area, A_0(0) = A_i(0).

To prevent the low piston pressure from building bubbles, the vapor pressure is considered as the lower limit for the pressure settings in Eq. (16). The overall overlap areas can then be derived to give a design limitation. The limitation is determined by the leakage conditions, vapor pressure, rotating speed, etc. It indicates that the higher the pumping speed, the more severe the cavitation that may happen, and then the design needs more overlap area to let flow into the piston bore. On the other side, a low vapor pressure of the hydraulic fluid is preferred to reduce the opportunities to reach the cavitation conditions. As a result, only the vapor pressure of the pure fluid is considered in Eqs. (16)–(18).
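Two of the relations above can be sketched in code: the Antoine vapor-pressure form, and a timing-angle limit in the spirit of Eq. (17). The water Antoine constants (mmHg, degrees Celsius) are standard published values used purely to check the form; the timing function is a simplified reading of the zero-overlap condition at TDC/BDC, not the paper's validated design rule.

```python
import math

def antoine_vapor_pressure(A, B, C, T):
    """Antoine equation: p_vp = 10**(A - B/(C + T)). Units follow the constants."""
    return 10.0 ** (A - B / (C + T))

def max_timing_angle(phi, q_L_vp, A_p, R, omega, alpha):
    """Eq. (17) spirit: largest timing angle delta for which slip flow alone
    (q_L_vp, evaluated at p_c - p_vp) fills the rising volume near TDC/BDC.
    With no slip flow this collapses to delta <= phi/2 (overlap starts at TDC)."""
    c = q_L_vp / (A_p * R * omega * math.tan(alpha))
    c = min(max(c, 0.0), 1.0)
    return phi / 2.0 + math.pi / 2.0 - math.acos(c)

# Standard Antoine constants for water (mmHg, deg C): boiling at 100 C -> ~760 mmHg
p100 = antoine_vapor_pressure(8.07131, 1730.63, 233.426, 100.0)
```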
In fact, air release starts at a higher pressure than the pure cavitation process, mainly in turbulent shear layers, which occur in scenario V. Therefore, the vapor pressure used to design the overlap area by Eq. (16) might be adjusted if there exists substantial trapped and dissolved air in the fluid.

The laminar leakages through the aforementioned clearances are a tradeoff in the design. It is demonstrated that more leakage from the pump case to the piston may relieve cavitation problems. However, more leakage may degrade the pump efficiency at the discharge ports. In some design cases, the maximum timing angles can be determined by Eq. (17) so as to avoid both simultaneous overlapping and very low pressure at the TDC and BDC. While the piston rotates to the zero displacement position, the minimum overlap area can be determined by Eq. (18), which may keep the piston from large pressure undershoots during flow intake.

6 Conclusions

The valve plate design is a critical issue in addressing the cavitation or aeration phenomena in the piston pump. This study uses the control volume method to analyze the flow, pressure, and leakages within one piston bore related to the valve plate timings. If the overlap area developed by the barrel kidneys and valve plate ports is not properly designed, no sufficient flow replenishes the volume rise created by the rotating movement. Therefore, the piston pressure may drop below the saturated vapor pressure of the liquid, and air ingresses to form vapor bubbles. To control the damaging cavitation, the optimization approach is used to detect the lowest pressure constrained by the valve plate timings. The analytical limitation on the overlap area needs to be satisfied to keep the pressure from large undershoots so that the system can be largely enhanced on cavitation/aeration issues. In this study, the dynamics of the piston control volume is developed by using several assumptions such as constant discharge coefficients and laminar leakages.
The discharge coefficient is practically nonlinear, based on the geometry, flow number, etc. Leakage clearances of the control volume may not keep constant height and width in practice due to vibrations and dynamic ripples. All these issues are complicated and very empirical and need further consideration in the future. The results presented in this paper can be made more accurate in estimating cavitation with these extended studies.

Nomenclature

A(θ), A_0(θ) = total overlap area between valve plate ports and barrel kidneys (mm²)
A_P = piston section area (mm²)
A, B, C = constants
a = offset between the piston-slipper joint and the surface of the swash plate (mm)
c_d = orifice discharge coefficient
e = offset between the swash plate pivot and the shaft centerline of the pump (mm)
h_k = height of the clearance (mm)
L_k = passage length of the clearance (mm)
M = mass of the fluid within a single piston (kg)
N = number of pistons
n = piston and slipper counter
p, Δp = fluid pressure and pressure drop (bar)
p_c = case pressure of the pump (bar)
p_d = pump discharge pressure (bar)
p_i = pump intake pressure (bar)
p_n = fluid pressure within the nth piston bore (bar)
p_vp = vapor pressure of the hydraulic fluid (bar)
q_n, q_Ln, q_Tn = instantaneous flow rates of each piston (l/min)
R = piston pitch radius (mm)
r = piston radius (mm)
t = time (s)
V = volume (mm³)
w_k = width of the clearance (mm)
x, ẋ = piston displacement and velocity along the shaft axis (m, m/s)
x–y–z = Cartesian coordinates with the origin on the shaft centerline
x′–y′–z′ = Cartesian coordinates with the origin on the swash plate pivot
α, α̇ = swash plate angle and velocity (rad, rad/s)
β = fluid bulk modulus (bar)
δ_B, δ_T = timing angles of the valve plate at the BDC and TDC (rad)
φ = open angle of the barrel kidney (rad)
ρ = fluid density (kg/m³)
θ, ω = angular position and velocity of the rotating kit (rad, rad/s)
μ = absolute viscosity (cP)
λ, λ_0 = coefficients related to the pressure drop

Chinese translation of the foreign text: The Analysis of Cavitation Problems in the Axial Piston Pump — this paper discusses and analyzes the control volume of a piston bore constrained by the valve plate design in axial piston pumps.
Foreign literature translation — Power Electronic Technology (English original + Chinese translation)
1 Power Electronic Concepts

Power electronics is a rapidly developing technology. Components are getting higher current and voltage ratings, the power losses decrease, and the devices become more reliable. The devices are also very easy to control, with mega-scale power amplification. Prices per kVA are still going down, and power converters are becoming attractive as a means to improve the performance of a wind turbine. This chapter will discuss the standard power converter topologies, from the simplest converters for starting up the turbine to advanced power converter topologies where the whole power flows through the converter. Further, different park solutions using power electronics are also discussed.

1.1 Criteria for concept evaluation

The most common topologies are selected and discussed with respect to advantages and drawbacks. Very advanced power converters, where many extra devices are necessary in order to get proper operation, are omitted.

1.2 Power converters

Many different power converters can be used in wind turbine applications. In the case of using an induction generator, the power converter has to convert from a fixed voltage and frequency to a variable voltage and frequency. This may be implemented in many different ways, as will be seen in the next section. Other generator types can demand other complex protection. However, the most used topology so far is the soft starter, which is used during start-up in order to limit the in-rush current and thereby reduce the disturbances to the grid.

1.2.1 Soft starter

The soft starter is a power converter which has been introduced to fixed-speed wind turbines to reduce the transient current during connection or disconnection of the generator to the grid. When the generator speed exceeds the synchronous speed, the soft starter is connected. Using firing angle control of the thyristors in the soft starter, the generator is smoothly connected to the grid over a predefined number of grid periods.
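The firing-angle limits of such a soft starter, described in the following section (0-90 degrees for a resistive load, 90-180 degrees for a purely inductive one), can be sketched with a simple linear interpolation over the load's power-factor angle. The true control characteristic is non-linear, so this is only an illustrative approximation.

```python
def firing_angle_limits(load_angle_deg):
    """Approximate (full-on, full-off) thyristor firing angles for a soft starter,
    interpolated linearly between a resistive load (load angle 0 deg) and a
    purely inductive load (90 deg). Illustration only; the real curve is non-linear."""
    if not 0.0 <= load_angle_deg <= 90.0:
        raise ValueError("load angle must be between 0 and 90 degrees")
    full_on = load_angle_deg           # resistive -> 0 deg,  inductive -> 90 deg
    full_off = 90.0 + load_angle_deg   # resistive -> 90 deg, inductive -> 180 deg
    return full_on, full_off

print(firing_angle_limits(0.0))    # (0.0, 90.0)   resistive load
print(firing_angle_limits(90.0))   # (90.0, 180.0) purely inductive load
```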
An example of the connection diagram for the soft starter with a generator is presented in Figure 1.

Figure 1. Connection diagram of the soft starter with generators.

The commutating devices are two thyristors for each phase, connected in anti-parallel. The relationship between the firing angle α and the resulting amplification of the soft starter is non-linear and depends additionally on the power factor of the connected element. In the case of a resistive load, α may vary between 0 (full on) and 90 (full off) degrees; in the case of a purely inductive load, between 90 (full on) and 180 (full off) degrees. For any power factor between 0 and 90 degrees, α will be somewhere between the limits sketched in Figure 2.

Figure 2. Control characteristic for a fully controlled soft starter.

When the generator is completely connected to the grid, a contactor (Kbyp) bypasses the soft starter in order to reduce the losses during normal operation. The soft starter is very cheap and is a standard converter in many wind turbines.

1.2.2 Capacitor bank

For power factor compensation of the reactive power in the generator, AC capacitor banks are used, as shown in Figure 3. The generators are normally compensated over the whole power range. The switching of capacitors is done as a function of the average value of the measured reactive power during a certain period.

Figure 3. Capacitor bank configuration for power factor compensation in a wind turbine.

The capacitor banks are usually mounted at the bottom of the tower or in the nacelle. In order to reduce the current at connection/disconnection of the capacitors, a coil (L) can be connected in series. The capacitors may be heavily loaded and damaged in the case of over-voltages on the grid, and thereby they may increase the maintenance cost.

1.2.3 Diode rectifier

The diode rectifier is the most commonly used topology in power electronic applications. For a three-phase system it consists of six diodes. It is shown in Figure 4.
Figure 4. Diode rectifier for three-phase ac/dc conversion.

The diode rectifier can only be used in one quadrant; it is simple and it is not possible to control it. It could be used in some applications with a dc-bus.

1.2.4 The back-to-back PWM-VSI

The back-to-back PWM-VSI is a bi-directional power converter consisting of two conventional PWM-VSIs. The topology is shown in Figure 5. To achieve full control of the grid current, the DC-link voltage must be boosted to a level higher than the amplitude of the grid line-line voltage. The power flow of the grid-side converter is controlled in order to keep the DC-link voltage constant, while the control of the generator side is set to suit the magnetization demand and the reference speed. The control of the back-to-back PWM-VSI in the wind turbine application is described in several papers (Bogalecka, 1993); (Knowles-Spittle et al., 1998); (Pena et al., 1996); (Yifan & Longya, 1992); (Yifan & Longya, 1995).

Figure 5. The back-to-back PWM-VSI converter topology.

1.2.4.1 Advantages related to the use of the back-to-back PWM-VSI

The PWM-VSI is the most frequently used three-phase frequency converter. As a consequence, the knowledge available in the field is extensive and well established. The literature and the available documentation exceed those for any of the other converters considered in this survey. Furthermore, many manufacturers produce components especially designed for use in this type of converter (e.g., a transistor pack comprising six bridge-coupled transistors and anti-paralleled diodes). Due to this, the component costs can be low compared to converters requiring components designed for niche production. A technical advantage of the PWM-VSI is the capacitor decoupling between the grid inverter and the generator inverter.
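The DC-link boost requirement stated above — the DC-link voltage must exceed the amplitude of the grid line-line voltage, i.e., √2 times its rms value — is a quick computation. The 5% margin below is an assumed design headroom, not a value from the text.

```python
import math

def min_dc_link_voltage(v_ll_rms, margin=1.05):
    """Minimum DC-link voltage for full grid-current control in a back-to-back
    PWM-VSI: above sqrt(2) * V_LL(rms), plus an assumed headroom factor."""
    return math.sqrt(2.0) * v_ll_rms * margin

v_dc = min_dc_link_voltage(400.0)   # a 400 V grid needs roughly 594 V on the DC link
```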
Besides affording some protection, this decoupling offers separate control of the two inverters, allowing compensation of asymmetry both on the generator side and on the grid side, independently. The inclusion of a boost inductance in the DC-link circuit increases the component count, but a positive effect is that the boost inductance reduces the demands on the performance of the grid-side harmonic filter, and offers some protection of the converter against abnormal conditions on the grid.

1.2.4.2 Disadvantages of applying the back-to-back PWM-VSI

This section highlights some of the reported disadvantages of the back-to-back PWM-VSI which justify the search for a more suitable alternative converter. In several papers concerning adjustable speed drives, the presence of the DC-link capacitor is mentioned as a drawback, since it is heavy and bulky, it increases the costs and, maybe of most importance, it reduces the overall lifetime of the system (Wen-Song & Ying-Yu, 1998); (Kim & Sul, 1993); (Siyoung Kim et al., 1998). Another important drawback of the back-to-back PWM-VSI is the switching losses. Every commutation in both the grid inverter and the generator inverter between the upper and lower DC-link branch is associated with a hard switching and a natural commutation. Since the back-to-back PWM-VSI consists of two inverters, the switching losses might be even more pronounced. The high switching speed to the grid may also require extra EMI filters. To prevent high stresses on the generator insulation and to avoid bearing current problems (Salo & Tuusa, 1999), the voltage gradient may have to be limited by applying an output filter.

1.2.5 Tandem converter

The tandem converter is quite a new topology, and only a few papers have treated it up till now ((Marques & Verdelho, 1998); (Trzynadlowski et al., 1998a); (Trzynadlowski et al., 1998b)).
However, the idea behind the converter is similar to those presented in (Zhang et al., 1998b), where the PWM-VSI is used as an active harmonic filter to compensate harmonic distortion. The topology of the tandem converter is shown in Figure 6.

Figure 6. The tandem converter topology used in an induction generator wind turbine system.

The tandem converter consists of a current source converter (CSC), in the following designated the primary converter, and a back-to-back PWM-VSI, designated the secondary converter. Since the tandem converter consists of four controllable inverters, several degrees of freedom exist which enable sinusoidal input and sinusoidal output currents. However, in this context it is believed that the most advantageous control of the inverters is to control the primary converter to operate in square-wave current mode. Here, the switches in the CSC are turned on and off only once per fundamental period of the input and output current, respectively. In square-wave current mode, the switches in the primary converter may be either GTOs or a series connection of an IGBT and a diode. Unlike the primary converter, the secondary converter has to operate at a high switching frequency, but the switched current is only a small fraction of the total load current. Figure 7 illustrates the current waveforms for the primary converter, ip, the secondary converter, is, and the total load current, il. In order to achieve full control of the current to/from the back-to-back PWM-VSI, the DC-link voltage is boosted to a level above the grid voltage. As mentioned, the control of the tandem converter is treated in only a few papers. However, the independent control of the CSC and the back-to-back PWM-VSI are both well established (Mutschler & Meinhardt, 1998); (Nikolic & Jeftenic, 1998); (Salo & Tuusa, 1997); (Salo & Tuusa, 1999).

Figure 7.
Current waveforms for the primary converter, ip, the secondary converter, is, and the total load current, il.

1.2.5.1 Advantages in the use of the tandem converter

The investigation of new converter topologies is commonly justified by the search for higher converter efficiency. Advantages of the tandem converter are the low switching frequency of the primary converter and the low level of the switched current in the secondary converter. It is stated that the switching losses of a tandem inverter may be reduced by 70% (Trzynadlowski et al., 1998a) in comparison with those of an equivalent VSI, and even though the conduction losses are higher for the tandem converter, the overall converter efficiency may be increased. Compared to the CSI, the voltage across the terminals of the tandem converter contains no voltage spikes, since the DC-link capacitor of the secondary converter is always connected between each pair of input and output lines (Trzynadlowski et al., 1998b). Concerning the dynamic properties, (Trzynadlowski et al., 1998a) states that the overall performance of the tandem converter is superior to both the CSC and the VSI. This is because current magnitude commands are handled by the voltage source converter, while phase-shift current commands are handled by the current source converter (Zhang et al., 1998b). Besides the main function, which is to compensate the current distortion introduced by the primary converter, the secondary converter may also act like an active resistor, providing damping of the primary inverter in light-load conditions (Zhang et al., 1998b).

1.2.5.2 Disadvantages of using the tandem converter

An inherent obstacle to applying the tandem converter is the high number of components and sensors required. This increases the costs and complexity of both hardware and software.
The complexity is justified by the redundancy of the system (Trzynadlowski et al., 1998a); however, the system is only truly redundant if a reduction in power capability and performance is acceptable. Since the voltage across the generator terminals is set by the secondary inverter, the voltage stresses at the converter are high. Therefore the demands on the output filter are comparable to those when applying the back-to-back PWM-VSI. In the system shown in Figure 6, a problem for the tandem converter in comparison with the back-to-back PWM-VSI is the reduced generator voltage. By applying the CSI as the primary converter, only 0.866 of the grid voltage can be utilized. This means that the generator currents (and also the currents through the switches) of the tandem converter must be higher in order to achieve the same power.

1.2.6 Matrix converter

Ideally, the matrix converter should be an all-silicon solution with no passive components in the power circuit. The ideal conventional matrix converter topology is shown in Figure 8.

Figure 8. The conventional matrix converter topology.

The basic idea of the matrix converter is that a desired input current (to/from the supply), a desired output voltage, and a desired output frequency may be obtained by properly connecting the output terminals of the converter to its input terminals. In order to protect the converter, the following two control rules must be complied with: two (or three) switches in an output leg are never allowed to be on at the same time, and all three output phases must be connected to an input phase at any instant of time. The actual combination of the switches depends on the modulation strategy.

1.2.6.1 Advantages of using the matrix converter

This section summarises some of the advantages of using the matrix converter in the control of an induction wind turbine generator.
For a low output frequency of the converter, the thermal stresses of the semiconductors in a conventional inverter are higher than those in a matrix converter. This arises from the fact that the semiconductors in a matrix converter are equally stressed, at least during every period of the grid voltage, while the period for the conventional inverter equals that of the output frequency. This reduces the thermal design problems for the matrix converter. Although the matrix converter includes six additional power switches compared to the back-to-back PWM-VSI, the absence of the DC-link capacitor may increase the efficiency and the lifetime of the converter (Schuster, 1998). Depending on the realization of the bi-directional switches, the switching losses of the matrix inverter may be less than those of the PWM-VSI, because half of the switchings become natural commutations (soft switchings) (Wheeler & Grant, 1993).

1.2.6.2 Disadvantages and problems of the matrix converter

A disadvantage of the matrix converter is the intrinsic limitation of the output voltage. Without entering the over-modulation range, the maximum output voltage of the matrix converter is 0.866 times the input voltage. To achieve the same output power as the back-to-back PWM-VSI, the output current of the matrix converter has to be 1.15 times higher, giving rise to higher conduction losses in the converter (Wheeler & Grant, 1993). In many of the papers concerning the matrix converter, the unavailability of a true bi-directional switch is mentioned as one of the major obstacles to the propagation of the matrix converter. In the literature, three proposals for realizing a bi-directional switch exist.
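The 0.866 voltage limit and the 1.15 current factor quoted above are two sides of the same power balance (0.866 = √3/2, and 1/0.866 ≈ 1.15). The short check below also encodes the two switch-matrix control rules stated earlier, using a hypothetical switch-state representation of my own (one input-phase index per output phase).

```python
import math

# Equal output power at 0.866 of the input voltage needs 1/0.866 ~ 1.15x the current
voltage_ratio = math.sqrt(3.0) / 2.0   # 0.866...
current_ratio = 1.0 / voltage_ratio    # 1.1547...

def valid_switch_state(state):
    """Check the two matrix-converter control rules for one instant.
    state[k] is the input phase (0, 1, or 2) connected to output phase k, or None.
    Rule 1 (no two switches on in one output leg) holds by construction of this
    representation; Rule 2 requires every output phase to be connected."""
    return all(s in (0, 1, 2) for s in state)

assert valid_switch_state([0, 1, 1])          # outputs may share an input phase
assert not valid_switch_state([0, None, 2])   # an open output leg is forbidden
```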
The diode-embedded switch (Neft & Schauder, 1988), which acts like a true bi-directional switch, the common emitter switch, and the common collector switch (Beasant et al., 1989). Since real switches do not have infinitesimal switching times (which is not desirable either), the commutation between two input phases constitutes a contradiction between the two basic control rules of the matrix converter. In the literature at least six different commutation strategies are reported: (Beasant et al., 1990); (Burany, 1989); (Jung & Gyu, 1991); (Hey et al., 1995); (Kwon et al., 1998); (Neft & Schauder, 1988). The simplest of the commutation strategies are those reported in (Beasant et al., 1990) and (Neft & Schauder, 1988), but neither of these strategies complies with the basic control rules.

Translation

1 Power Electronic Concepts

Power electronics is a rapidly developing technology. Power electronic components have high rated currents and voltages, their power losses decrease, and the devices become more reliable and durable. These devices can also be used to control components whose power is many times greater than their own.
Translation of Foreign Literature

A Closer Look At Discretionary Writedowns Of Impaired Assets

The increasing number of asset writedowns and write-offs during the last decade has captured the attention of both the financial press and the standard-setting community. The FASB is currently considering regulation of writedowns of impaired assets and identifiable intangibles and released a Discussion Memorandum on the issues surrounding the problem in December 1990. Presently, there is little empirical research which applies directly to the writedowns which might be covered by the resulting standards (discretionary writedowns). Before any regulation by the FASB is attempted in this area, it would be beneficial for both standard setters and the business community to learn about those companies who have already chosen to record discretionary writedowns and the situations which may have prompted them to do so. Previous research in this area is not conclusive, since some is based on anecdotal evidence and most of the published studies of writedowns include not only discretionary writedowns of impaired assets, but also write-offs and writedowns which may qualify as discontinued operations under APB 30. The purpose of this paper is to fill this void in the literature by providing information focusing only on discretionary asset writedowns. In particular, the following areas will be explored:

1. What types of firms are recording discretionary asset writedowns? How prevalent are these events?
2. How are discretionary asset writedowns disclosed?
3. When are discretionary asset writedowns recorded? Are there any timing patterns in these events?
4. What are the consequences of these writedowns? How do they affect stock prices and the financial health of the firms?

THE CURRENT ENVIRONMENT

An asset is said to be "impaired" when its book value exceeds some measure of its "fair" value.
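The impairment test just defined reduces to a comparison of carrying value against a market-based measure. The sketch below is an illustration only; the function names and numbers are hypothetical and not from the paper:

```python
def writedown_amount(book_value: float, fair_value: float) -> float:
    """Loss recognized when an asset's book value exceeds its fair value."""
    return max(0.0, book_value - fair_value)

def lower_of_cost_or_market(cost: float, market: float) -> float:
    """Carrying value under GAAP's lower-of-cost-or-market rule for
    certain current assets such as inventories."""
    return min(cost, market)

# An asset carried at 500 with a fair value of 380 is impaired;
# recording the writedown debits an income statement account for 120.
print(writedown_amount(500.0, 380.0))          # 120.0
print(lower_of_cost_or_market(500.0, 380.0))   # 380.0
```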
When a firm recognizes this impairment and subsequently records the effect by decreasing the book value of the asset and debiting an income statement account, the firm has recorded a "writedown." GAAP clearly allows these writedowns in several situations. First, certain current assets, such as marketable securities and inventories, are examined periodically and adjusted to the lower of cost or market. Similarly, long-term equity investments are adjusted periodically to the lower of cost or market, although the income statement is not affected by the writedown. Finally, any long-term assets for which disposal is contemplated (including assets being sold as part of a discontinued operation) are adjusted to their net realizable value (exit market value less costs of selling and readying for sale). These final types of writedowns are adequately addressed by APB 30, and there appears to be no confusion on the application of the rules to these situations, nor any reason to reassess APB 30 at this time.

Most of the writedowns which a firm might normally record would fall into one of these three categories. The writedowns which the FASB would specifically target with their potential regulation would be those which do not. These will be termed "discretionary writedowns" to distinguish them from writedowns covered by APB 30. In these cases, the firm has not made a decision to dispose of the asset in question, but has come to the conclusion (based on its own decision model) that the asset is impaired. It plans to continue to use and depreciate the asset after the writedown, and does not indicate any current intent to sell the asset.

Asset writedowns are often recorded in conjunction with, or under the title of, restructuring charges. Staff Accounting Bulletin No. 67 (SAB 67), issued by the SEC in December 1986, deals with the display of restructuring charges on the income statement. The SEC concluded that restructuring charges should not be afforded extraordinary treatment.
The FASB, however, has included disclosure as a "...separate line item outside continuing operations" as an option in their recently issued Discussion Memorandum, indicating that this treatment may be possible in their final standard. As a matter of policy, the Discussion Memorandum is subjected to SEC staff analysis before issuance, and an SEC observer is present at all Task Force meetings leading to the issuance of the Discussion Memorandum, so it is reasonable to assume that the SEC would rescind SAB 67 in the event that the FASB elected this treatment in a subsequent Statement of Financial Standards. Accordingly, this study looks at discretionary writedowns recorded during a period prior to the issuance of SAB 67 to determine management's preference for the income statement classification. SAB 67 is currently the only regulation which deals specifically with discretionary writedowns.

The financial press has reported numerous cases of anecdotal evidence concerning writedowns and their impact on the financial health of the firm. For instance, several articles indicated that the stock market appears to react positively to asset writedowns. A Wall Street Journal article indicated that the stock price for five large companies rose 4-7 percent three days after the announcement of "write-downs." Another article indicated that a $550 million "writedown" recorded by Warner Lambert in 1985 resulted in a 12 percent stock price increase. Unfortunately, none of these articles examine whether there may be a similar number of writedowns which resulted in stock price decreases. Another common theme in these articles is the surprise nature of the writedown and the failure of the writedown firms to "warn" the users of the financial statements in the periods before the writedown.
None of these articles in the financial press distinguish between "writedowns" which are currently allowed under GAAP and those which have been defined as "discretionary writedowns" in this paper.

Four controlled studies of asset writedowns have been published, all of which include events other than "discretionary writedowns." Two of the interested parties in this topic, the Financial Executives Institute (FEI) and the National Association of Accountants (NAA), have each commissioned studies on asset impairments. Although extensive, neither of the studies looks specifically at only discretionary writedowns; these studies have included writedowns in contemplation of sale and total write-offs. The two published academic studies also include all types of write-offs and writedowns in their samples.

TIMING AND MOTIVATION

At the current time, partial writedowns of impaired long-lived assets are recognized at the discretion of management (and with the subsequent support of their auditors). Thus, it is important to investigate when management decides that the impairment should be recorded and what might motivate them to make such a decision. It is very difficult to assess the factors which might motivate a manager to record any discretionary event because of the manager's inability or reticence to describe the decision process. Often the researcher must draw conclusions from the available data to identify apparent motivating factors in the decision to record transactions. In this paper, earnings management will be examined as a possible explanation for the timing of and motivation for discretionary writedowns.

By observing reported earnings surrounding the period in which the writedown was announced, two possible patterns of earnings management can be identified: income smoothing and "big baths." Income smoothing describes an earnings pattern in which management aspires to maintain a steady and predictable rate of earnings growth.
Management may try to record discretionary gains, losses, or accruals in the period which will best help them to attain their goal of steady growth. This goal may be perceived as desirable because of a management incentive plan structured to reward smooth earnings patterns, or the hope that the market will equate smooth earnings with lower risk and subsequently higher stock prices. Thus, in the case of writedowns, a firm with an impaired asset may attempt to record the loss in a period of higher than normal earnings, or it may time the loss to coincide with a non-discretionary gain (for example, winning a substantial settlement in a lawsuit).

A second form of earnings management has been referred to as the "big bath." Under this scenario, the firm appears to "save up" discretionary losses or accruals and then record several in the same period, or in a period in which the firm has already experienced below normal earnings. Management might undertake a "big bath" to signal investors that "bad times" are behind them and better times will follow. In the case of discretionary asset writedowns, this reasoning is particularly appropriate since a writedown results in decreased depreciation expense in the future. The "big bath" has been mentioned often as a probable motivation for recording asset writedowns.

To determine whether earnings management was a possible motivation in the timing of writedowns in this study, a measure of expected earnings was compared to reported earnings for each firm in the period in which the writedown was recorded. Income smoothing is characterized by periods in which pre-writedown earnings were higher than expected. By recording the writedown, reported earnings were closer to (but not less than) the level expected. A "big bath" is characterized by periods in which pre-writedown earnings were already below expected earnings.
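The two timing patterns just described can be condensed into a simple decision rule. The sketch below is an illustration only; the paper's actual expected-earnings benchmark is not reproduced here:

```python
def classify_writedown_timing(pre_writedown_earnings: float,
                              expected_earnings: float) -> str:
    """Label the apparent earnings-management pattern behind a writedown."""
    if pre_writedown_earnings > expected_earnings:
        # The loss pulls reported earnings back toward (but not below)
        # the expected level.
        return "income smoothing"
    # Earnings were already below expectations, so the loss lands in an
    # already bad period.
    return "big bath"

print(classify_writedown_timing(120.0, 100.0))  # income smoothing
print(classify_writedown_timing(80.0, 100.0))   # big bath
```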
Thus, the firm recorded the writedown in a period in which other losses or accruals were already recorded, or in a period of below normal operations.

FINANCIAL CONSEQUENCES OF DISCRETIONARY WRITEDOWNS

The financial consequences of recording discretionary writedowns of impaired assets can be assessed by observing certain financial indicators at the time of and after the writedown. Three specific indicators have been chosen in this study: (1) the reaction of the stock market, (2) the frequency of subsequent merger or acquisition activity, and (3) the subsequent financial health of the firms as measured by certain key ratios. Each of these three indicators will be discussed separately.

Stock Market Reaction. The average stock returns of the writedown firms (adjusted for cash and stock dividends as well as firm-specific risk) were compared with the market return for a period of sixty days before to sixty days after the announcement of the writedown. On average, there were no significant unusual or excess returns earned by the writedown firms over this period of time. In addition, they performed similarly to a control group of firms matched on the basis of industry and asset size over the same period of time.

These results do not refute the anecdotal evidence that firms announcing discretionary writedowns have experienced unusual stock price increases. Rather, they indicate that for every firm that achieves these positive results, there is a firm for which the market reacts negatively. Thus, a firm which records a writedown is just as likely to experience a negative market reaction as a positive one.

Subsequent Merger or Acquisition Activity. Some people in the business and academic communities believe that a writedown may be an indicator of some sort of major capital structure change such as an acquisition of, acquisition by, or a merger with another similarly sized organization.
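One standard way to test such a contention statistically is a chi-square test on a 2x2 contingency table of writedown versus control firms against subsequent M&A activity. A minimal sketch with hypothetical counts (not the paper's data):

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (no continuity correction)."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts: rows = writedown vs control firms,
# columns = subsequent M&A vs none.
stat = chi_square_2x2(12, 20, 7, 25)
CRITICAL_0_05 = 3.841  # chi-square critical value, 1 degree of freedom
print(f"chi-square = {stat:.3f}")
print("significant at 0.05" if stat > CRITICAL_0_05 else
      "not significant: the difference could be due to chance")
```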
To investigate this contention, each writedown firm was observed from the writedown date through the end of 1987 for evidence of merger or acquisition activity, and the results were compared with those of firms in similar industries and of similar asset size. The results indicate that a greater number of the writedown firms engage in subsequent mergers or acquisitions than do firms in similar industries and of similar size which do not record writedowns. However, a chi-squared test found that the frequency of subsequent merger or acquisition activity is not significantly higher (at a 0.05 level of significance) for the writedown firms. This means that the apparent differences could be due to chance. There is no statistical evidence to support the contention that firms recording discretionary writedowns of impaired assets are more likely to engage in subsequent merger or acquisition activity.

Subsequent Financial Health. To evaluate the comparative financial health of the writedown and control firms in the periods surrounding the writedown, four financial variables, (1) Cash Dividend Growth, (2) Earnings/Price Ratio, (3) Debt to Equity Ratio, and (4) Quarterly Return on Assets, were measured at six points in time over the period of three years before to three years after the writedown. The period in which the writedown was recorded is not included because of its obvious impact on the financial variables and to provide a clearer view of the long-term trend of each variable. In addition, because of the diverse range of relative size of the writedowns, the writedown firms were split into two groups based on the size of the writedown as a percentage of total assets of the firm. Splitting the writedown group in this manner will allow for the observation of the impact of the writedown size on the firm's subsequent financial health.
These two groups will be referred to as large and small writedowns.

SUMMARY AND CONCLUSIONS

There are several conclusions that can be drawn from the results of these empirical tests of discretionary writedowns of impaired assets. First, the number of events located during the test period of 1981 to 1983 is low. This observation may indicate two different situations. First, asset impairments not already covered by GAAP could be infrequent. Second, and probably more realistic, the number of partial impairments may be significant, but, given the paucity of specific regulatory guidance or requirements, the writedowns are not being recorded. This second situation has particular implications for the standard-setting community since it makes it difficult to estimate the impact of potential regulation. The firms recording writedowns tend to be in capital-intensive industries and tend to be traded on the New York Stock Exchange. The writedowns also vary in absolute and relative size, again perhaps reflecting the deficiencies in guidance on what comprises impairment.

Second, the writedowns occur primarily in the fourth quarter of the fiscal year, probably because of the more extensive review due to the budgeting and audit processes occurring in that period. Management tends to view the writedowns as unusual events and highlights them as separate line items after income from operations. However, even in a period preceding the SEC's prohibition of extraordinary income statement treatment, none of the firms disclosed the writedown after continuing operations.

Third, the majority of the firms wrote down their assets in a period of already below normal earnings (a "big bath"), but 25 percent offset the writedown with other gains or unusually high earnings (income smoothing). These results provide support for the contention that writedowns are being used to manage earnings. Finally, the writedowns are not a precursor of improved financial health for the firm.
No significant evidence of positive stock market reaction to the writedown announcement could be found. There is no significant increase in merger or acquisition activity for writedown firms as compared to a control group of firms. In general, control firms outperformed writedown firms on the basis of key financial characteristics. In addition, the larger the writedown as a percentage of assets, the larger the decline in financial health.

For those firms which did decide to record partial writedowns despite the lack of specific regulatory requirements, this study indicates a less than positive picture. Recording the writedown as a fourth-quarter adjustment without "warning," evidence of earnings management, and finally declining financial health which intensifies as the relative size of the writedown increases are all contradictory to the impression often given by the financial press, and perhaps management itself, that the writedown is a positive event which will result in increased stock prices and a healthier firm. For the standard setters, it presents a dilemma, since it is impossible to determine how many firms actually have impaired assets but have so far elected not to record the writedown. The FASB must decide, first of all, the magnitude of the potential problem and then when to require partial writedowns of impaired long-lived assets. Once these decisions have been made, the substantive issues of measurement and disclosure can be addressed.

Linda J. Zucca, David R. Campbell. A Closer Look At Discretionary Writedowns Of Impaired Assets [J]. Accounting Horizons, 1992, 6(3): 30-41.

Translation: A Closer Look at Discretionary Writedowns of Impaired Assets

Over the past decade, the increasing number of asset writedowns and write-offs has attracted the attention of both the financial press and the standard-setting community.