IoT Engineering: Chinese–English Foreign Literature Translation


Chinese–English Foreign Literature Translation (this document contains the English original and the Chinese translation)

Android: A Programmer's Guide

1 What Is Android

1.1 Key Skills & Concepts
● History of embedded device programming
● Explanation of Open Handset Alliance
● First look at the Android home screen

It can be said that, for a while, traditional desktop application developers have been spoiled. This is not to say that traditional desktop application development is easier than other forms of development. However, as traditional desktop application developers, we have had the ability to create almost any kind of application we can imagine. I am including myself in this grouping because I got my start in desktop programming.

One aspect that has made desktop programming more accessible is that we have had the ability to interact with the desktop operating system, and thus interact with any underlying hardware, pretty freely (or at least with minimal exceptions). This kind of freedom to program independently, however, has never really been available to the small group of programmers who dared to venture into the murky waters of cell phone development.

NOTE: I refer to two different kinds of developers in this discussion: traditional desktop application developers, who work in almost any language and whose end product, applications, are built to run on any "desktop" operating system; and Android developers, Java developers who develop for the Android platform. This is not for the purposes of saying one is by any means better or worse than the other. Rather, the distinction is made for purposes of comparing the development styles and tools of desktop operating system environments to the mobile operating system environment.

1.2 Brief History of Embedded Device Programming

For a long time, cell phone developers comprised a small sect of a slightly larger group of developers known as embedded device developers. Seen as a less "glamorous" sibling to desktop—and later web—development, embedded device development typically got the proverbial short end of the stick as far as hardware and operating system features, because embedded device manufacturers were notoriously stingy on feature support. Embedded device manufacturers typically needed to guard their hardware secrets closely, so they gave embedded device developers few libraries to call when trying to interact with a specific device.

Embedded devices differ from desktops in that an embedded device is typically a "computer on a chip." For example, consider your standard television remote control; it is not really seen as an overwhelming achievement of technological complexity. When any button is pressed, a chip interprets the signal in a way that has been programmed into the device. This allows the device to know what to expect from the input device (keypad), and how to respond to those commands (for example, turn on the television). This is a simple form of embedded device programming. However, believe it or not, simple devices such as these are definitely related to the roots of early cell phone devices and development.

Most embedded devices ran (and in some cases still run) proprietary operating systems. The reason for choosing to create a proprietary operating system rather than use any consumer system was really a product of necessity. Simple devices did not need very robust and optimized operating systems.

As a product of device evolution, many of the more complex embedded devices, such as early PDAs, household security systems, and GPSs, moved to somewhat standardized operating system platforms about five years ago.
Small-footprint operating systems such as Linux, or even an embedded version of Microsoft Windows, have become more prevalent on many embedded devices. Around this time in device evolution, cell phones branched from other embedded devices onto their own path. This branching is evident when you examine their architecture.

Nearly since their inception, cell phones have been fringe devices insofar as they run on proprietary software—software that is owned and controlled by the manufacturer, and is almost always considered to be a "closed" system. The practice of manufacturers using proprietary operating systems began more out of necessity than any other reason. That is, cell phone manufacturers typically used hardware that was completely developed in-house, or at least hardware that was specifically developed for the purposes of running cell phone equipment. As a result, there were no openly available, off-the-shelf software packages or solutions that would reliably interact with their hardware. Since the manufacturers also wanted to guard very closely their hardware trade secrets, some of which could be revealed by allowing access to the software level of the device, the common practice was, and in most cases still is, to use completely proprietary and closed software to run their devices. The downside to this is that anyone who wanted to develop applications for cell phones needed to have intimate knowledge of the proprietary environment within which it was to run. The solution was to purchase expensive development tools directly from the manufacturer. This isolated many of the "homebrew" developers.

NOTE: A growing culture of homebrew developers has embraced cell phone application development. The term "homebrew" refers to the fact that these developers typically do not work for a cell phone development company and generally produce small, one-off products on their own time.

Another, more compelling "necessity" that kept cell phone development out of the hands of the everyday developer was the hardware manufacturers' solution to the "memory versus need" dilemma. Until recently, cell phones did little more than execute and receive phone calls, track your contacts, and possibly send and receive short text messages; not really the "Swiss army knives" of technology they are today. Even as late as 2002, cell phones with cameras were not commonly found in the hands of consumers. By 1997, small applications such as calculators and games (Tetris, for example) crept their way onto cell phones, but the overwhelming function was still that of a phone dialer itself. Cell phones had not yet become the multiuse, multifunction personal tools they are today. No one yet saw the need for Internet browsing, MP3 playing, or any of the multitudes of functions we are accustomed to using today. It is possible that the cell phone manufacturers of 1997 did not fully perceive the need consumers would have for an all-in-one device. However, even if the need was present, a lack of device memory and storage capacity was an even bigger obstacle to overcome. More people may have wanted their devices to be all-in-one tools, but manufacturers still had to climb the memory hurdle.

To put the problem simply, it takes memory to store and run applications on any device, cell phones included. Cell phones, as a device, until recently did not have the amount of memory available to them that would facilitate the inclusion of "extra" programs. Within the last two years, the price of memory has reached very low levels.
Device manufacturers now have the ability to include more memory at lower prices. Many cell phones now have more standard memory than the average PC had in the mid-1990s. So, now that we have the need, and the memory, we can all jump in and develop cool applications for cell phones around the world, right? Not exactly.

Device manufacturers still closely guard the operating systems that run on their devices. While a few have opened up to the point where they will allow some Java-based applications to run within a small environment on the phone, many do not allow this. Even the systems that do allow some Java apps to run do not allow the kind of access to the "core" system that standard desktop developers are accustomed to having.

1.3 Open Handset Alliance and Android

This barrier to application development began to crumble in November of 2007 when Google, under the Open Handset Alliance, released Android. The Open Handset Alliance is a group of hardware and software developers, including Google, NTT DoCoMo, Sprint Nextel, and HTC, whose goal is to create a more open cell phone environment. The first product to be released under the alliance is the mobile device operating system, Android.

With the release of Android, Google made available a host of development tools and tutorials to aid would-be developers onto the new system. Help files, the platform software development kit (SDK), and even a developers' community can be found at Google's Android website. This site should be your starting point, and I highly encourage you to visit the site.

NOTE: Google, in promoting the new Android operating system, even went as far as to create a $10 million contest looking for new and exciting Android applications.

While cell phones running Linux, Windows, and even PalmOS are easy to find, as of this writing, no hardware platforms have been announced for Android to run on. HTC, LG Electronics, Motorola, and Samsung are members of the Open Handset Alliance, under which Android has been released, so we can only hope that they have plans for a few Android-based devices in the near future. With its release in November 2007, the system itself is still in a software-only beta. This is good news for developers because it gives us a rare advance look at a future system and a chance to begin developing applications that will run as soon as the hardware is released.

NOTE: This strategy clearly gives the Open Handset Alliance a big advantage over other cell phone operating system developers, because there could be an uncountable number of applications available immediately for the first devices released to run Android.

Introduction to Android

Android, as a system, is a Java-based operating system that runs on the Linux 2.6 kernel. The system is very lightweight and full featured. Android applications are developed using Java and can be ported rather easily to the new platform. If you have not yet downloaded Java or are unsure about which version you need, I detail the installation of the development environment in Chapter 2. Other features of Android include an accelerated 3-D graphics engine (based on hardware support), database support powered by SQLite, and an integrated web browser.

If you are familiar with Java programming or are an OOP developer of any sort, you are likely used to programmatic user interface (UI) development—that is, UI placement which is handled directly within the program code. Android, while recognizing and allowing for programmatic UI development, also supports the newer, XML-based UI layout.
XML UI layout is a fairly new concept to the average desktop developer. I will cover both the XML UI layout and the programmatic UI development in the supporting chapters of this book.

One of the more exciting and compelling features of Android is that, because of its architecture, third-party applications—including those that are "home grown"—are executed with the same system priority as those that are bundled with the core system. This is a major departure from most systems, which give embedded system apps a greater execution priority than the thread priority available to apps created by third-party developers. Also, each application is executed within its own thread using a very lightweight virtual machine.

Aside from the very generous SDK and the well-formed libraries that are available to us to develop with, the most exciting feature for Android developers is that we now have access to anything the operating system has access to. In other words, if you want to create an application that dials the phone, you have access to the phone's dialer; if you want to create an application that utilizes the phone's internal GPS (if equipped), you have access to it. The potential for developers to create dynamic and intriguing applications is now wide open.

On top of all the features that are available from the Android side of the equation, Google has thrown in some very tantalizing features of its own. Developers of Android applications will be able to tie their applications into existing Google offerings such as Google Maps and the omnipresent Google Search. Suppose you want to write an application that pulls up a Google map of where an incoming call is emanating from, or you want to be able to store common search results with your contacts; the doors of possibility have been flung wide open with Android.

Chapter 2 begins your journey to Android development. You will learn the how's and why's of using specific development environments or integrated development environments (IDE), and you will download and install the Java IDE Eclipse.

2 Application: Hello World

2.1 Key Skills & Concepts
● Creating new Android projects
● Working with Views
● Using a TextView
● Modifying the main.xml file
● Running applications on the Android Emulator

In this chapter, you will be creating your first Android Activity. This chapter examines the application-building process from start to finish. I will show you how to create an Android project in Eclipse, add code to the initial files, and run the finished application in the Android Emulator. The resulting application will be a fully functioning program running in an Android environment.

Actually, as you move through this chapter, you will be creating more than one Android Activity. Computer programming tradition dictates that your first application be the typical Hello World! application, so in the first section you will create a standard Hello World! application with just a blank background and the "Hello World!" text. Then, for the sake of enabling you to get to know the language better, the next section explains in detail the files automatically created by Android for your Hello World! application. You will create two iterations of this Activity, each using different techniques for displaying information to the screen. You will also create two different versions of a Hello World! application that will display an image that delivers the "Hello World!" message. This will give you a good introduction to the controls and inner workings of Android.
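As a rough preview, the finished Hello World! Activity amounts to little more than the following minimal sketch. The package name is hypothetical and the files actually generated by the New Android Project wizard may differ slightly in detail; treat this as illustrative rather than the exact generated code.

res/layout/main.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent">
        <!-- The TextView that displays the greeting on the otherwise blank screen -->
        <TextView
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:text="Hello World!" />
    </LinearLayout>

HelloWorldText.java:

    package com.example.helloworldtext; // hypothetical package name

    import android.app.Activity;
    import android.os.Bundle;

    public class HelloWorldText extends Activity {
        // Called when the Activity is first created.
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Display the XML layout defined in res/layout/main.xml
            setContentView(R.layout.main);
        }
    }

Run in the Android Emulator, this simply shows the "Hello World!" text on a blank background.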
NOTE: You will often see "application" and "Activity" used interchangeably. The difference between the two is that an application can be composed of multiple Activities, but one application must have at least one Activity. Each "window" or screen of your application is a separate Activity. Therefore, if you create a fairly simple application with only one screen of data (like the Hello World! application in this chapter), that will be one Activity. In future chapters you will create applications with multiple Activities.

To make sure that you get a good overall look at programming in Android, in Chapter 6 you will create both of these applications in the Android SDK command-line environment for Microsoft Windows and Linux. In other words, this chapter covers the creation process in Eclipse, and Chapter 6 covers the creation process using the command-line tools. Therefore, before continuing, you should check that your Eclipse environment is correctly configured. Review the steps in Chapter 3 for setting the PATH statement for the Android SDK. You should also ensure that the JRE is correctly in your PATH statement.

TIP: If you have configuration-related issues while attempting to work with any of the command-line examples, try referring to the configuration steps in Chapters 2 and 3, and look at the Android SDK documentation.

2.2 Creating Your First Android Project in Eclipse

To start your first Android project, open Eclipse. When you open Eclipse for the first time, it opens to an empty development environment (see Figure 5-1), which is where you want to begin. Your first task is to set up and name the workspace for your application. Choose File | New | Android Project, which will launch the New Android Project wizard.

CAUTION: Do not select Java Project from the New menu. While Android applications are written in Java, and you are doing all of your development in Java projects, this option will create a standard Java application. Selecting Android Project enables you to create Android-specific applications. If you do not see the option for Android Project, this indicates that the Android plugin for Eclipse was not fully or correctly installed. Review the procedure in Chapter 3 for installing the Android plugin for Eclipse to correct this.

2.3 The New Android Project wizard creates two things for you

A shell application that ties into the Android SDK, using the android.jar file, and ties the project into the Android Emulator. This allows you to code using all of the Android libraries and packages, and also lets you debug your applications in the proper environment.

Your first shell files for the new project. These shell files contain some of the vital application blocks upon which you will be building your programs. In much the same way as creating a Microsoft .NET application in Visual Studio generates some Windows-created program code in your files, using the Android Project wizard in Eclipse generates your initial program files and some Android-created code. In addition, the New Android Project wizard contains a few options, shown next, that you must set to initiate your Android project. For the Project Name field, for purposes of this example, use the title HelloWorldText. This name sufficiently distinguishes this Hello World!
project from the others that you will be creating in this chapter.

In the Contents area, keep the default selections: the Create New Project in Workspace radio button should be selected and the Use Default Location check box should be checked. This will allow Eclipse to create your project in your default workspace directory. The advantage of keeping the default options is that your projects are kept in a central location, which makes ordering, managing, and finding these projects quite easy. For example, if you are working in a Unix-based environment, this path points to your $HOME directory. If you are working in a Microsoft Windows environment, the workspace path will be C:/Users/<username>/workspace, as shown in the previous illustration. However, for any number of reasons, you may want to uncheck the Use Default Location check box and select a different location for your project. One reason you may want to specify a different location here is simply if you want to choose a location for this specific project that is separate from other Android projects. For example, you may want to keep the projects that you create in this book in a different location from projects that you create in the future on your own. If so, simply override the Location option to specify your own custom location directory for this project.

3 Application Fundamentals

Android applications are written in the Java programming language. The compiled Java code — along with any data and resource files required by the application — is bundled by the aapt tool into an Android package, an archive file marked by an .apk suffix. This file is the vehicle for distributing the application and installing it on mobile devices; it's the file users download to their devices. All the code in a single .apk file is considered to be one application.

In many ways, each Android application lives in its own world:
1. By default, every application runs in its own Linux process. Android starts the process when any of the application's code needs to be executed, and shuts down the process when it's no longer needed and system resources are required by other applications.
2. Each process has its own virtual machine (VM), so application code runs in isolation from the code of all other applications.
3. By default, each application is assigned a unique Linux user ID. Permissions are set so that the application's files are visible only to that user and only to the application itself — although there are ways to export them to other applications as well.

It's possible to arrange for two applications to share the same user ID, in which case they will be able to see each other's files. To conserve system resources, applications with the same ID can also arrange to run in the same Linux process, sharing the same VM.
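As an illustration of the shared-user-ID arrangement just described, the following is a minimal manifest sketch; the package name and shared ID shown here are hypothetical, not taken from the text above. Each cooperating application declares the same android:sharedUserId value in its AndroidManifest.xml (and, in practice, both applications must also be signed with the same certificate for the request to be honoured):

    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="com.example.appone"
        android:sharedUserId="com.example.shareduid">
        <!-- A second application would use a different package attribute
             but the identical android:sharedUserId value. -->
        <application android:label="App One">
        </application>
    </manifest>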
3.1 Application Components

A central feature of Android is that one application can make use of elements of other applications (provided those applications permit it). For example, if your application needs to display a scrolling list of images and another application has developed a suitable scroller and made it available to others, you can call upon that scroller to do the work, rather than develop your own. Applications have four types of components:

(1) Activities

An activity presents a visual user interface for one focused endeavor the user can undertake. For example, an activity might present a list of menu items users can choose from or it might display photographs along with their captions. A text messaging application might have one activity that shows a list of contacts to send messages to, a second activity to write the message to the chosen contact, and other activities to review old messages or change settings. Though they work together to form a cohesive user interface, each activity is independent of the others. Each one is implemented as a subclass of the Activity base class.

An application might consist of just one activity or, like the text messaging application just mentioned, it may contain several. What the activities are, and how many there are, depends, of course, on the application and its design. Typically, one of the activities is marked as the first one that should be presented to the user when the application is launched. Moving from one activity to another is accomplished by having the current activity start the next one.

Each activity is given a default window to draw in. Typically, the window fills the screen, but it might be smaller than the screen and float on top of other windows. An activity can also make use of additional windows — for example, a pop-up dialog that calls for a user response in the midst of the activity, or a window that presents users with vital information when they select a particular item on-screen.

The visual content of the window is provided by a hierarchy of views — objects derived from the base View class. Each view controls a particular rectangular space within the window. Parent views contain and organize the layout of their children. Leaf views (those at the bottom of the hierarchy) draw in the rectangles they control and respond to user actions directed at that space. Thus, views are where the activity's interaction with the user takes place. For example, a view might display a small image and initiate an action when the user taps that image. Android has a number of ready-made views that you can use — including buttons, text fields, scroll bars, menu items, check boxes, and more.

A view hierarchy is placed within an activity's window by the Activity.setContentView() method. The content view is the View object at the root of the hierarchy. (See the separate User Interface document for more information on views and the hierarchy.)

(2) Services

A service doesn't have a visual user interface, but rather runs in the background for an indefinite period of time. For example, a service might play background music as the user attends to other matters, or it might fetch data over the network or calculate something and provide the result to activities that need it. Each service extends the Service base class.

A prime example is a media player playing songs from a play list. The player application would probably have one or more activities that allow the user to choose songs and start playing them. However, the music playback itself would not be handled by an activity because users will expect the music to keep playing even after they leave the player and begin something different. To keep the music going, the media player activity could start a service to run in the background. The system would then keep the music playback service running even after the activity that started it leaves the screen.

It's possible to connect to (bind to) an ongoing service (and start the service if it's not already running). While connected, you can communicate with the service through an interface that the service exposes.
For the music service, this interface might allow users to pause, rewind, stop, and restart the playback.

Like activities and the other components, services run in the main thread of the application process. So that they won't block other components or the user interface, they often spawn another thread for time-consuming tasks (like music playback). See Processes and Threads, later.

(3) Broadcast receivers

A broadcast receiver is a component that does nothing but receive and react to broadcast announcements. Many broadcasts originate in system code — for example, announcements that the time zone has changed, that the battery is low, that a picture has been taken, or that the user changed a language preference. Applications can also initiate broadcasts — for example, to let other applications know that some data has been downloaded to the device and is available for them to use.

An application can have any number of broadcast receivers to respond to any announcements it considers important. All receivers extend the BroadcastReceiver base class.

Broadcast receivers do not display a user interface. However, they may start an activity in response to the information they receive, or they may use the NotificationManager to alert the user. Notifications can get the user's attention in various ways — flashing the backlight, vibrating the device, playing a sound, and so on. They typically place a persistent icon in the status bar, which users can open to get the message.

(4) Content providers

A content provider makes a specific set of the application's data available to other applications. The data can be stored in the file system, in an SQLite database, or in any other manner that makes sense. The content provider extends the ContentProvider base class to implement a standard set of methods that enable other applications to retrieve and store data of the type it controls. However, applications do not call these methods directly. Rather they use a ContentResolver object and call its methods instead. A ContentResolver can talk to any content provider; it cooperates with the provider to manage any interprocess communication that's involved. See the separate Content Providers document for more information on using content providers.

Whenever there's a request that should be handled by a particular component, Android makes sure that the application process of the component is running, starting it if necessary, and that an appropriate instance of the component is available, creating the instance if necessary.

3.2 Activating components: intents

Content providers are activated when they're targeted by a request from a ContentResolver. The other three components — activities, services, and broadcast receivers — are activated by asynchronous messages called intents. An intent is an Intent object that holds the content of the message. For activities and services, it names the action being requested and specifies the URI of the data to act on, among other things. For example, it might convey a request for an activity to present an image to the user or let the user edit some text. For broadcast receivers, the Intent object names the action being announced. For example, it might announce to interested parties that the camera button has been pressed.
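To make intent-based activation concrete, here is a minimal sketch of how the three intent-activated component types might be triggered from inside an Activity. The MusicService class and the broadcast action string are hypothetical names invented for this example; only the Intent, startActivity, startService, and sendBroadcast calls themselves are standard Android API.

    import android.app.Activity;
    import android.content.Intent;
    import android.net.Uri;
    import android.os.Bundle;

    public class IntentExamples extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            // Ask some activity to display (ACTION_VIEW) the data named by a URI.
            startActivity(new Intent(Intent.ACTION_VIEW, Uri.parse("http://www.example.com")));

            // Start a background service (MusicService is a hypothetical Service subclass).
            startService(new Intent(this, MusicService.class));

            // Announce something to interested broadcast receivers;
            // the action string is made up for this example.
            sendBroadcast(new Intent("com.example.action.DATA_DOWNLOADED"));
        }
    }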


Foreign Literature Translation (PDF)

where Loss_i-j is the real loss on the line from bus i to bus j, x_i-j,Gk is the fraction of each line's loss assigned to generator Gk, P_Dk is the load at bus k, y_j,Gk is the fraction of each load assigned to generator Gk, and Ng is the number of generators in the system. By casting the transmission-loss and load-flow allocation problem as a GA problem, x_i-j,Gk and y_j,Gk can be treated as the variables of an optimization problem. The GA then finds the optimal values of these fractions for every line and every load. Since the initial population is randomly ...
(5) (6)
where β is also a random value between 0 and 1. The final step is to complete the crossover with the remainder of the chromosome, as follows:
offspring1 = [Pm1, Pm2, …, Pnew1, …, PdNpar]    (7)
offspring2 = [Pd1, Pd2, …, Pnew2, …, PmNpar]    (8)
[6] [3] [4]

The concept is based on the generator's domains, commons, and links. Before power tracing can be carried out, these need to be defined first.
This paper is organized as follows: the concept of the GA is presented at a glance in the next section, followed by the GA technique for loss and load-flow allocation, then the results and discussion, and finally the conclusions are stated at the end of the paper.
II. Genetic Algorithm
A genetic algorithm (GA) is a stochastic method that applies models of biological processes to solve optimization problems. A GA allows a population made up of individuals to evolve, under specified rules, to a state that maximizes "fitness" or minimizes a cost function. The technique was originally developed by [9]. A GA can be applied using either a binary or a continuous representation. In this paper, the continuous GA is used because of its advantage in terms of the precision with which continuous parameters are represented.

A. Representation

Initially, GAs worked with binary-encoded digits (i.e., 0s and 1s) concatenated together into a string. However, this paper adopts continuous floating-point numbers as the representation to solve the problem. If a chromosome has parameters given by P1 ...

Although selection and crossover are applied to the chromosomes in every generation to obtain a new and better set of solutions, occasionally they may become overzealous and lose some useful information. To protect against this unrecoverable loss, or against premature convergence, mutation is applied. Mutation randomly alters a parameter with a small mutation probability (0–10%). Multiplying the mutation rate by the total number of parameters gives the number of parameters that should be mutated. A mutated parameter is replaced with a new random number.
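A rough sketch of the continuous-GA crossover and mutation operators described in this fragment is given below. The single crossover point, the blend formula for Pnew1 and Pnew2 (whose numbered equations (5) and (6) are not reproduced above), the tail swap, and the parameter bounds are assumptions filled in for illustration, not details taken from the text.

    import java.util.Random;

    public class ContinuousGaOperators {
        private static final Random rnd = new Random();

        // Blend crossover: pick a crossover parameter, mix it between the two
        // parents with a random beta in [0, 1], then swap the rest of the
        // chromosome between the two offspring, as in equations (7)-(8).
        static double[][] crossover(double[] mum, double[] dad) {
            int n = mum.length;
            int alpha = rnd.nextInt(n);      // crossover parameter index (assumed)
            double beta = rnd.nextDouble();  // blending value between 0 and 1
            double[] off1 = mum.clone();
            double[] off2 = dad.clone();
            off1[alpha] = mum[alpha] - beta * (mum[alpha] - dad[alpha]);
            off2[alpha] = dad[alpha] + beta * (mum[alpha] - dad[alpha]);
            for (int i = alpha + 1; i < n; i++) { // crossover of the rest of the chromosome
                off1[i] = dad[i];
                off2[i] = mum[i];
            }
            return new double[][] { off1, off2 };
        }

        // Mutation: replace a small fraction of randomly chosen parameters in the
        // population with new random values drawn from the allowed range [lo, hi].
        static void mutate(double[][] population, double mutationRate, double lo, double hi) {
            int totalParams = population.length * population[0].length;
            int mutations = (int) Math.round(mutationRate * totalParams);
            for (int k = 0; k < mutations; k++) {
                int row = rnd.nextInt(population.length);
                int col = rnd.nextInt(population[0].length);
                population[row][col] = lo + rnd.nextDouble() * (hi - lo);
            }
        }
    }

In the loss-allocation application described earlier, each chromosome would hold candidate fractions x and y, and the cost function would measure how closely they reproduce the actual line losses and loads.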

Fire-Safe Building Design: Chinese–English Foreign Literature Translation


Chinese–English Foreign Literature Translation (this document contains the English original and the Chinese translation)

Foreign literature: Designing Against Fire of Building

ABSTRACT: This paper considers the design of buildings for fire safety. It is found that fire and the associated effects on buildings is significantly different to other forms of loading such as gravity live loads, wind and earthquakes and their respective effects on the building structure. Fire events are derived from the human activities within buildings or from the malfunction of mechanical and electrical equipment provided within buildings to achieve a serviceable environment. It is therefore possible to directly influence the rate of fire starts within buildings by changing human behaviour, improved maintenance and improved design of mechanical and electrical systems. Furthermore, should a fire develop, it is possible to directly influence the resulting fire severity by the incorporation of fire safety systems such as sprinklers and to provide measures within the building to enable safer egress from the building. The ability to influence the rate of fire starts and the resulting fire severity is unique to the consideration of fire within buildings since other loads such as wind and earthquakes are directly a function of nature. The possible approaches for designing a building for fire safety are presented using an example of a multi-storey building constructed over a railway line. The design of both the transfer structure supporting the building over the railway and the levels above the transfer structure are considered in the context of current regulatory requirements. The principles and assumptions associated with various approaches are discussed.

1 INTRODUCTION

Other papers presented in this series consider the design of buildings for gravity loads, wind and earthquakes. The design of buildings against such load effects is to a large extent covered by engineering-based standards referenced by the building regulations. This is not the case, to nearly the same extent, in the case of fire. Rather, it is building regulations such as the Building Code of Australia (BCA) that directly specify most of the requirements for fire safety of buildings, with reference being made to Standards such as AS3600 or AS4100 for methods for determining the fire resistance of structural elements.

The purpose of this paper is to consider the design of buildings for fire safety from an engineering perspective (as is currently done for other loads such as wind or earthquakes), whilst at the same time putting such approaches in the context of the current regulatory requirements. At the outset, it needs to be noted that designing a building for fire safety is far more than simply considering the building structure and whether it has sufficient structural adequacy. This is because fires can have a direct influence on occupants via smoke and heat and can grow in size and severity unlike other effects imposed on the building. Notwithstanding these comments, the focus of this paper will be largely on design issues associated with the building structure.

Two situations associated with a building are used for the purpose of discussion. The multi-storey office building shown in Figure 1 is supported by a transfer structure that spans over a set of railway tracks. It is assumed that a wide range of rail traffic utilises these tracks including freight and diesel locomotives.
The first situation to be considered from a fire safety perspective is the transfer structure. This is termed Situation 1 and the key questions are: what level of fire resistance is required for this transfer structure and how can this be determined? This situation has been chosen since it clearly falls outside the normal regulatory scope of most building regulations. An engineering solution, rather than a prescriptive one, is required. The second fire situation (termed Situation 2) corresponds to a fire within the office levels of the building and is covered by building regulations. This situation is chosen because it will enable a discussion of engineering approaches and how these interface with the building regulations – since both engineering and prescriptive solutions are possible.

2 UNIQUENESS OF FIRE

2.1 Introduction

Wind and earthquakes can be considered to be "natural" phenomena over which designers have no control except perhaps to choose the location of buildings more carefully on the basis of historical records and to design buildings to resist sufficiently high loads or accelerations for the particular location. Dead and live loads in buildings are the result of gravity. All of these loads are variable and it is possible (although generally unlikely) that the loads may exceed the resistance of the critical structural members, resulting in structural failure.

The nature and influence of fires in buildings are quite different to those associated with other "loads" to which a building may be subjected. The essential differences are described in the following sections.

2.2 Origin of Fire

In most situations (ignoring bush fires), fire originates from human activities within the building or the malfunction of equipment placed within the building to provide a serviceable environment. It follows therefore that it is possible to influence the rate of fire starts by influencing human behaviour, limiting and monitoring human behaviour and improving the design of equipment and its maintenance. This is not the case for the usual loads applied to a building.

2.3 Ability to Influence

Since wind and earthquake are directly functions of nature, it is not possible to influence such events to any extent. One has to anticipate them and design accordingly. It may be possible to influence the level of live load in a building by conducting audits and placing restrictions on contents. However, in the case of a fire start, there are many factors that can be brought to bear to influence the ultimate size of the fire and its effect within the building. It is known that occupants within a building will often detect a fire and deal with it before it reaches a significant size. It is estimated that less than one fire in five (Favre, 1996) results in a call to the fire brigade and, for fires reported to the fire brigade, the majority will be limited to the room of fire origin. In occupied spaces, olfactory cues (smell) provide powerful evidence of the presence of even a small fire.
The addition of a functional smoke detection system will further improve the likelihood of detection and of action being taken by the occupants. Fire fighting equipment, such as extinguishers and hose reels, is generally provided within buildings for the use of occupants, and many organisations provide training for staff in respect of the use of such equipment. The growth of a fire can also be limited by automatic extinguishing systems such as sprinklers, which can be designed to have high levels of effectiveness. Fires can also be limited by the fire brigade depending on the size and location of the fire at the time of arrival.

2.4 Effects of Fire

The structural elements in the vicinity of the fire will experience the effects of heat. The temperatures within the structural elements will increase with time of exposure to the fire, the rate of temperature rise being dictated by the thermal resistance of the structural element and the severity of the fire. The increase in temperatures within a member will result in both thermal expansion and, eventually, a reduction in the structural resistance of the member. Differential thermal expansion will lead to bowing of a member. Significant axial expansion will be accommodated in steel members by either overall or local buckling or yielding of localised regions. These effects will be detrimental for columns but for beams forming part of a floor system may assist in the development of other load resisting mechanisms (see Section 4.3.5).

With the exception of the development of forces due to restraint of thermal expansion, fire does not impose loads on the structure but rather reduces stiffness and strength. Such effects are not instantaneous but are a function of time and this is different to the effects of loads such as earthquake and wind that are more or less instantaneous. Heating effects associated with a fire will not be significant, or the rate of loss of capacity will be slowed, if:
(a) the fire is extinguished (e.g. an effective sprinkler system)
(b) the fire is of insufficient severity – insufficient fuel, and/or
(c) the structural elements have sufficient thermal mass and/or insulation to slow the rise in internal temperature

Fire protection measures such as providing sufficient axis distance and dimensions for concrete elements, and sufficient insulation thickness for steel elements, are examples of (c). These are illustrated in Figure 2. The two situations described in the introduction are now considered.

3 FIRE WITHIN BUILDINGS

3.1 Fire Safety Considerations

The implications of fire within the occupied parts of the office building (Figure 1) (Situation 2) are now considered. Fire statistics for office buildings show that about one fatality is expected in an office building for every 1000 fires reported to the fire brigade. This is an order of magnitude less than the fatality rate associated with apartment buildings. More than two thirds of fires occur during occupied hours and this is due to the greater human activity and the greater use of services within the building. It is twice as likely that a fire that commences out of normal working hours will extend beyond the enclosure of fire origin.

A relatively small fire can generate large quantities of smoke within the floor of fire origin. If the floor is of open-plan construction with few partitions, the presence of a fire during normal occupied hours is almost certain to be detected through the observation of smoke on the floor.
The presence of full height partitions across the floor will slow the spread of smoke and possibly also the speed at which the occupants detect the fire. Any measures aimed at improving housekeeping, fire awareness and fire response will be beneficial in reducing the likelihood of major fires during occupied hours. For multi-storey buildings, smoke detection systems and alarms are often provided to give "automatic" detection and warning to the occupants. An alarm signal is also transmitted to the fire brigade.

Should the fire not be able to be controlled by the occupants on the fire floor, they will need to leave the floor of fire origin via the stairs. Stair enclosures may be designed to be fire-resistant but this may not be sufficient to keep the smoke out of the stairs. Many buildings incorporate stair pressurisation systems whereby positive airflow is introduced into the stairs upon detection of smoke within the building. However, this increases the forces required to open the stair doors and makes it increasingly difficult to access the stairs. It is quite likely that excessive door opening forces will exist (Fazio et al, 2006).

From a fire perspective, it is common to consider that a building consists of enclosures formed by the presence of walls and floors. An enclosure that has sufficiently fire-resistant boundaries (i.e. walls and floors) is considered to constitute a fire compartment and to be capable of limiting the spread of fire to an adjacent compartment. However, the ability of such boundaries to restrict the spread of fire can be severely limited by the need to provide natural lighting (windows) and access openings between the adjacent compartments (doors and stairs). Fire spread via the external openings (windows) is a distinct possibility given a fully developed fire. Limiting the window sizes and geometry can reduce but not eliminate the possibility of vertical fire spread. By far the most effective measure in limiting fire spread, other than the presence of occupants, is an effective sprinkler system that delivers water to a growing fire, rapidly reducing the heat being generated and virtually extinguishing it.

3.2 Estimating Fire Severity

In the absence of measures to extinguish developing fires, or should such systems fail, severe fires can develop within buildings. In fire engineering literature, the term "fire load" refers to the quantity of combustibles within an enclosure and not the loads (forces) applied to the structure during a fire. Similarly, fire load density refers to the quantity of fuel per unit area. It is normally expressed in terms of MJ/m² or kg/m² of wood equivalent. Surveys of combustibles for various occupancies (i.e. offices, retail, hospitals, warehouses, etc) have been undertaken and a good summary of the available data is given in FCRC (1999). As would be expected, the fire load density is highly variable. Publications such as the International Fire Engineering Guidelines (2005) give fire load data in terms of the mean and 80th percentile. The latter level of fire load density is sometimes taken as the characteristic fire load density and is sometimes taken as being distributed according to a Gumbel distribution (Schleich et al, 1999).

The rate at which heat is released within an enclosure is termed the heat release rate (HRR) and normally expressed in megawatts (MW). The application of sufficient heat to a combustible material results in the generation of gases some of which are combustible.
This process is called pyrolisation. Upon coming into contact with sufficient oxygen these gases ignite, generating heat. The rate of burning (and therefore of heat generation) is therefore dependent on the flow of air to the gases generated by the pyrolising fuel. This flow is influenced by the shape of the enclosure (aspect ratio), and the position and size of any potential openings. It is found from experiments with single openings in approximately cubic enclosures that the rate of burning is directly proportional to A√h, where A is the area of the opening and h is the opening height. It is known that for deep enclosures with single openings that burning will occur initially closest to the opening, moving back into the enclosure once the fuel closest to the opening is consumed (Thomas et al, 2005). Significant temperature variations throughout such enclosures can be expected.

The use of the word 'opening' in relation to real building enclosures refers to any openings present around the walls, including doors that are left open and any windows containing non fire-resistant glass. It is presumed that such glass breaks in the event of development of a significant fire. If the windows could be prevented from breaking and other sources of air to the enclosure limited, then the fire would be prevented from becoming a severe fire.

Various methods have been developed for determining the potential severity of a fire within an enclosure. These are described in SFPE (2004). The predictions of these methods are variable and are mostly based on estimating a representative heat release rate (HRR) and the proportion of total fuel ς likely to be consumed during the primary burning stage (Figure 4). Further studies of enclosure fires are required to assist with the development of improved models, as the behaviour is very complex.
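As a simple illustration of the ventilation relationship quoted above (only the proportionality itself is taken from the text; the numerical comparison below is a hypothetical example):

    \dot{m} \propto A \sqrt{h}

so, for an opening of fixed height, doubling the opening area A doubles the burning rate, whereas for a fixed opening area, doubling the opening height h increases the burning rate only by a factor of about √2 ≈ 1.4.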
3.3 Role of the Building Structure

If the design objectives are to provide an adequate level of safety for the occupants and protection of adjacent properties from damage, then the structural adequacy of the building in fire need only be sufficient to allow the occupants to exit the building and for the building to ultimately deform in a way that does not lead to damage or fire spread to a building located on an adjacent site. These objectives are those associated with most building regulations including the Building Code of Australia (BCA). There could be other objectives, including protection of the building against significant damage. In considering these various objectives, the following should be taken into account when considering the fire resistance of the building structure.

3.3.1 Non-Structural Consequences

Since fire can produce smoke and flame, it is important to ask whether these outcomes will threaten life safety within other parts of the building before the building is compromised by a loss of structural adequacy. Is search and rescue by the fire brigade not feasible given the likely extent of smoke? Will the loss of use of the building due to a severe fire result in major property and income loss? If the answer to these questions is in the affirmative, then it may be necessary to minimise the occurrence of a significant fire rather than simply assuming that the building structure needs to be designed for high levels of fire resistance. A low-rise shopping centre with levels interconnected by large voids is an example of such a situation.

3.3.2 Other Fire Safety Systems

The presence of other systems (e.g. sprinklers) within the building to minimise the occurrence of a serious fire can greatly reduce the need for the structural elements to have high levels of fire resistance. In this regard, the uncertainties of all fire-safety systems need to be considered. Irrespective of whether the fire safety system is the sprinkler system, stair pressurisation, compartmentation or the system giving the structure a fire-resistance level (e.g. concrete cover), there is an uncertainty of performance. Uncertainty data is available for sprinkler systems (because it is relatively easy to collect) but is not readily available for the other fire safety systems. This sometimes results in the designers and building regulators considering that only sprinkler systems are subject to uncertainty. In reality, it would appear that sprinkler systems have a high level of performance and can be designed to have very high levels of reliability.

3.3.3 Height of Building

It takes longer for a tall building to be evacuated than a short building and therefore the structure of a tall building may need to have a higher level of fire resistance. The implications of collapse of tall buildings on adjacent properties are also greater than for buildings of only several storeys.

3.3.4 Limited Extent of Burning

If the likely extent of burning is small in comparison with the plan area of the building, then the fire cannot have a significant impact on the overall stability of the building structure. Examples of situations where this is the case are open-deck carparks and very large area buildings such as shopping complexes where the fire-affected part is likely to be small in relation to the area of the building floor plan.

3.3.5 Behaviour of Floor Elements

The effect of real fires on composite and concrete floors continues to be a subject of much research. Experimental testing at Cardington demonstrated that when parts of a composite floor are subject to heating, large displacement behaviour can develop that greatly assists the load carrying capacity of the floor beyond that which would be predicted by considering only the behaviour of the beams and slabs in isolation. These situations have been analysed by both yield line methods that take into account the effects of membrane forces (Bailey, 2004) and finite element techniques. In essence, the methods illustrate that it is not necessary to insulate all structural steel elements in a composite floor to achieve high levels of fire resistance. This work also demonstrated that exposure of a composite floor having unprotected steel beams to a localised fire will not result in failure of the floor. A similar real fire test on a multi-storey reinforced concrete building demonstrated that the real structural behaviour in fire was significantly different to that expected using small displacement theory as for normal temperature design (Bailey, 2002), with the performance being superior to that predicted by considering isolated member behaviour.

3.4 Prescriptive Approach to Design

The building regulations of most countries provide prescriptive requirements for the design of buildings for fire. These requirements are generally not subject to interpretation and compliance with them makes for simpler design approval – although not necessarily the most cost-effective designs. These provisions are often termed deemed-to-satisfy (DTS) provisions.
All aspects of designing buildings for fire safety are covered – the provision of emergency exits, spacings between buildings, occupant fire fighting measures, detection and alarms, measures for automatic fire suppression, air and smoke handling requirements and, last but not least, requirements for compartmentation and fire resistance levels for structural members. However, there is little evidence that the requirements have been developed from a systematic evaluation of fire safety. Rather, it would appear that many of the requirements have been added one to another to deal with another fire incident or to incorporate a new form of technology. There does not appear to have been any real attempt to determine which provisions have the most significant influence on fire safety and whether some of the former provisions could be modified.

The FRL requirements specified in the DTS provisions are traditionally considered to result in member resistances that will only rarely experience failure in the event of a fire. This is why it is acceptable to use the above arbitrary point in time load combination for assessing members in fire. There have been attempts to evaluate the various deemed-to-satisfy provisions (particularly the fire-resistance requirements) from a fire-engineering perspective, taking into account the possible variations in enclosure geometry, opening sizes and fire load (see FCRC, 1999). One of the outcomes of this evaluation was the recognition that deemed-to-satisfy provisions necessarily cover the broad range of buildings and thus must, on average, be quite onerous because of the magnitude of the above variations.

It should be noted that the DTS provisions assume that compartmentation works and that fire is limited to a single compartment. This means that fire is normally only considered to exist at one level. Thus floors are assumed to be heated from below and columns only over one storey height.

3.5 Performance-Based Design

An approach that offers substantial benefits for individual buildings is the move towards performance-based regulations. This is permitted by regulations such as the BCA which state that a designer must demonstrate that the particular building will achieve the relevant performance requirements. The prescriptive provisions (i.e. the DTS provisions) are presumed to achieve these requirements. It is necessary to show that any building that does not conform to the DTS provisions will achieve the performance requirements.

But what are the performance requirements? Most often the specified performance is simply a set of performance statements (such as with the Building Code of Australia) with no quantitative level given. Therefore, although these statements remind the designer of the key elements of design, they do not, in themselves, provide any measure against which to determine whether the design is adequately safe. Possible acceptance criteria are now considered.

3.5.1 Acceptance Criteria

Some guidance as to the basis for acceptable designs is given in regulations such as the BCA. These and other possible bases are now considered in principle.

(i) Compare the levels of safety (with respect to achieving each of the design objectives) of the proposed alternative solution with those associated with a corresponding DTS solution for the building. This comparison may be done on either a qualitative or quantitative risk basis, or perhaps a combination. In this case, the basis for comparison is an acceptable DTS solution.
Such an approach requires a "holistic" approach to safety whereby all aspects relevant to safety, including the structure, are considered. This is, by far, the most common basis for acceptance.

(ii) Undertake a probabilistic risk assessment and show that the risk associated with the proposed design is less than that associated with common societal activities such as using public transport. Undertaking a full probabilistic risk assessment can be very difficult for all but the simplest situations. Assuming that such an assessment is undertaken, it will be necessary for the stakeholders to accept the nominated level of acceptable risk. Again, this requires a "holistic" approach to fire safety.

(iii) A design is presented where it is demonstrated that all reasonable measures have been adopted to manage the risks and that any possible measures that have not been adopted will have negligible effect on the risk of not achieving the design objectives.

(iv) As far as the building structure is concerned, benchmark the acceptable probability of failure in fire against that for normal temperature design. This is similar to the approach used when considering Building Situation 1 but only considers the building structure and not the effects of flame or smoke spread. It is not a holistic approach to fire safety.

Finally, the questions of arson and terrorism must be considered. Deliberate acts of fire initiation range from relatively minor incidents to acts of mass destruction. Acts of arson are well within the accepted range of fire events experienced by buildings (e.g. 8% of fire starts in offices are deemed "suspicious"). The simplest act is to use a small heat source to start a fire. The resulting fire will develop slowly in one location within the building and will most probably be controlled by the various fire-safety systems within the building. The outcome is likely to be the same even if an accelerant is used to assist fire spread.

An important illustration of this occurred during the race riots in Los Angeles in 1992 (Hart 1992) when fires were started in many buildings, often at multiple locations. In the case of buildings with sprinkler systems, the damage was limited and the fires significantly controlled. Although the intent was to destroy the buildings, the fire-safety systems were able to limit the resulting fires. Security measures are provided with systems such as sprinkler systems and include:
- locking of valves
- anti-tamper monitoring
- location of valves in secure locations

Furthermore, access to significant buildings is often restricted by security measures. The very fact that the above steps have been taken demonstrates that acts of destruction within buildings are considered, although most acts of arson do not involve any attempt to disable the fire-safety systems.

At the one end of the spectrum is "simple" arson and at the other end, extremely rare acts where attempts are made to destroy the fire-safety systems along with substantial parts of the building. This can be only achieved through massive impact or the use of explosives. The latter may be achieved through explosives being introduced into the building or from outside by missile attack. The former could result from missile attack or from the collision of a large aircraft. The greater the destructiveness of the act, the greater the means and knowledge required. Conversely, the more extreme the act, the less confidence there can be in designing against such an act.
This is because the more extreme the event, the harder it is to predict precisely and the less understood will be its effects. The important point to recognise is that if sufficient means can be assembled, then it will always be possible to overcome a particular building design. Thus these acts are completely different to the other loadings to which a building is subjected such as wind, earthquake and gravity loading. This is because such acts of destruction are the work of intelligent beings and take into account the characteristics of the target. Should high-rise buildings be designed for given terrorist activities, then terrorists will simply use greater means to achieve the end result. For example, if buildings were designed to resist the impact effects from a certain size aircraft, then the use of a larger aircraft or more than one aircraft could still achieve destruction of the building. An appropriate strategy is therefore to minimise the likelihood of means of mass destruction getting into the hands of persons intent on such acts. This is not an engineering solution associated with the building structure. It should not be assumed that structural solutions are always the most appropriate, or indeed, possible. In the same way, aircraft are not designed to survive a major fire or a crash landing, but steps are taken to minimise the likelihood of either occurrence.

The mobilization of large quantities of fire load (the normal combustibles on the floors) simultaneously on numerous levels throughout a building is well outside fire situations envisaged by current fire test standards and prescriptive regulations. Risk management measures to avoid such a possibility must be considered.

4 CONCLUSIONS

Fire differs significantly from other "loads" such as wind, live load and earthquakes in respect of its origin and its effects. Due to the fact that fire originates from human activities or equipment installed within buildings, it is possible to directly influence the potential effects on the building by reducing the rate of fire starts and providing measures to directly limit fire severity. The design of buildings for fire safety is mostly achieved by following the prescriptive requirements of building codes such as the BCA. For situations that fall outside of the scope of such regulations, or where proposed designs are not in accordance with the prescriptive requirements, it is possible to undertake performance-based fire engineering designs. However,

Interior Decoration and Design: Foreign Literature Translation (Chinese and English)

外文文献翻译(含:英文原文及中文译文)文献出处:Y Miyazaki. A Brief Description of Interior Decoration [J]. Building & Environment, 2005, 40(10):41-45.英文原文A Brief Description of Interior DecorationY Miyazaki一、An interior design element1 Spatial elementsThe rationalization of space and giving people a sense of beauty is the basic task of design. We must dare to explore the new image of the times and technologies that are endowed with space. We should not stick to the spatial image formed in the past.2 color requirementsIn addition to affecting the visual environment, indoor colors also directly affect people's emotions and psychology. Scientific use of color is good for work and helps health. The proper color processing can meet the functional requirements and achieve the beauty effect. In addition to observing the general laws of color, interior colors also vary with the aesthetics of the times.3 light requirementsHumans love the beauty of nature and often direct sunlight into theinterior to eliminate the sense of darkness and closure in the interior, especially the top light and the soft diffuse light, making the interior space more intimate and natural. The transformation of light and shadow makes the interior richer and more colorful, giving people a variety of feelings.4 decorative elementsThe indispensable building components such as columns, walls, and the like in the entire indoor space are combined with the function and need to be decorated to jointly create a perfect indoor environment. By making full use of the texture characteristics of different decorative materials, you can achieve a variety of interior art effects with different styles, while also reflecting the historical and cultural characteristics of the region.5 furnishingsIndoor furniture, carpets, curtains, etc., are all necessities of life. Their shapes are often furnished and most of them play a decorative role. Practicality and decoration should be coordinated with each other, and the functions and forms of seeking are unified and changed so that the interior space is comfortable and full of personality.6 green elementsGreening in interior design is an important means to improve the indoor environment. Indoor flowering trees are planted, and the use ofgreenery and small items to play a role in diffusing indoor and outdoor environments, expanding the sense of interior space, and beautifying spaces all play an active role.二、The basic principles of interior design1 interior decoration design to meet the functional requirementsThe interior design is based on the purpose of creating a good indoor space environment, so as to rationalize, comfort, and scientize the indoor environment. It is necessary to take into account the laws of people's activities to handle spatial relationships, spatial dimensions, and spatial proportions; to rationally configure furnishings and furniture, and to properly resolve indoor environments. V entilation, lighting and lighting, pay attention to the overall effect of indoor tone.2 interior design to meet the spiritual requirementsThe spirit of interior design is to influence people's emotions and even influence people's will and actions. Therefore, we must study the characteristics and laws of people's understanding; study the emotions and will of people; and study the interaction between people and the environment. Designers must use various theories and methods to impact people's emotions and sublimate them to achieve the desired design effect. 
If the indoor environment can highlight a certain concept and artistic conception, then it will have a strong artistic appeal and better play its role in spiritual function.3 Interior design to meet modern technical requirementsThe innovation of architectural space is closely related to the innovation of structural modeling. The two should be harmonized and unified, fully considering the image of the structural Sino-U.S. and integrating art and technology. This requires that interior designers must possess the necessary knowledge of the type of structure and be familiar with and master the performance and characteristics of the structural system. Modern interior design is in the category of modern science and technology. To make interior design better meet the requirements of spiritual function, we must maximize the use of the latest achievements in modern science and technology.4 Interior design must meet the regional characteristics and national style requirementsDue to differences in the regions where people live, geographical and climatic conditions, the living habits of different ethnic groups are not the same as cultural traditions, and there are indeed great differences in architectural styles. China is a multi-ethnic country. The differences in the regional characteristics, national character, customs, and cultural literacy of various ethnic groups make indoor decoration design different. Different styles and features are required in the design. We must embody national and regional characteristics to evoke people’s national self-respect and self-confidence.三、Points of interior designThe interior space is defined by the enclosure of the floor, wall, and top surface, thus determining the size and shape of the interior space. The purpose of interior decoration is to create a suitable and beautiful indoor environment. The floor and walls of the interior space are the backdrop for people and furnishings and furnishings, while the differences on the top surface make the interior space more varied.1 Base decoration ----- Floor decorationThe basic surface ----- is very important in people's sights. The ground floor is in contact with people, and the line of sight is near, and it is in a dynamic change. It is one of the important factors of interior decoration. Meet the following principles:2 The base should be coordinated with the overall environment to complement each other and set off the atmosphereFrom the point of view of the overall environmental effect of space, the base should be coordinated with the ceiling and wall decoration. At the same time, it should play a role in setting off the interior furniture and furnishings.3 Pay attention to the division, color and texture of the ground patternGround pattern design can be roughly divided into three situations: The first is to emphasize the independent integrity of the pattern itself,such as meeting rooms, using cohesive patterns to show the importance of the meeting. The color should be coordinated with the meeting space to achieve a quiet, focused effect; the second is to emphasize the pattern of continuity and rhythm, with a certain degree of guidance and regularity, and more for the hall, aisle and common space; third It emphasizes the abstractness of the pattern, freedom, and freedom, and is often used in irregular or layout-free spaces.4 Meeting the needs of the ground structure, construction and physical properties of the buildingWhen decorating the base, attention should be paid to the structure of the ground floor. 
In the premise of ensuring safety, it is convenient for construction and construction. It cannot be a one-sided pursuit of pattern effects, and physical properties such as moisture-proof, waterproof, thermal insulation, and thermal insulation should be considered. need. The bases are available in a wide variety of types, such as: wooden floors, block floors, terrazzo floors, plastic floors, concrete floors, etc., with a wide variety of patterns and rich colors. The design must be consistent with the entire space environment. Complementary to achieve good results.四、wall decorationIn the scope of indoor vision, the vertical line of sight between the wall and the person is in the most obvious position. At the same time, thewall is the part that people often contact. Therefore, the decoration of the wall is very important for the interior design. The following design principles must be met: 1 IntegrityWhen decorating a wall, it is necessary to fully consider the unity with other parts of the room, and to make the wall and the entire space a unified whole.2 PhysicalThe wall surface has a larger area in the interior space, and the status is more important and the requirements are higher. The requirements for sound insulation, warmth protection, fire prevention, etc. in the interior space vary depending on the nature of the space used, such as the guest room, high requirements. Some, while the average unit canteen, requiresa lower number.3 ArtistryIn the interior space, the decorative effect of the wall plays an important role in rendering and beautifying the indoor environment. The shape of the wall, the partition pattern, the texture and the interior atmosphere are closely related to each other. In order to create the artistic effect of the interior space, the wall The artistry of the surface itself cannot be ignored.The selection of wall decoration styles is determined according to the above principles. The forms are roughly the following: plasteringdecoration, veneering decoration, brushing decoration, coil decoration. Focusing on the coil decoration here, with the development of industry, there are more and more coils that can be used to decorate walls, such as: plastic wallpaper, wall cloth, fiberglass cloth, artificial leather, and leather. These materials are characterized by the use of It is widely used, flexible and free, with a wide variety of colors, good texture, convenient construction, moderate prices, and rich decorative effects. It is a material that is widely used in interior design.五、Ceiling decorationThe ceiling is an important part of the interior decoration, and it is also the most varied and attractive interface in the interior space decoration. It has a strong sense of perspective. Through different treatments, the styling of lamps and lanterns can enhance the space appeal and make the top surface rich in shape. Colorful, novel and beautiful.1 Design principlesPay attention to the overall environmental effects.The ceiling, wall surface and base surface together make up the interior space and jointly create the effects of the indoor environment. The design should pay attention to the harmonization of the three, and each has its own characteristics on a unified basis.The top decoration should meet the applicable aesthetic requirements.In general, the effect of indoor space should be lighter and lighter. 
Therefore, it is important to pay attention to the simple decoration of the top decoration, highlight the key points, and at the same time, have a sense of lightness and art.The top decoration should ensure the rationality and safety of the top structure. Cannot simply pursue styling and ignore safety2 top design(1) Flat roofThe roof is simple in construction, simple in appearance, and convenient in decoration. It is suitable for classrooms, offices, exhibition halls, etc. Its artistic appeal comes from the top shape, texture, patterns, and the organic configuration of the lamps.(2) Convex ceilingThis kind of roof is beautiful and colorful, with a strong sense of three-dimensionality. It is suitable for ballrooms, restaurants, foyers, etc. It is necessary to pay attention to the relationship between the primary and secondary relationships and the height difference of various concavo-convex layers. It is not appropriate to change too much and emphasize the rhythm of rhythm and the artistry of the overall space. .(3) Suspended ceilingV arious flaps, flat plates or other types of ceilings are hung under the roof load-bearing structures. These ceilings are often used to meetacoustic or lighting requirements or to pursue certain decorative effects. They are often used in stadiums, cinemas, and so on. In recent years, this type of roof has also been commonly used in restaurants, cafes, shops, and other buildings to create special aesthetics and interests.(4) Well format ceilingIt is in the form of a combined structural beam, in which the main and secondary beams are staggered and the relationship between the wells and beams, together with a ceiling of lamps and gypsum floral designs, is simple and generous, with a strong sense of rhythm.(5) Glass ceilingThe halls and middle halls of modern large-scale public buildings are commonly used in this form, mainly addressing the needs of large-scale lighting and indoor greening, making the indoor environment richer in natural appeal, and adding vitality to large spaces. It is generally in the form of a dome, a cone, and a zigzag. In short, interior decoration design is a comprehensive discipline, involving many disciplines such as sociology, psychology, and environmental science, and there are many things that we need to explore and study. This article mainly elaborated the basic principles and design methods of interior decoration design. No matter what style belongs to the interior design door, this article gives everyone a more in-depth understanding and comprehension of interior design. If there are inadequacies, let the criticism correct me.中文译文室内装饰简述Y Miyazaki一室内装饰设计要素1 空间要素空间的合理化并给人们以美的感受是设计基本的任务。

The Limit Concept: Foreign Literature Translation (PDF)

BSHM Bulletin, 2014
Did Weierstrass’s differential calculus have a limit-avoiding character? His definition of a limit in ε-δ style
MICHIYO NAKANE
Nihon University Research Institute of Science & Technology, Japan
In the 1820s, Cauchy founded his calculus on his original limit concept and developed his theory by using ε-δ inequalities, but he did not apply these inequalities consistently to all parts of his theory. In contrast, Weierstrass consistently developed his 1861 lectures on differential calculus in terms of epsilonics. His lectures were not based on Cauchy’s limit and are distinguished by their limit-avoiding character. Dugac’s partial publication of the 1861 lectures makes these differences clear. But in the unpublished portions of the lectures, Weierstrass actually defined his limit in terms of ε-δ inequalities. Weierstrass’s limit was a prototype of the modern limit but did not serve as a foundation of his calculus theory. For this reason, he did not provide the basic structure for the modern ε-δ style analysis. Thus it was Dini’s 1878 textbook that introduced the ε-δ definition of a limit in terms of inequalities.
Introduction
Augustin Louis Cauchy and Karl Weierstrass were two of the most important mathematicians associated with the formalization of analysis on the basis of the ε-δ doctrine. In the 1820s, Cauchy was the first to give comprehensive statements of mathematical analysis that were based from the outset on a reasonably clear definition of the limit concept (Edwards 1979, 310). He introduced various definitions and theories that involved his limit concept. His expressions were mainly verbal, but they could be understood in terms of inequalities: given an ε, find n or δ (Grabiner 1981, 7). As we show later, Cauchy actually paraphrased his limit concept in terms of ε, δ, and n0 inequalities in his more complicated proofs. But it was Weierstrass’s 1861 lectures which used the technique in all proofs and also in his definition (Lützen 2003, 185–186).
Weierstrass’s adoption of full epsilonic arguments, however, did not mean that he attained a prototype of the modern theory. Modern analysis theory is founded on limits defined in terms of ε-δ inequalities. His lectures were not founded on Cauchy’s limit or his own original definition of limit (Dugac 1973). Therefore, in order to clarify the formation of the modern theory, it will be necessary to identify where the ε-δ definition of limit was introduced and used as a foundation.
We do not find the word ‘limit’ in the published part of the 1861 lectures. Accordingly, Grattan-Guinness (1986, 228) characterizes Weierstrass’s analysis as limit-avoiding. However, Weierstrass actually defined his limit in terms of epsilonics in the unpublished portion of his lectures. His theory involved his limit concept, although the concept did not function as the foundation of his theory. Based on this discovery, this paper re-examines the formation of ε-δ calculus theory, noting mathematicians’ treatments of their limits. We restrict our attention to the process of defining continuity and derivatives. Nonetheless, this focus provides sufficient information for our purposes.
First, we confirm that epsilonic arguments cannot represent Cauchy’s limit, though they can describe relationships that involved his limit concept. Next, we examine how Weierstrass constructed a novel analysis theory which was not based on Cauchy’s limits but could have involved Cauchy’s results.
Then we confirm Weierstrass’s definition of limit. Finally, we note that Dini organized his analysis textbook in 1878 based on analysis performed in the ε-δ style.
Cauchy’s limit and epsilonic arguments
Cauchy’s series of textbooks on calculus, Cours d’analyse (1821), Résumé des leçons données à l’École royale polytechnique sur le calcul infinitésimal tome premier (1823), and Leçons sur le calcul différentiel (1829), are often considered as the main references for modern analysis theory, the rigour of which is rooted more in the nineteenth than the twentieth century.
At the beginning of his Cours d’analyse, Cauchy defined the limit concept as follows: ‘When the successively attributed values of the same variable indefinitely approach a fixed value, so that finally they differ from it by as little as desired, the last is called the limit of all the others’ (1821, 19; English translation from Grabiner 1981, 80). Starting from this concept, Cauchy developed a theory of continuous functions, infinite series, derivatives, and integrals, constructing an analysis based on limits (Grabiner 1981, 77).
When discussing the evolution of the limit concept, Grabiner writes: ‘This concept, translated into the algebra of inequalities, was exactly what Cauchy needed for his calculus’ (1981, 80). From the present-day point of view, Cauchy described rather than defined his kinetic concept of limits. According to his ‘definition’, which has the quality of a translation or description, he could develop any aspect of the theory by reducing it to the algebra of inequalities.
Next, Cauchy introduced infinitely small quantities into his theory. ‘When the successive absolute values of a variable decrease indefinitely, in such a way as to become less than any given quantity, that variable becomes what is called an infinitesimal. Such a variable has zero for its limit’ (1821, 19; English translation from Birkhoff and Merzbach 1973, 2). That is to say, in Cauchy’s framework ‘the limit of variable x is c’ is intuitively understood as ‘x indefinitely approaches c’, and is represented as ‘|x − c| is as little as desired’ or ‘|x − c| is infinitesimal’. Cauchy’s idea of defining infinitesimals as variables of a special kind was original, because Leibniz and Euler, for example, had treated them as constants (Boyer 1989, 575; Lützen 2003, 164).
In Cours d’analyse Cauchy at first gave a verbal definition of a continuous function. Then, he rewrote it in terms of infinitesimals:
[In other words,] the function f(x) will remain continuous relative to x in a given interval if (in this interval) an infinitesimal increment in the variable always produces an infinitesimal increment in the function itself. (1821, 43; English translation from Birkhoff and Merzbach 1973, 2)
He introduced the infinitesimal-involving definition and adopted a modified version of it in Résumé (1823, 19–20) and Leçons (1829, 278).
Following Cauchy’s definition of infinitesimals, a continuous function can be defined as a function f(x) in which ‘the variable f(x + a) − f(x) is an infinitely small quantity (as previously defined) whenever the variable a is, that is, that f(x + a) − f(x) approaches to zero as a does’, as noted by Edwards (1979, 311). Thus, the definition can be translated into the language of ε-δ inequalities from a modern viewpoint. Cauchy’s infinitesimals are variables, and we can also take such an interpretation.
Cauchy himself translated his limit concept in terms of ε-δ inequalities.
He changed ‘If the difference f(x + 1) − f(x) converges towards a certain limit k, for increasing values of x, (. . .)’ to ‘First suppose that the quantity k has a finite value, and denote by ε a number as small as we wish. . . . we can give the number h a value large enough that, when x is equal to or greater than h, the difference in question is always contained between the limits k − ε, k + ε’ (1821, 54; English translation from Bradley and Sandifer 2009, 35).
In Résumé, Cauchy gave a definition of a derivative: ‘if f(x) is continuous, then its derivative is the limit of the difference quotient
Δy/Δx = (f(x + i) − f(x))/i
as i tends to 0’ (1823, 22–23). He also translated the concept of derivative as follows: ‘Designate by δ and ε two very small numbers; the first being chosen in such a way that, for numerical values of i less than δ, [. . .], the ratio (f(x + i) − f(x))/i always remains greater than f′(x) − ε and less than f′(x) + ε’ (1823, 44–45; English translation from Grabiner 1981, 115).
These examples show that Cauchy noted that relationships involving limits or infinitesimals could be rewritten in terms of inequalities. Cauchy’s arguments about infinite series in Cours d’analyse, which dealt with the relationship between increasing numbers and infinitesimals, had such a character. Laugwitz (1987, 264; 1999, 58) and Lützen (2003, 167) have noted Cauchy’s strict use of the ε-N characterization of convergence in several of his proofs. Borovick and Katz (2012) indicate that there is room to question whether or not our representation using ε-δ inequalities conveys messages different from Cauchy’s original intention. But this paper accepts the interpretations of Edwards, Laugwitz, and Lützen.
Cauchy’s lectures mainly discussed properties of series and functions in the limit process, which were represented as relationships between his limits or his infinitesimals, or between increasing numbers and infinitesimals. His contemporaries presumably recognized the possibility of developing analysis theory in terms of only ε, δ, and n0 inequalities. With a few notable exceptions, all of Cauchy’s lectures could be rewritten in terms of ε-δ inequalities. Cauchy’s limits and his infinitesimals were not functional relationships,1 so they were not representable in terms of ε-δ inequalities. Cauchy’s limit concept was the foundation of his theory. Thus, Weierstrass’s full epsilonic analysis theory has a different foundation from that of Cauchy.
Weierstrass’s 1861 lectures
Weierstrass’s consistent use of ε-δ arguments
Weierstrass delivered his lectures ‘On the differential calculus’ at the Gewerbe Institut Berlin2 in the summer semester of 1861. Notes of these lectures were taken by Herman Amandus Schwarz, and some of them have been published in the original German by Dugac (1973).
1 Edwards (1979, 310), Laugwitz (1987, 260–261, 271–272), and Fisher (1978, 16–318) point out that Cauchy’s infinitesimals equate to a dependent variable function or a(h) that approaches zero as h → 0. Cauchy adopted the latter infinitesimals, which can be written in terms of ε-δ arguments, when he introduced a concept of degree of infinitesimals (1823, 250; 1829, 325). Every infinitesimal of Cauchy’s is a variable in the parts that the present paper discusses.
2 A forerunner of the Technische Universität Berlin.
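The two verbal statements just quoted are compact enough to restate symbolically. The display below is a modern paraphrase offered only for the reader’s convenience; the symbols ε, δ and f′(x) summarise the quotations above and are not Cauchy’s own notation:
\[
  f'(x) \;=\; \lim_{i \to 0} \frac{f(x+i)-f(x)}{i},
  \qquad\text{that is,}\qquad
  \forall \varepsilon>0\ \exists \delta>0:\;
  0<|i|<\delta \;\Rightarrow\;
  f'(x)-\varepsilon \;<\; \frac{f(x+i)-f(x)}{i} \;<\; f'(x)+\varepsilon .
\]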
Noting the new aspects related to foundational concepts in analysis, full ε-δ definitions of limit and continuous function, a new definition of derivative, and a modern definition of infinitesimals, Dugac considered that the novelty of Weierstrass’s lectures was incontestable (1978, 372; 1976, 6–7).3
After beginning his lectures by defining a variable magnitude, Weierstrass gave the definition of a function using the notion of correspondence. This brought him to the following important definition, which did not directly appear in Cauchy’s theory:
(D1) If it is now possible to determine for h a bound δ such that for all values of h which in their absolute value are smaller than δ, f(x + h) − f(x) becomes smaller than any magnitude ε, however small, then one says that infinitely small changes of the argument correspond to infinitely small changes of the function. (Dugac 1973, 119; English translation from Calinger 1995, 607)
That is, Weierstrass defined not infinitely small changes of variables but ‘infinitely small changes of the arguments correspond(ing) to infinitely small changes of the function’, presented in terms of ε-δ inequalities. He founded his theory on this correspondence.
Using this concept, he defined a continuous function as follows:
(D2) If now a function is such that to infinitely small changes of the argument there correspond infinitely small changes of the function, one then says that it is a continuous function of the argument, or that it changes continuously with this argument. (Dugac 1973, 119–120; English translation from Calinger 1995, 607)
So we see that in accordance with his definition of correspondence, Weierstrass actually defined a continuous function on an interval in terms of epsilonics. Since (D2) is derived by merely changing Cauchy’s term ‘produce’ to ‘correspond’, it seems that Weierstrass took the idea of this definition from Cauchy. However, Weierstrass’s definition was given in terms of epsilonics, while Cauchy’s definition can only be interpreted in these terms. Furthermore, Weierstrass achieved it without Cauchy’s limit.
Lützen (2003, 186) indicates that Weierstrass still used the concept of ‘infinitely small’ in his lectures. Until giving his definition of derivative, Weierstrass actually continued to use the term ‘infinitesimally small’ and often wrote of a function ‘which becomes infinitely small with h’. But several instances of ‘infinitesimally small’ appeared in forms of the relationships involving them. Definition (D1) gives the relationship in terms of ε-δ inequalities.
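In present-day symbols, (D1) and (D2) together amount to the familiar ε-δ condition for continuity at a point x. The form below is a modern paraphrase of Schwarz’s notes, not notation that appears in them:
\[
  \forall \varepsilon>0\ \exists \delta>0:\;
  |h|<\delta \;\Rightarrow\; |f(x+h)-f(x)| < \varepsilon .
\]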
We may therefore assume that Weierstrass’s lectures consistently used ε-δ inequalities, even though his definitions were not directly written in terms of these inequalities. Weierstrass inserted sentences confirming that the relationships involving the term ‘infinitely small’ were defined in terms of ε-δ inequalities as follows:
(D3) If h denotes a magnitude which can assume infinitely small values, φ(h) is an arbitrary function of h with the property that for an infinitely small value of h it also becomes infinitely small (that is, that always, as soon as a definite arbitrary small magnitude ε is chosen, a magnitude δ can be determined such that for all values of h whose absolute value is smaller than δ, φ(h) becomes smaller than ε). (Dugac 1973, 120; English translation from Calinger 1995, 607)
As Dugac (1973, 65) indicates, some modern textbooks describe φ(h) as infinitely small or infinitesimal.
Weierstrass argued that the whole change of a function can in general be decomposed as
Δf(x) = f(x + h) − f(x) = p·h + h(h),   (1)
where the factor p is independent of h and (h) is a magnitude that becomes infinitely small with h.4 However, he overlooked that such a decomposition is not possible for all functions and inserted the term ‘in general’. He rewrote h as dx. One can make the difference between Δf(x) and p·dx smaller than any magnitude with decreasing dx. Hence Weierstrass defined the ‘differential’ as the change which a function undergoes when its argument changes by an infinitesimally small magnitude and denoted it as df(x). Then, df(x) = p·dx. Weierstrass pointed out that the differential coefficient p is a function of x derived from f(x) and called it a derivative (Dugac 1973, 120–121; English translation from Calinger 1995, 607–608). In accordance with his definitions (D1) and (D3), Weierstrass largely defined a derivative in terms of epsilonics.
Weierstrass did not adopt the term ‘infinitely small’ but directly used ε-δ inequalities when he discussed properties of infinite series involving uniform convergence (Dugac 1973, 122–124). It may be inferred from the published portion of his notes that Cauchy’s limit has no place in Weierstrass’s lectures. Grattan-Guinness’s (1986, 228) description of the limit-avoiding character of his analysis represents this situation well.
However, Weierstrass thought that his theory included most of the content of Cauchy’s theory. Cauchy first gave the definition of limits of variables and infinitesimals. Then, he demonstrated notions and theorems that were written in terms of the relationships involving infinitesimals. From Weierstrass’s viewpoint, they were written in terms of ε-δ inequalities. Analytical theory mainly examines properties of functions and series, which were described in the relationships involving Cauchy’s limits and infinitesimals. Weierstrass recognized this fact and had the idea of consistently developing his theory in terms of inequalities. Hence Weierstrass at first defined the relationships among infinitesimals in terms of ε-δ inequalities. In accordance with this definition, Weierstrass rewrote Cauchy’s results and naturally imported them into his own theory.
3 The present paper also quotes Kurt Bing’s translation included in Calinger’s Classics of mathematics.
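Written out with quantifiers, the decomposition (1) and the differential read off from it take the following form. This is a modern restatement for the reader’s convenience (the o(1) gloss follows Dugac’s remark quoted in footnote 4), not the notation of the 1861 notes:
\[
  \Delta f(x) \;=\; f(x+h)-f(x) \;=\; p\,h + h\,(h),
  \qquad
  \forall \varepsilon>0\ \exists \delta>0:\; |h|<\delta \Rightarrow |(h)|<\varepsilon
  \quad\bigl(\text{i.e. } (h)=o(1)\bigr),
\]
\[
  df(x) \;=\; p\,dx .
\]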
This is a process that may be described as follows: ‘Weierstrass completed the transformation away from the use of terms such as “infinitely small”’ (Katz 1998, 728).
Weierstrass’s definition of limit
Dugac (1978, 370–372; 1976, 6–7) read (D1) as the first definition of limit with the help of ε-δ. But (D1) does not involve an endpoint that variables or functions
4 Dugac (1973, 65) indicated that (h) corresponds to the modern notion of o(1). In addition, h(h) corresponds to the function that was introduced as φ(h) in the former quotation from Weierstrass’s sentences.

Hydraulic System: Foreign Literature Translation (Chinese and English)

外文文献翻译(含:英文原文及中文译文)英文原文Hydraulic systemW Arnold1 IntroductionThe hydraulic station is called a hydraulic pump station and is an independent hydraulic device. It is step by step to supply oil. And control the direction of hydraulic oil flow, pressure and flow, suitable for the host and hydraulic equipment can be separated on the various hydraulic machinery.After the purchase, the user only needs to connect the hydraulic station and the actuator (hydraulic or oil motor) on the mainframe with different tubings. The hydraulic machine can realize various specified actions and working cycles.The hydraulic station is a combination of manifolds, pump units or valve assemblies, electrical boxes, and tank electrical boxes. Each part function is:The pump unit is equipped with a motor and an oil pump, which is the power source of the hydraulic station and can convert mechanical energy into hydraulic oil pressure energy.V alve combination - its plate valve is mounted on the vertical plate, and the rear plate is connected with the same function as the manifold.Oil manifolds - assembled from hydraulic valves and channel bodies. It regulates hydraulic oil pressure, direction and flow.Box--a semi-closed container for plate welding. It is also equipped with an oil screen, an air filter, etc., which is used for cooling and filtering of oil and oil.Electrical box - divided into two types: one is to set the external lead terminal board; one is equipped with a full set of control appliances.The working principle of the hydraulic station: The motor drives the oil pump to rotate, then the pump sucks oil from the oil tank and supplies oil, converts the mechanical energy into hydraulic pressure energy, and the hydraulic oil passes through the manifold (or valve assembly) to adjust the direction, pressure and flow and then passes through the external tube. The way to the hydraulic cylinder or oil motor in the hydraulic machinery, so as to control the direction of the hydraulic motor, the strength of the speed and speed, to promote all kinds of hydraulic machinery to do work.(1) Development history of hydraulic pressureThe development history of hydraulics (including hydraulic power, the same below), pneumatics, and seals industry in China can be roughly divided into three stages, namely: the starting stage in the early 1950s to the early 60s; and the professional in the 60s and 70s. The growth stage of the production system; the 80-90's is a stage of rapid development. Among them, the hydraulic industry began in the early 1950s with thedevelopment of hydraulic machines such as Grinding Machines, broaching machines, and profiling lathes, which were produced by the machine tool industry. The hydraulic components were produced by the hydraulic workshop in the machine tool factory, and were produced for self use. After entering the 1960s, the application of hydraulic technology was gradually promoted from the machine tool to the agricultural machinery and engineering machinery. The original hydraulic workshop attached to the main engine plant was independent and became a professional manufacturer of hydraulic components. In the late 1960s and early 1970s, with the continuous development of mechanization of production, particularly in the provision of highly efficient and automated equipment for the second automobile manufacturing plant, the hydraulic component manufacturing industry witnessed rapid development. 
The batch of small and medium-sized enterprises also began to become specialized manufacturers of hydraulic parts. In 1968, the annual output of hydraulic components in China was close to 200,000 pieces. In 1973, in the fields of machine tools, agricultural machinery, construction machinery and other industries, the professional factory for the production of hydraulic parts has grown to over 100, and its annual output exceeds 1 million pieces. Such an independent hydraulic component manufacturing industry has taken shape. At this time, the hydraulic product has evolved from the original imitation Su product intoa combination of imported technology and self-designed products. The pressure has been developed towards medium and high pressures, and electro-hydraulic servo valves and systems have been developed. The application of hydraulics has been further expanded. The pneumatic industry started a few years later than hydraulics, and it was only in 1967 that it began to establish a professional pneumatic components factory. Pneumatic components began to be manufactured and sold as commodities. Its sealing industry including rubber seals, flexible graphite seals, and mechanical seals started from the production of common O-rings, oil seals, and other extruded rubber seals and asbestos seal products in the early 1950s. In the early 1960s, it began to develop and produce flexible products. Graphite seals and mechanical seals and other products. In the 1970s, a batch of batches of professional production plants began to be established one after another in the systems of the former Ministry of Combustion, the Ministry of Agriculture, and the Ministry of Agricultural Machinery, formally forming the industry, which laid the foundation for the development of the seal industry.In the 1980s, under the guidance of the national policy of reform and opening up, with the continuous development of the machinery industry, the contradiction between the basic components lags behind the host computer has become increasingly prominent and caused the attention of all relevant departments. To this end, the former Ministry of Machinesestablished the General Infrastructure Industry Bureau in 1982, and unified the original pneumatic, hydraulic, and seal specialties that were scattered in the industries of machine tools, agricultural machinery, and construction machinery, etc. The management of a piece of office, so that the industry in the planning, investment, the introduction of technology and scientific research and development and other aspects of the basic parts of the bureau's guidance and support. This has entered a period of rapid development, it has introduced more than 60 foreign advanced technology, of which more than 40 hydraulic, pneumatic 7, after digestion and absorption and technological transformation, are now mass production, and has become the industry's leading products . In recent years, the industry has intensified its technological transformation. From 1991 to 1998, the total investment of national, local, and corporate self-raised funds totaled about 2 billion yuan, of which more than 1.6 billion were hydraulic. After continuous technological transformation and technological breakthroughs, the technical level of a group of major enterprises has been further improved, and technological equipment has also been greatly improved, laying a good foundation for forming a high starting point, specialization, and mass production. 
In recent years, under the guidance of the principle of common development of multiple ownership systems in the country, various small and medium-sized enterprises with different ownership have rapidly emerged and haveshown great vitality. With the further opening up of the country, foreign-funded enterprises have developed rapidly, which plays an important role in raising industry standards and expanding exports. So far China has established joint ventures with famous manufacturers in the United States, Germany, Japan and other countries or directly established piston pumps/motors, planetary speed reducers, hydraulic control valves, steering gears, hydraulic systems, hydrostatic transmissions, and hydraulic components. The company has more than 50 manufacturing enterprises such as castings, pneumatic control valves, cylinders, gas processing triplets, rubber seals, and mechanical seals, and has attracted more than 200 million U.S. dollars in foreign capital.(2) Current statusBasic profileAfter more than 40 years of hard work, China's hydraulics, pneumatics and seals industry has formed a complete industrial system with a certain level of production capacity and technical level. According to the statistics of the third n ational industrial census in 1995, China’s state-owned, privately-owned, cooperative, village-run, individual, and “funded enterprises” have annual sales income of more than 1 million yuan in hydraulic, pneumatic, and seal industrial townships and above. There are a total of more than 1,300 companies, including about 700 hydraulics, and about 300 pneumatic and sealing parts. According to thestatistics of the international industry in 1996, the total output value of the hydraulic industry in China was about 2.448 billion yuan, accounting for the 6th in the world; the total output value of the pneumatic industry was about 419 million yuan, accounting for the world’s10 people.2. Current supply and demand profileWith the introduction of technology, independent development and technological transformation, the technical level of the first batch of high-pressure plunger pumps, vane pumps, gear pumps, general hydraulic valves, oil cylinders, oil-free pneumatic components and various types of seals has become remarkable. Improve, and can be stable mass production, provide guarantees for all types of host to improve product quality. In addition, certain achievements have also been made in the aspects of CAD, pollution control, and proportional servo technology for hydraulic pneumatic components and systems, and have been used for production. So far, the hydraulic, pneumatic and seal products have a total of about 3,000 varieties and more than 23,000 specifications. Among them, there are about 1,200 types of hydraulic pressure, more than 10,000 specifications (including 60 types of hydrodynamic products, 500 specifications); about 1350 types of pneumatic, more than 8,000 specifications; there are also 350 types of rubber seals, more than 5000 The specifications are now basically able to adapt to the general needs ofvarious types of mainframe products. 
The matching rate for major equipment sets can reach more than 60%, and a small amount of exports has started.In 1998, the domestic production of hydraulic components was 4.8 million pieces, with sales of about 2.8 billion yuan (of which mechanical systems accounted for 70%); output of pneumatic components was 3.6 million pieces, and sales were about 550 million yuan (including mechanical systems accounting for about 60%) The production of seals is about 800 million pieces, and the sales volume is about 1 billion yuan (including about 50% of mechanical systems). According to the statistics of the annual report of the China Hydraulic and Pneumatic Sealing Industry Association in 1998, the production and sales rate of hydraulic products was 97.5% (101% of hydraulic power), 95.9% of air pressure, and 98.7% of seal. This fully reflects the basic convergence of production and sales.Although China's hydraulic, pneumatic and sealing industries have made great progress, there are still many gaps compared with the development needs of the mainframe and the world's advanced level, which are mainly reflected in the variety, performance and reliability of products. . Take hydraulic products as an example, the product varieties are only 1/3 of the foreign country, and the life expectancy is 1/2 of that of foreign countries. In order to meet the needs of key hosts, imported hosts, and majortechnical equipment, China has a large number of imported hydraulic, pneumatic, and sealing products every year. According to customs statistics and relevant data analysis, in 1998, the import volume of hydraulic, pneumatic and seal products was about 200 million U.S. dollars, of which the hydraulic pressure was about 140 million U.S. dollars, the pneumatics were 30 million U.S. dollars, and the seal was about 0.3 billion U.S. dollars. The year is slightly lower. In terms of amount, the current domestic market share of imported products is about 30%. In 1998, the total demand for hydraulic parts in the domestic market was about 6 million pieces, and the total sales volume was 4 billion yuan; the total demand for pneumatic parts was about 5 million pieces, and the total sales volume was over 700 million yuan; the total demand for seals was about 1.1 billion yuan. Pieces, total sales of about 1.3 billion yuan. (3) Future developments1. The main factors affecting development(1) The company's product development capability is not strong, and the level and speed of technology development can not fully meet the current needs for advanced mainframe products, major technical equipment and imported equipment and maintenance;(2) Many companies have lagged behind in manufacturing process, equipment level and management level, and their sense of quality is not strong, resulting in low level of product performance, unstable quality,poor reliability, and insufficiency of service, and lack of user satisfaction. And trusted branded products;(3) The degree of professional specialization in the industry is low, the power is scattered, the duplication of the low level is serious, the product convergence between the region and the enterprise leads to blind competition, and the prices are reduced each other, thus the efficiency of the enterprise is reduced, the funds are lacking, and the turnover is difficult. 
Insufficient investment in development and technological transformation has severely restricted the overall level of the industry and its competitive strength.(4) When the degree of internationalization of the domestic market is increasing, foreign companies have gradually entered the Chinese market to participate in competition, coupled with the rise of domestic private, cooperative, foreign-funded, and individual enterprises, resulting in increasing impact on state-owned enterprises. .2. Development trendWith the continuous deepening of the socialist market economy, the relationship between supply and demand in the hydraulic, pneumatic and sealed products has undergone major changes. The seller market characterized by “shortage” has basically become a buyer’s market characterized by “structured surplus”. Replaced by. From the perspective of overall capacity, it is already in a trend of oversupply, and in particular,general low-grade hydraulic, pneumatic and seals are generally oversupply; and like high-tech products with high technological content and high value and high value-added products that are urgently needed by the host, Can not meet the needs of the market, can only rely on imports. After China's entry into the WTO, its impact may be greater. Therefore, during the “10th Five-Y ear Plan” period, the growth of the industry’s output value must not only rely on the growth of quantity. Instead, it should focus on the structural contradiction of the industry and intensify efforts to adjust the industrial structure and product structure. It should be based on the improvement of quality. Product technology upgrades in order to adapt to and stimulate market demand, and seek greater development.2. Hydraulic application on power slide(1) Introduction of Power Sliding TableUsing the binding force curve diagram and the state space analysis method to analyze and study the sliding effect and the smoothness of the sliding table of the combined machine tool, the dynamics of the hydraulic drive system of the sliding table—the self-regulating back pressure regulating system are established. mathematical model. Through the digital simulation system of the computer, the causes and main influencing factors of the slide impact and the motion instability are analyzed. 
What kind of conclusions can be drawn from those, if we canreasonably design the structural dimensions of hydraulic cylinders and self-regulating back pressure regulators ——The symbols used in the text are as follows:s 1 - flow source, that is, the flow rate of the governor valve outlet;S el —— sliding friction of the sliding table;R - the equivalent viscous friction coefficient of the slide;I 1 - quality of slides and cylinders;12 - self-adjusting back pressure valve core quality;C 1, c 2 - liquid volume without cylinder chamber and rod chamber;C 2 - Self-adjusting back pressure valve spring compliance;R 1, R2 - Self-adjusting back pressure valve damping orifice fluid resistance;R 9 - Self-adjusting back pressure valve valve fluid resistance;S e2——initial pre-tightening force of self-adjusting back pressure valve spring;I 4, I5 - Equivalent liquid sense of the pipeline;C 5, C 6 - equivalent liquid capacity of the pipeline;R 5, R7 - Equivalent liquid resistance of the pipeline;V 3, V4 - cylinder rodless cavity and rod cavity volume;P 3, P4—pressure of the rodless cavity and rod cavity of the cylinder;F - the slide bears the load;V - speed of slide motion;In this paper, the power bond diagram and the state space splitting method are used to establish the system's motion mathematical model, and the dynamic characteristics of the slide table can be significantly improved.In the normal operation of the combined machine tool, the magnitude of the speed of the slide, its direction and the load changes it undergoes will affect its performance in varying degrees. Especially in the process of work-in-process, the unsteady movement caused by the advancing of the load on the slide table and the cyclical change of the load will affect the surface quality of the workpiece to be machined. In severe cases, the tool will break. According to the requirements of the Dalian Machine Tool Plant, the author used the binding force curve diagram and the state space analysis method to establish a dynamic mathematical model of a self-adjusting back pressure and speed adjustment system for the new hydraulic drive system of the combined machine tool slide. In order to improve the dynamic characteristics of the sliding table, it is necessary to analyze the causes and main influencing factors of the impetus and movement of the sliding table. However, it must pass the computer's digital simulation and the final results obtained from the research.(2) Dynamic Mathematical ModelThe working principle diagram of the self-adjusting back pressure speedregulation system of the combined machine tool slide hydraulic drive system is shown in the figure. This system is used to complete the work-cycle-stop-rewind. When the sliding table is working, the three-position four-way reversing valve is in the illustrated position. The oil supply pressure of the oil pump will remain approximately constant under the effective action of the overflow valve, and the oil flow passes through the reversing valve and adjusts the speed. The valve enters the rodless chamber of the cylinder to push the slide forward. At the same time, the pressurized oil discharged from the rod chamber of the cylinder will flow back to the tank through the self-regulating back pressure valve and the reversing valve. During this process, there was no change in the operating status of both the one-way valve and the relief valve. 
The complex and nonlinear system of the hydraulic drive system of the self-adjusting back pressure governor system is a kind of self-adjusting back-pressure governor system. To facilitate the study of its dynamic characteristics, a simple and reasonable dynamic mathematical model that only considers the main influencing factors is established. Especially important [1][2]. From the theoretical analysis and the experimental study, we can see that the system process time is much longer than the process time of the speed control valve. When the effective pressure bearing area of the rodless cavity of the fuel tank is large, the flow rate at the outlet of the speed control valve is instantaneous. The overshoot is reflected in thesmall change in speed of the slide motion [2]. In order to further broaden and deeply study the dynamic characteristics of the system so that the research work can be effectively performed on a miniature computer, this article will further simplify the original model [2], assuming that the speed control valve is output during the entire system pass. When the flow is constant, this is considered to be the source of the flow. The schematic diagram of the dynamic model structure of this system is shown in Fig. 2. It consists of a cylinder, a sliding table, a self-adjusting back pressure valve, and a connecting pipe.The power bond graph is a power flow graph. It is based on the transmission mode of the system energy, based on the actual structure, and uses the centralized parameters to represent the role of the subsystems abstractly as a resistive element R, a perceptual element I, and a capacitive element. Three kinds of role of C. Using this method, the physical concept of modeling is clear, and combined with the state-space analysis method, the linear system can be described and analyzed more accurately. This method is an effective method to study the dynamic characteristics of complex nonlinear systems in the time domain. According to the main characteristics of each component of the self-adjusting back pressure control system and the modeling rules [1], the power bond diagram of the system is obtained. The upper half of each key in the figure represents the power flow. The two variables that makeup the power are the force variables (oil pressure P and force F) and the flow variables (flow q and velocity v). The O node indicates that the system is connected in parallel, and the force variables on each key are equal and the sum of the flow variables is zero; 1 The nodes represent the series connection in the system, the flow variables on each key are equal and the sum of the force variables is Zero. TF denotes a transformer between different energy forms. The TF subscripted letter represents the conversion ratio of the flow variable or the force variable. The short bar on the key indicates the causal relationship between the two variables on the key. The full arrow indicates the control relationship. There are integral or differential relationships between the force and flow variables of the capacitive and perceptual elements in the three types of action elements. Therefore, a complex nonlinear equation of state with nine state variables can be derived from Fig. 3 . In this paper, the research on the dynamic characteristics of the sliding table starts from the two aspects of the slide's hedging and the smoothness of the motion. 
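To make the simulation step concrete, the following is a minimal sketch of the kind of fixed-step fourth-order Runge-Kutta integration referred to in the next paragraph, applied to a generic state-space model dx/dt = f(x, t). The two-state dynamics, parameter values and load profile below are illustrative placeholders only; they are not the actual nine-state model of the self-adjusting back pressure system described above.

# Minimal fixed-step fourth-order Runge-Kutta integrator for a state-space model
# dx/dt = f(x, t). The two-state example is a stand-in (a mass with viscous
# friction under a load that suddenly disappears), not the hydraulic slide model.

def rk4_step(f, x, t, dt):
    """Advance the state x by one fixed step dt using classical RK4."""
    k1 = f(x, t)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)], t + 0.5 * dt)
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)], t + 0.5 * dt)
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)], t + dt)
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def slide_model(x, t):
    """Placeholder dynamics: position and velocity of a mass m with viscous
    friction r, driven by a load that drops to zero at t = 0.5 s."""
    m, r = 100.0, 800.0                 # assumed values, for illustration only
    load = 2000.0 if t < 0.5 else 0.0   # sudden load removal (the 'forward punch' case)
    pos, vel = x
    acc = (load - r * vel) / m
    return [vel, acc]

x, t, dt = [0.0, 0.0], 0.0, 1e-3
history = []
while t < 1.0:
    x = rk4_step(slide_model, x, t, dt)
    t += dt
    history.append((t, x[1]))           # record time and slide velocity

print(max(v for _, v in history))       # peak velocity before the load is removed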
The fourth-order fixed-length Runge-Kutta is used for digital simulation on the IBM-PC microcomputer.(3) Slide advanceThe swaying phenomenon of the slide table is caused by the sudden disappearance of the load acting on the slide table (such as drilling work conditions). In this process, the table load F, the moving speed V, and thepressure in the two chambers of the cylinder P3 and P4 can be seen from the simulation results in Fig. 4. When the sliding table moves at a uniform speed under the load, the oil pressure in the rodless cavity of the oil cylinder is high, and a large amount of energy is accumulated in the oil. When the load suddenly disappears, the oil pressure of the cavity is rapidly reduced, and the oil is rapidly reduced. When the high-pressure state is transferred to the low-pressure state, a lot of energy is released to the system, resulting in a high-speed forward impact of the slide. However, the front slide of the sliding table causes the pressure in the rod cavity of the oil cylinder to cause the back pressure to rise, thereby consuming part of the energy in the system, which has a certain effect on the kicking of the slide table. We should see that in the studied system, the inlet pressure of the self-adjusting back pressure valve is subject to the comprehensive effect of the two-chamber oil pressure of the oil cylinder. When the load suddenly disappears, the pressure of the self-adjusting back pressure valve rapidly rises and stably exceeds the initial back pressure value. It can be seen from the figure that self-adjusting back pressure in the speed control system when the load disappears, the back pressure of the cylinder rises more than the traditional speed control system, so the oil in the rod cavity of the cylinder absorbs more energy, resulting in the amount of forward momentum of the slide It will be about 20% smaller than traditionalspeed control systems. It can be seen from this that the use of self-adjusting back-gear speed control system as a drive system slider has good characteristics in suppressing the forward punch, in which the self-adjusting back pressure valve plays a very large role.(4) The smoothness of the slideWhen the load acting on the slide changes periodically (such as in the case of milling), the speed of the slide will have to fluctuate. In order to ensure the processing quality requirements, it must reduce its speed fluctuation range as much as possible. From the perspective of the convenience of the discussion of the problem, assume that the load changes according to a sine wave law, and the resulting digital simulation results are shown in Figure 5. From this we can see that this system has the same variation rules and very close numerical values as the conventional speed control system. The reason is that when the change of the load is not large, the pressure in the two chambers of the fuel tank will not have a large change, which will eventually lead to the self-regulating back pressure valve not showing its effect clearly.(5) Improvement measuresThe results of the research show that the dynamic performance of a sliding table with self-regulating back pressure control system as a drive system is better than that of a traditional speed control system. To reduce the amount of kick in the slide, it is necessary to rapidly increase the backpressure of the rod cavity when the load disappears. To increase the smoothness of the sliding table, it is necessary to increase the rigidity of the system. 
The main measure is to reduce the volume of oil. From the system structure, it is known that the cylinder has a large volume between the rod cavity and the oil discharge pipe, as shown in Fig. 6a. Its existence in terms of delay and attenuation of the self-regulating back pressure valve function, on the other hand, also reduces the rigidity of the system, it will limit the further improvement of the propulsion characteristics and the smoothness of the motion. Thus, improving the dynamic characteristics of the sliding table can be handled by two methods: changing the cylinder volume or changing the size of the self-regulating back pressure valve. Through the simulation calculation of the structural parameters of the system and the comparison of the results, it can be concluded that the ratio of the volume V4 between the rod cavity and the oil discharge pipe to the volume V3 between the rodless cavity and the oil inlet pipe is changed from 5.5 to 5.5. At 1 oclock, as shown in the figure, the diameter of the bottom end of the self-adjusting back pressure valve is increased from the original 10mm to 13mm, and the length of the damper triangle groove is reduced from the original lmm to 0.7mm, which will enable the front of the slide table. The impulse is reduced by 30%, the transition time is obviously shortened, and the smoothness of the slide motion will also be greatly improved.中文译文液压系统W Arnold1. 绪论液压站称液压泵站,是独立的液压装置。

The worlds of tea and coffee: patterns of consumption [foreign literature translation]

原文The worlds of tea and coffee: Patterns of consumption Material Source: 2003 Kluwer Academic Publishers. Author: David grigg Abstract Coffee and tea are both drunk in most countries, but typically one predominates. Coffee is the preferred drink in European the Americas, tea elsewhere. Until the early eighteenth century coffee production and consumption was confined to the Islamic world, tea production to East Asia. European traders altered this pattern dramatically. The present pattern of coffee consumption is influenced by income per capita,that of tea is not. Religious influences played some part in the early development of both tea and coffee but have little relevance at the present. National factors have influenced wider patterns. British preference for tea was taken to all their colonies. In recent years fears about health have had some influence on coffee consumption.IntroductionGeographers have always been interested in the production of food on the farm, but recently there has been a growing interest in aspects of food beyond the farm gate such as food processing, food security, restaurants and food health (Atkins and Bowler, 2001; Bell and Valentine, 1997).However, somewhat surprisingly the geography of food consumption–who eats what, where and why –has been an interest of only a few geographers and economists (Cépède and Lengelle, 1961; Grigg, 1995; Simoons, 1990; Bennett,1954; Kariel, 1966). But if little attention has been paid to international and regional variations in food consumption, the geography of drink attracts even less. Granted the production of wine has its students (Unwin, 1991; Blij, 1984) but alcoholic beverages are not the major drinks in most countries. In Europe coffee is generally the leading drink –excluding water –except in Britain and Ireland where it is tea ( P.V.G.D,1998.). Tea and coffee compete with each other in many countries, a topic investigated by N. Berdichevsky (1976),when however there was little data available on consumption in much of Africa and Asia, and Berdichevsky excluded producer countries from his study. TheFood and Agriculture Organization of the United Nations subsequently published data on food consumption, and it is now available for most member states from 1961 to 1996(FAO, 2001). It seems worthwhile, now world-wide statistics are available, to reexamine the geography of tea and coffee consumption.The nature of tea and coffeeTea is a drink made by pouring hot water on the dried leaves of the tea plant, camellia sinensis. Coffee is prepared in a similar way, with hot water and the seeds of the coffee tree, of which coffea arabica and coffea canephoria var. robusta are the most used. The flesh of the cherries is removed, and the seeds roasted. The two drinks have a number of properties in common. The aroma and taste of both is pleasant although some find them bitter, especially coffee, unless milk and sugar are added. Hot drinks not only warm the body, but assuage hunger, at least temporarily; more important, prior to the advent of safe public water supplies, the boiling of water reduces the harmful bacteria carried in many water sources. Neither drink has any major nutritional value; a cup of tea contains only four calories, but forty if milk and sugar are added (Encyclopaedia Britannica, 1985a,p. 735).Far more important however is that both contain caffeine which stimulates the central nervous system, reduces sleepiness and increases vigilance; it is this that explains the popularity of both drinks. 
When they were first introduced into Europe in the seventeenth century the only alternative drinks, other than the frequently polluted water supplies, were the alcoholic beverages, which were also free of bacterial contamination. Tea and coffee were thus a valuable alternative to wine, beer or spirits and much beloved by temperance campaigners. In the twentieth century other non-alcoholic drinks have become widely available, such as mineral waters, soft drinks and juices, and have competed with both tea and coffee. In the United States for example, the consumption of soft drinks exceeds that of tea, coffeeand alcoholic beverages combined (P.V.G.D., 1998).World Patterns of consumption per capitaPredominates in North Africa, and is the preferred drink in much of the rest of Africa, although the amount drunk there is very small. There are distinct outliers of tea drinking in the British Isles, in South Africa, Argentina, Chile, and New Zealand (and Australia until recently). Coffee is the favoured drink only in Europe and the Americas. Not surprisingly, the per capita distribution of tea and coffee consumption largely replicates the map of preferences. Coffee consumption is highest in North America and Europe, but hardly drunk at all in the former Soviet Union, Africa orAsia, except in Japan, the Philippines, Israel and South Korea (Figure 3). However it is notable that whilst coffee is the preferred beverage in most of Latin America, wheremuch of the production takes place, per capita consumption is below that in Western Europe and North America. Tea consumption per capita is at its highest in south west and south Asia and North Africa, Russia and also in the British Isles, South Africa, Australia and New Zealand. Tea is little drunk in most of the Americas, tropical Africa or in Europe (Figure 4).Economic factorsThere is a marked difference between the influence of income per capita upon coffee and upon tea. The consumption of coffee per capita for all countries has been regressed upon GDP per capita which has been logarithmically transformed. There is a high correlation (r = 0.72) between income and consumption, and income accounts for half the variation in consumption (r2 = 0.52)(Figure 5). As noted earlier the highest consumption is in Western Europe and North America, the lowest in Africa and Asia, with most of the countries in Latin America falling between the two, as they do in incomes (Figure 3). In contrast tea shows very little correlation (Figure 6) between income and consumption (r = 0.17) nor does income help account for variations in consumption (r2 = 0.03). This is explained by the fact that coffee is the preferred drink in the richer countries, in North America and Europe, and tea drinking is the preferred drink in the poorer countries of Asia and Africa. However if the regression analysis is confined to only those countries where tea is the preferred drink, and those where coffee consumption per capita exceeds tea are excluded, then there is a much higher correlation r( = 0.65) between tea and income, and income is a more powerful determinant (r2 = 0 .4) of variations. It is debatable what conclusions can be drawn from this evidence. Income is clearly important in determining in which countries there are the highest levels of consumption of the more expensive beverage, coffee; but this does not preclude the possibility of people in rich countries that can afford coffee, choosing to drink the cheaper beverage, tea. 
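The regression reported above, per-capita coffee consumption against logarithmically transformed GDP per capita with r = 0.72 and r2 = 0.52, is easy to reproduce in outline. The sketch below shows only the form of the calculation; the six data points are invented placeholders, not the FAO figures used in the paper.

# Sketch of the regression described above; the data are invented placeholders.
import numpy as np
from scipy import stats

gdp_per_capita = np.array([800.0, 2500.0, 6000.0, 12000.0, 25000.0, 40000.0])  # US$ (assumed)
coffee_per_capita = np.array([0.3, 0.9, 2.1, 3.5, 6.0, 9.2])                   # kg per year (assumed)

x = np.log10(gdp_per_capita)                 # income is log-transformed, as in the paper
fit = stats.linregress(x, coffee_per_capita)

print(f"r   = {fit.rvalue:.2f}")             # correlation between income and consumption
print(f"r^2 = {fit.rvalue ** 2:.2f}")        # share of the variation explained by income
print(f"slope = {fit.slope:.2f} kg per head per tenfold rise in GDP per capita")

Repeating the same calculation for tea, and again for tea restricted to the countries where tea is the preferred drink, would mirror the low (r = 0.17) and higher (r = 0.65) correlations quoted in the text.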
Cultural factorsIn 1900 even the United States, where coffee consumption had run well ahead of tea, still had a tea consumption much the same as Russia, where far more tea than coffee was drunk. In all the English- speaking countries, except the United States, more cups of tea were drunk than cups of coffee; tea was imported mainly from the British dominions in South Asia. However in the twentieth century the allegiance to tea has weakened (Table 6); the consumption of tea has declined in the English speaking countries except surprisingly in the United States, and coffee consumption has increased everywhere but the United States and South Africa; more cups of coffee than tea are now drunk in the United States, Canada and Australia; only New Zealand, theUnited Kingdom and South Africa remain faithful to tea (FAO, 2001) and only in South Africa is a greater quantity of tea than coffee consumed (Table 6). There are three possible reasons for the decline of tea and the increase of coffee. First is the changing nature of immigration. Since 1945 migration from Britain and Ireland has been a falling proportion of all migrants to Australia, New Zealand and Canada, reducing the proportion from tea drinking countries. Second, and a more likely explanation, has been the rising incomes in all these countries. It has been noted that in Britain since 1950 tea has behaved like an inferior good; as long as incomes were low the cheaper beverage was preferred, but as incomes have risen so coffee consumption has increased, and tea consumption has fallen (Ritson,1994). Much the same is true of the other English–speaking countries other than the United States. In the latter, concerns about the effect of caffeine on health may help explain the fall of coffee drinking, although the competition from soft drinks may be an equally powerful factor. A third possible reason may be the revival of the coffee house, which was an important factor in the early spread of coffee in the Middle East and in Western Europe in the eighteenth century. In Britain coffee houses had a temporary revival in the 1950s with the introduction from Italy of the expresso machine; more potent has been the rise of the Starbucks coffee chain in the United States and its spread to Britain. Coffee and tea have become associated with different lifestyles, tea being drunk at home by the old, coffee by the young and outgoing.ConclusionsIt is not possible to give a simple explanation of the world pattern of consumption of tea and coffee. It can be shown how the location of production and consumption of both tea and coffee has changed over time. Thus coffee drinking was once confined to the Middle East, then spread to Western Europe and later North America, and when production shifted to Latin America so consumption increased there. But consumption has never been other than negligible in tropical Africa and east, south and south east Asia. Tea consumption remained confined to East Asia until the middle of the seventeenth century. Consumption then spread to the English speaking world and the Muslim countries of the Middle East and North Africa, as well as Russia and its empire. But China remained the leading source oftea until replaced by the plantations of south and south east Asia in the 1880s. Incomes and price have been important but not always paramount factors in determining whether tea or coffee predominates. As tea has always been cheaper per litre than coffee this may explain the sway the former has always had in Asia. 
Japan and Korea became coffee drinkers after the growth of incomes and American influence in the 1940s and 1950s. The presence of high incomes suggests why consumers in the countries of Western Europe and North America have been able to drink coffee in large quantities. But it does not follow that the richer countries automatically drink coffee and the poorer countries tea: tea was the preferred drink in Britain, Australia, New Zealand and Canada for a long period when they were among the richest countries in the world. Cultural factors have perhaps been less important than might have been expected, although the migration of the British and Irish spread tea drinking. If no simple model can be provided to explain the patterns of consumption, at least this analysis demonstrates the great diversity in drinking habits and the need to look at the geography of consumption from different viewpoints.

Chinese translation (excerpt): The worlds of tea and coffee: patterns of consumption. Source: Kluwer Academic Publishers, 2003. Author: David Grigg. Abstract: Coffee and tea are both drunk in most countries, but typically one of them predominates.

Road and bridge engineering: foreign literature translation with Chinese-English parallel text

中英文对照外文翻译(文档含英文原文和中文翻译)Bridge research in EuropeA brief outline is given of the development of the European Union, together withthe research platform in Europe. The special case of post-tensioned bridges in the UK is discussed. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio: relating to the identification of voids in post-tensioned concrete bridges using digital impulse radar.IntroductionThe challenge in any research arena is to harness the findings of different research groups to identify a coherent mass of data, which enables research and practice to be better focused. A particular challenge exists with respect to Europe where language barriers are inevitably very significant. The European Community was formed in the 1960s based upon a political will within continental Europe to avoid the European civil wars, which developed into World War 2 from 1939 to 1945. The strong political motivation formed the original community of which Britain was not a member. Many of the continental countries saw Britain’s interest as being purelyeconomic. The 1970s saw Britain joining what was then the European Economic Community (EEC) and the 1990s has seen the widening of the community to a European Union, EU, with certain political goals together with the objective of a common European currency.Notwithstanding these financial and political developments, civil engineering and bridge engineering in particular have found great difficulty in forming any kind of common thread. Indeed the educational systems for University training are quite different between Britain and the European continental countries. The formation of the EU funding schemes —e.g. Socrates, Brite Euram and other programs have helped significantly. The Socrates scheme is based upon the exchange of students between Universities in different member states. The Brite Euram scheme has involved technical research grants given to consortia of academics and industrial partners within a number of the states—— a Brite Euram bid would normally be led by partners within a number of the statesan industrialist.In terms of dissemination of knowledge, two quite different strands appear to have emerged. The UK and the USA have concentrated primarily upon disseminating basic research in refereed journal publications: ASCE, ICE and other journals. Whereas the continental Europeans have frequently disseminated basic research at conferences where the circulation of the proceedings is restricted.Additionally, language barriers have proved to be very difficult to break down. In countries where English is a strong second language there has been enthusiastic participation in international conferences based within continental Europe —e.g. Germany, Italy, Belgium, The Netherlands and Switzerland. However, countries where English is not a strong second language have been hesitant participants }—e.g. France.European researchExamples of research relating to bridges in Europe can be divided into three types of structure:Masonry arch bridgesBritain has the largest stock of masonry arch bridges. In certain regions of the UK up to 60% of the road bridges are historic stone masonry arch bridges originally constructed for horse drawn traffic. This is less common in other parts of Europe as many of these bridges were destroyed during World War 2.Concrete bridgesA large stock of concrete bridges was constructed during the 1950s, 1960s and 1970s. 
At the time, these structures were seen as maintenance free. Europe also has a large number of post-tensioned concrete bridges with steel tendon ducts preventing radar inspection. This is a particular problem in France and the UK.Steel bridgesSteel bridges went out of fashion in the UK due to their need for maintenance as perceived in the 1960s and 1970s. However, they have been used for long span and rail bridges, and they are now returning to fashion for motorway widening schemes in the UK.Research activity in EuropeIt gives an indication certain areas of expertise and work being undertaken in Europe, but is by no means exhaustive.In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio. The example relates to the identification of voids in post-tensioned concrete bridges, using digital impulse radar.Post-tensioned concrete rail bridge analysisOve Arup and Partners carried out an inspection and assessment of the superstructure of a 160 m long post-tensioned, segmental railway bridge in Manchester to determine its load-carrying capacity prior to a transfer of ownership, for use in the Metrolink light rail system..Particular attention was paid to the integrity of its post-tensioned steel elements.Physical inspection, non-destructive radar testing and other exploratory methods were used to investigate for possible weaknesses in the bridge.Since the sudden collapse of Ynys-y-Gwas Bridge in Wales, UK in 1985, there has been concern about the long-term integrity of segmental, post-tensioned concrete bridges which may b e prone to ‘brittle’ failure without warning. The corrosion protection of the post-tensioned steel cables, where they pass through joints between the segments, has been identified as a major factor affecting the long-term durability and consequent strength of this type of bridge. The identification of voids in grouted tendon ducts at vulnerable positions is recognized as an important step in the detection of such corrosion.Description of bridgeGeneral arrangementBesses o’ th’ Barn Bridge is a 160 m long, three span, segmental, post-tensionedconcrete railway bridge built in 1969. The main span of 90 m crosses over both the M62 motorway and A665 Bury to Prestwick Road. Minimum headroom is 5.18 m from the A665 and the M62 is cleared by approx 12.5 m.The superstructure consists of a central hollow trapezoidal concrete box section 6.7 m high and 4 m wide. The majority of the south and central spans are constructed using 1.27 m long pre-cast concrete trapezoidal box units, post-tensioned together. This box section supports the in site concrete transverse cantilever slabs at bottom flange level, which carry the rail tracks and ballast.The center and south span sections are of post-tensioned construction. These post-tensioned sections have five types of pre-stressing:1. Longitudinal tendons in grouted ducts within the top and bottom flanges.2. Longitudinal internal draped tendons located alongside the webs. These are deflected at internal diaphragm positions and are encased in in site concrete.3. Longitudinal macalloy bars in the transverse cantilever slabs in the central span .4. Vertical macalloy bars in the 229 mm wide webs to enhance shear capacity.5. Transverse macalloy bars through the bottom flange to support the transverse cantilever slabs.Segmental constructionThe pre-cast segmental system of construction used for the south and center span sections was an alternative method proposed by the contractor. 
Current thinkingire suggests that such a form of construction can lead to ‘brittle’ failure of the ententire structure without warning due to corrosion of tendons across a construction joint,The original design concept had been for in site concrete construction.Inspection and assessmentInspectionInspection work was undertaken in a number of phases and was linked with the testing required for the structure. The initial inspections recorded a number of visible problems including:Defective waterproofing on the exposed surface of the top flange.Water trapped in the internal space of the hollow box with depths up to 300 mm.Various drainage problems at joints and abutments.Longitudinal cracking of the exposed soffit of the central span.Longitudinal cracking on sides of the top flange of the pre-stressed sections.Widespread sapling on some in site concrete surfaces with exposed rusting reinforcement.AssessmentThe subject of an earlier paper, the objectives of the assessment were:Estimate the present load-carrying capacity.Identify any structural deficiencies in the original design.Determine reasons for existing problems identified by the inspection.Conclusion to the inspection and assessmentFollowing the inspection and the analytical assessment one major element of doubt still existed. This concerned the condition of the embedded pre-stressing wires, strands, cables or bars. For the purpose of structural analysis these elements、had been assumed to be sound. However, due to the very high forces involved,、a risk to the structure, caused by corrosion to these primary elements, was identified.The initial recommendations which completed the first phase of the assessment were:1. Carry out detailed material testing to determine the condition of hidden structural elements, in particularthe grouted post-tensioned steel cables.2. Conduct concrete durability tests.3. Undertake repairs to defective waterproofing and surface defects in concrete.Testing proceduresNon-destructi v e radar testingDuring the first phase investigation at a joint between pre-cast deck segments the observation of a void in a post-tensioned cable duct gave rise to serious concern about corrosion and the integrity of the pre-stress. However, the extent of this problem was extremely difficult to determine. The bridge contains 93 joints with an average of 24 cables passing through each joint, i.e. there were approx. 2200 positions where investigations could be carried out. A typical section through such a joint is that the 24 draped tendons within the spine did not give rise to concern because these were protected by in site concrete poured without joints after the cables had been stressed.As it was clearly impractical to consider physically exposing all tendon/joint intersections, radar was used to investigate a large numbers of tendons and hence locate duct voids within a modest timescale. It was fortunate that the corrugated steel ducts around the tendons were discontinuous through the joints which allowed theradar to detect the tendons and voids. The problem, however, was still highly complex due to the high density of other steel elements which could interfere with the radar signals and the fact that the area of interest was at most 102 mm wide and embedded between 150 mm and 800 mm deep in thick concrete slabs.Trial radar investigations.Three companies were invited to visit the bridge and conduct a trial investigation. One company decided not to proceed. The remaining two were given 2 weeks to mobilize, test and report. 
Their results were then compared with physical explorations.To make the comparisons, observation holes were drilled vertically downwards into the ducts at a selection of 10 locations which included several where voids were predicted and several where the ducts were predicted to be fully grouted. A 25-mm diameter hole was required in order to facilitate use of the chosen horoscope. The results from the University of Edinburgh yielded an accuracy of around 60%.Main radar sur v ey, horoscope verification of v oids.Having completed a radar survey of the total structure, a baroscopic was then used to investigate all predicted voids and in more than 60% of cases this gave a clear confirmation of the radar findings. In several other cases some evidence of honeycombing in the in site stitch concrete above the duct was found.When viewing voids through the baroscopic, however, it proved impossible to determine their actual size or how far they extended along the tendon ducts although they only appeared to occupy less than the top 25% of the duct diameter. Most of these voids, in fact, were smaller than the diameter of the flexible baroscopic being used (approximately 9 mm) and were seen between the horizontal top surface of the grout and the curved upper limit of the duct. In a very few cases the tops of the pre-stressing strands were visible above the grout but no sign of any trapped water was seen. It was not possible, using the baroscopic, to see whether those cables were corroded.Digital radar testingThe test method involved exciting the joints using radio frequency radar antenna: 1 GHz, 900 MHz and 500 MHz. The highest frequency gives the highest resolution but has shallow depth penetration in the concrete. The lowest frequency gives the greatest depth penetration but yields lower resolution.The data collected on the radar sweeps were recorded on a GSSI SIR System 10.This system involves radar pulsing and recording. The data from the antenna is transformed from an analogue signal to a digital signal using a 16-bit analogue digital converter giving a very high resolution for subsequent data processing. The data is displayed on site on a high-resolution color monitor. Following visual inspection it isthen stored digitally on a 2.3-gigabyte tape for subsequent analysis and signal processing. The tape first of all records a ‘header’ noting the digital radar settings together with the trace number prior to recording the actual data. When the data is played back, one is able to clearly identify all the relevant settings —making for accurate and reliable data reproduction.At particular locations along the traces, the trace was marked using a marker switch on the recording unit or the antenna.All the digital records were subsequently downloaded at the University’s NDT laboratory on to a micro-computer.(The raw data prior to processing consumed 35 megabytes of digital data.) Post-processing was undertaken using sophisticated signal processing software. Techniques available for the analysis include changing the color transform and changing the scales from linear to a skewed distribution in order to highlight、突出certain features. Also, the color transforms could be changed to highlight phase changes. In addition to these color transform facilities, sophisticated horizontal and vertical filtering procedures are available. Using a large screen monitor it is possible to display in split screens the raw data and the transformed processed data. 
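Two details of the radar work described above can be illustrated with a short, purely indicative calculation: the resolution-versus-penetration trade-off between the 500 MHz, 900 MHz and 1 GHz antennas, and the kind of horizontal filtering mentioned among the post-processing facilities. The permittivity value and the synthetic radargram below are assumptions, and the snippet does not reproduce the actual GSSI SIR System 10 processing chain.

# Indicative only: wavelength in concrete for the three antennas, plus a simple
# horizontal (background-removal) filter of the kind referred to above.
import numpy as np

c0    = 3.0e8     # free-space wave speed, m/s
eps_r = 7.0       # assumed relative permittivity of concrete (typically ~6-9)

for f in (500e6, 900e6, 1.0e9):
    wavelength = c0 / (f * np.sqrt(eps_r))
    print(f"{f / 1e6:4.0f} MHz: wavelength in concrete ~ {wavelength * 100:.1f} cm")

# A radargram as a 2-D array: rows are time samples, columns are traces along the scan.
rng = np.random.default_rng(0)
radargram = rng.normal(0.0, 1.0, size=(512, 200))   # synthetic stand-in data
radargram += 5.0                                     # flat "ringing" common to every trace

# Horizontal filtering: subtracting the mean trace suppresses laterally continuous
# reflections (antenna ringing, the slab surface) so that localized targets such as
# ducts and voids stand out.
background = radargram.mean(axis=1, keepdims=True)
filtered = radargram - background
print("mean amplitude after background removal:", round(float(filtered.mean()), 6))

With a permittivity of about 7 the wavelength at 1 GHz comes out near 11 cm, the same order as the 102 mm wide region of interest, which is consistent with the remark that the highest frequency gives the best resolution while the lower frequencies penetrate deeper.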
With these facilities one is able to get an accurate indication of the processing which has taken place. The computer screen displays the time-domain calibrations of the reflected signals on the vertical axis. A further facility of the software was the ability to display the individual radar pulses as time-domain wiggle plots. This was a particularly valuable feature when looking at individual records in the vicinity of the tendons.

Interpretation of findings
A full analysis of the findings is given elsewhere. Essentially, the digitized radar plots were transformed to color line scans, and where double phase shifts were identified at the joints, voiding was diagnosed.

Conclusions
1. An outline of the bridge research platform in Europe has been given.
2. The use of impulse radar has contributed considerably to the level of confidence in the assessment of the Besses o' th' Barn Rail Bridge.
3. The radar investigations revealed extensive voiding within the post-tensioned cable ducts. However, no sign of corrosion on the stressing wires was found except in the very first investigation.

Chinese translation (excerpt): Bridge research in Europe. The common research platform of Europe was born with the European Union.

Foreign literature translation: original text

DOI10.1007/s10711-012-9699-zORIGINAL PAPERParking garages with optimal dynamicsMeital Cohen·Barak WeissReceived:19January2011/Accepted:22January2012©Springer Science+Business Media B.V.2012Abstract We construct generalized polygons(‘parking garages’)in which the billiard flow satisfies the Veech dichotomy,although the associated translation surface obtained from the Zemlyakov–Katok unfolding is not a lattice surface.We also explain the difficulties in constructing a genuine polygon with these properties.Keywords Active vitamin D·Parathyroid hormone-related peptide·Translation surfaces·Parking garages·Veech dichotomy·BilliardsMathematics Subject Classification(2000)37E351Introduction and statement of resultsA parking garage is an immersion h:N→R2,where N is a two dimensional compact connected manifold with boundary,and h(∂N)is afinite union of linear segments.A parking garage is called rational if the group generated by the linear parts of the reflections in the boundary segments isfinite.If h is actually an embedding,the parking garage is a polygon; thus polygons form a subset of parking garages,and rationals polygons(i.e.polygons all of whose angles are rational multiples ofπ)form a subset of rational parking garages.The dynamics of the billiardflow in a rational polygon has been intensively studied for over a century;see[7]for an early example,and[5,10,13,16]for recent surveys.The defi-nition of the billiardflow on a polygon readily extends to a parking garage:on the interior of N the billiardflow is the geodesicflow on the unit tangent bundle of N(with respect to the pullback of the Euclidean metric)and at the boundary,theflow is defined by elastic reflection (angle of incidence equals the angle of return).Theflow is undefined at thefinitely many M.Cohen·B.Weiss(B)Ben Gurion University,84105Be’er Sheva,Israele-mail:barakw@math.bgu.ac.ilM.Cohene-mail:comei@bgu.ac.ilpoints of N which map to‘corners’,i.e.endpoints of boundary segments,and hence at thecountable union of codimension1submanifolds corresponding to points in the unit tangentbundle for which the corresponding geodesics eventually arrive at corners in positive or neg-ative time.Since the direction of motion of a trajectory changes at a boundary segment viaa reflection in its side,for rational parking garages,onlyfinitely many directions of motionare assumed.In other words,the phase space of the billiardflow decomposes into invarianttwo-dimensional subsets corresponding tofixing the directions of motion.Veech[12]discovered that the billiardflow in some special polygons exhibits a strikingly he found polygons for which,in any initial direction,theflow is eithercompletely periodic(all orbits are periodic),or uniquely ergodic(all orbits are equidistrib-uted).Following McMullen we will say that a polygon with these properties has optimaldynamics.We briefly summarize Veech’s strategy of proof.A standard unfolding construc-tion usually attributed to Zemlyakov and Katok[15]1,associates to any rational polygon Pa translation surface M P,such that the billiardflow on P is essentially equivalent to thestraightlineflow on M P.Associated with any translation surface M is a Fuchsian group M,now known as the Veech group of M,which is typically trivial.Veech found M and P forwhich this group is a non-arithmetic lattice in SL2(R).We will call these lattice surfaces and lattice polygons respectively.Veech investigated the SL2(R)-action on the moduli space of translation surfaces,and building on earlier work of Masur,showed that lattice surfaces 
haveoptimal dynamics.From this it follows that lattice polygons have optimal dynamics.This chain of reasoning remains valid if one starts with a parking garage instead of apolygon;namely,the unfolding construction associates a translation surface to a parkinggarage,and one may define a lattice parking garage in an analogous way.The arguments ofVeech then show that the billiardflow in a lattice parking garage has optimal dynamics.Thisgeneralization is not vacuous:lattice parking garages,which are not polygons,were recentlydiscovered by Bouw and Möller[2].The term‘parking garage’was coined by Möller.A natural question is whether Veech’s result admits a converse,i.e.whether non-latticepolygons or parking garages may also have optimal dynamics.In[11],Smillie and the sec-ond-named author showed that there are non-lattice translation surfaces which have optimaldynamics.However translation surfaces arising from billiards form a set of measure zero inthe moduli space of translation surfaces,and it was not clear whether the examples of[11]arise from polygons or parking garages.In this paper we show:Theorem1.1There are non-lattice parking garages with optimal dynamics.An example of such a parking garage is shown in Fig.1.Veech’s work shows that for lattice polygons,the directions in which all orbits are periodicare precisely those containing a saddle connection,i.e.a billiard path connecting corners ofthe polygon which unfold to singularities of the corresponding surface.Following Cheunget al.[3],if a polygon P has optimal dynamics,and the periodic directions coincide with thedirections of saddle connections,we will say that P satisfies strict ergodicity and topologicaldichotomy.It is not clear to us whether our example satisfies this stronger property.As weexplain in Remark3.2below,this would follow if it were known that the center of the regularn-gon is a‘connection point’in the sense of Gutkin,Hubert and Schmidt[8]for some nwhich is an odd multiple of3.Veech also showed that for a lattice polygon P,the number N P(T)of periodic strips on P of length at most T satisfies a quadratic growth estimate of the form N P(T)∼cT2for a positive constant c.As we explain in Remark3.3,our examples also satisfy such a quadratic growth estimate.1But dating back at least to Fox and Kershner[7].Fig.1A non-lattice parkinggarage with optimal dynamics.(Here 2/n represents angle 2π/n )It remains an open question whether there is a genuine polygon which has optimal dynam-ics and is not a lattice polygon.Although our results make it seem likely that such a polygon exists,in her M.Sc.thesis [4],the first-named author obtained severe restrictions on such a polygon.In particular she showed that there are no such polygons which may be constructed from any of the currently known lattice examples via the covering construction as in [11,13].We explain these results and prove a representative special case in §4.2PreliminariesIn this section we cite some results which we will need,and deduce simple consequences.For the sake of brevity we will refer the reader to [10,11,16]for definitions of translation surfaces.Suppose S 1,S 2are compact orientable surfaces and π:S 2→S 1is a branched cover.That is,πis continuous and surjective,and there is a finite 1⊂S 1,called the set of branch points ,such that for 2=π−1( 1),the restriction of πto S 2 2is a covering map of finite degree d ,and for any p ∈ 1,#π−1(p )<d .A ramification point is a point q ∈ 2for which there is a neighborhood U such that {q }=U ∩π−1(π(q ))and for all u ∈U {q },# U 
∩π−1(π(u )) ≥2.If M 1,M 2are translation surfaces,a translation map is a surjective map M 2→M 1which is a translation in charts.It is a branched cover.In contrast to other authors (cf.[8,13]),we do not require that the set of branch points be distinct from the singularities of M 1,or that they be marked.It is clear that the ramification points of the cover are singularities on M 2.If M is a lattice surface,a point p ∈M is called periodic if its orbit under the group of affine automorphisms of M is finite.A point p ∈M is called a connection point if any seg-ment joining a singularity with p is contained in a saddle connection (i.e.a segment joining singularities)on M .The following proposition summarizes results discussed in [7,9–11]:Proposition 2.1(a)A non-minimal direction on a translation surface contains a saddle connection.(b)If M 1is a lattice surface,M 2→M 1is translation map with a unique branch point,then any minimal direction on M 2is uniquely ergodic.(c)If M2→M1is a translation map such that M1is a lattice surface,then all branchpoints are periodic if and only if M2is a lattice surface.(d)If M2→M1is a translation map with a unique branch point,such that M1is a latticesurface and the branch point is a connection point,then any saddle connection direction on M2is periodic.Corollary2.2Let M2→M1be a translation map such that M1is a lattice surface with a unique branch point p.Then:(1)M2has optimal dynamics.(2)If p is a connection point then M2satisfies topological dichotomy and strict ergodicity.(3)If p is not a periodic point then M2is not a lattice surface.Proof To prove(1),by(b),the minimal directions are uniquely ergodic,and we need to prove that the remaining directions are either completely periodic or uniquely ergodic. By(a),in any non-minimal direction on M2there is a saddle connectionδ,and there are three possibilities:(i)δprojects to a saddle connection on M1.(ii)δprojects to a geodesic segment connecting the branch point p to itself.(iii)δprojects to a geodesic segment connecting p to a singularity.In case(i)and(ii)since M1is a lattice surface,the direction is periodic on M1,hence on M2as well.In case(iii),there are two subcases:ifδprojects to a part of a saddle connec-tion on M1,then it is also a periodic direction.Otherwise,in light of Proposition2.1(a),the direction must be minimal in M1,and hence,by Proposition2.1(b),uniquely ergodic in M2. 
This proves(1).Note also that if p is a connection point then the last subcase does not arise, so all directions which are non-minimal on M2are periodic.This proves(2).Statement(3) follows from(c).We now describe the unfolding construction[7,15],extended to parking garages.Let P=(h:N→R2).An edge of P is a connected subset L of∂N such that h(L)is a straight segment and L is maximal with these properties(with respect to inclusion).A vertex of P is any point which is an endpoint of an edge.The angle at a vertex is the total interior angle, measured via the pullback of the Euclidean metric,at the vertex.By convention we always choose the positive angles.Note that for polygons,angles are less than2π,but for parking garages there is no apriori upper bound on the angle at a vertex.Since our parking garages are rational,all angles are rational multiples ofπ,and we always write them as p/q,omitting πfrom the notation.Let G P be the dihedral group generated by the linear parts of reflections in h(L),for all edges L.For the sake of brevity,if there is a reflection with linear part gfixing a line parallel to L,we will say that gfixes L.Let S be the topological space obtained from N×G P by identifying(x,g1)with(x,g2)whenever g−11g2fixes an edge containing h(x).Topologically S is a compact orientable surface,and the immersions g◦h on each N×{g}induce an atlas of charts to R2which endows S with a translation surface structure.We denote this translation surface by M P,and writeπP for the map N×G P→M P.We will be interested in a‘partial unfolding’which is a variant of this construction,in which we reflect a parking garage repeatedly around several of its edges to form a larger parking garage.Formally,suppose P=(h:N→R2)and Q=(h :N →R2)are parking garages.For ≥1,we say that P tiles Q by reflections,and that is the number of tiles,if the following holds.There are maps h 1,...h :N→N and g1,...,g ∈G P(not necessarily distinct)satisfying:(A)The h i are homeomorphisms onto their images,and N = h i (N ).(B)For each i ,the linear part of h ◦h i ◦h −1is everywhere equal to g i .(C)For each 1≤i <j ≤ ,let L i j =h i (N )∩h j (N )and L =(h i )−1(L i j ).Then (h j )−1◦h i is the identity on L ,and L is either empty,or a vertex,or an edge of P .If L is an edge then h i (N )∪h j (N )is a neighborhood of L i j.If L i j is a vertex then there is a finite set of i =i 1,i 2,...,i k =j such that h i s (N )contains a neighborhood of L i j ,and each consecutive pair h i t (N ),h i t +1(N )intersect along an edge containing L i j .V orobets [13]realized that a tiling of parking garages gives rise to a branched cover.More precisely:Proposition 2.3Suppose P tiles Q by reflections with tiles,M P ,M Q are the correspond-ing translation surfaces obtained via the unfolding construction,and G P ,G Q are the cor-responding reflection groups.Then there is a translation map M Q →M P ,such that the following hold:(1)G Q ⊂G P .(2)The branch points are contained in the G P -orbit of the vertices of P .(3)The degree of the cover is [G P :G Q ].(4)Let z ∈M P be a point which is represented (as an element of N ×{1,...,r })by(x ,k )with x a vertex in P with angle m n (where gcd (m ,n )=1).Let (y i )⊂M Q be the pre-images of z,with angles k i m n in Q .Then z is a branch point of the cover if and only if k i n for some i.Proof Assertion (1)follows from the fact that Q is tiled by P .Since this will be impor-tant in the sequel,we will describe the covering map M Q →M P in detail.We will map (x ,g )∈N ×G Q to πP (x ,gg i )∈M P ,where x =h i (x ).We 
now check that this map is independent of the choice of x ,i ,and descends to a well-defined map M Q →M P ,which is a translation in charts.If x =h i (x 1)=h j (x 2)then x 1=x 2since (h i )−1◦h j is the identity.If x is in the relative interior of an edge L i j thenπP (x ,gg i )=πP (x ,gg j )(1)since (gg i )−1gg j =g −1i g j fixes an edge containing h (x 1).If x 1is a vertex of P then one proves (1)by an induction on k ,where k is as in (C).This shows that the map is well-defined.We now show that it descends to a map M Q →M P .Suppose (x ,g ),(x ,g )are two points in N ×G Q which are identified in M Q ,i.e.x ∈∂N is in the relative interior of an edge fixed by g −1g .By (C)there is a unique i such that x is in the image of h i .Thus (x ,g )maps to (x ,gg i )and (x ,g )maps to (x ,g g i ),and g −1i g −1g g i fixes the edge through x =g −1i (x ).It remains to show that the map we have defined is a translation in charts.This follows immediately from the chain rule and (B).Assertion (2)is simple and left to the reader.For assertion (3)we note that M P (resp.M Q )is made of |G P |(resp. |G Q |)copies of P .The point z will be a branch point if and only if the total angle around z ∈M P differs from the total angle around one of the pre-images y i ∈M Q .The total angle at a singularity corresponding to a vertex with angle r /s (where gcd (r ,s )=1)is 2r π,thus the total angle at z is 2m πand the total angle at y i is 2k i m πgcd (k i ,n ).Assertion (4)follows.3Non-lattice dynamically optimal parking garagesIn this section we prove the following result,which immediately implies Theorem1.1: Theorem3.1Let n≥9be an odd number divisible by3,and let P be an isosceles triangle with equal angles1/n.Let Q be the parking garage made of four copies of P glued as in Fig.1, so that Q has vertices(in cyclic order)with angles1/n,2/n,3/n,(n−2)/n,2/n,3(n−2)/n. Then M P is a lattice surface and M Q→M P is a translation map with one aperiodic branchpoint.In particular Q is a non-lattice parking garage with optimal dynamics.Proof The translation surface M P is the double n-gon,one of Veech’s original examples of lattice surfaces[12].The groups G P and G Q are both equal to the dihedral group D n.Thus by Proposition2.3,the degree of the cover M Q→M P is four.Again by Proposition2.3, since n is odd and divisible by3,the only vertices which correspond to branch points are the two vertices z1,z2with angle2/n(they correspond to the case k i=2while the other vertices correspond to1or3).In the surface M P there are two points which correspond to vertices of equal angle in P(the centers of the two n-gons),and these points are known to be aperiodic [9].We need to check that z1and z2both map to the same point in M P.This follows from the fact that both are opposite the vertex z3with angle3/n,which also corresponds to the center of an n-gon,so in M P project to a point which is distinct from z3. 
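The branching criterion of Proposition 2.3(4) and its use in the proof of Theorem 3.1 are garbled by the text extraction above. The display below is a reconstruction from the surrounding discussion, offered only as a reading aid; in particular the symbol for "does not divide" is how we read the criterion stated there.

\[
\theta(z) = 2m\pi,
\qquad
\theta(y_i) = \frac{2\,k_i\,m\,\pi}{\gcd(k_i,\,n)},
\qquad\text{so $z$ is a branch point}\iff k_i \nmid n \ \text{for some } i .
\]
For the parking garage of Theorem 3.1, where $n$ is odd and divisible by $3$, the relevant multiplicities over the vertices of $P$ are $k_i\in\{1,2,3\}$, and
\[
1\mid n,\qquad 3\mid n \ (\text{no branching}),\qquad 2\nmid n \ (\text{branching}),
\]
so the cover $M_Q\to M_P$ branches only under the two angle-$\tfrac{2}{n}\pi$ vertices of $Q$, which both project to the center of an $n$-gon in $M_P$, an aperiodic point.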
Remark3.2As of this writing,it is not known whether the center of the regular n-gon is a connection point on the double n-gon surface.If this turns out to be the case for some n which is an odd multiple of3,then by Corollary2.2(2),our construction satisfies strict ergodicity and topological dichotomy.See[1]for some recent related results.Remark3.3Since our examples are obtained by taking branched covers over lattice surfaces, a theorem of Eskin et al.[6,Thm.8.12]shows that our examples also satisfy a quadratic growth estimate of the form N P(T)∼cT2;moreover§9of[6]explains how one may explicitly compute the constant c.4Non-lattice optimal polygons are hard tofindIn this section we present results indicating that the above considerations will not easily yield a non-lattice polygon with optimal dynamics.Isolating the properties necessary for our proof of Theorem3.1,we say that a pair of polygons(P,Q)is suitable if the following hold:•P is a lattice polygon.•P tiles Q by reflections.•The corresponding cover M Q→M P as in Proposition2.3has a unique branch point which is aperiodic.In her M.Sc.thesis at Ben Gurion University,thefirst-named author conducted an exten-sive search for a suitable pair of polygons.By Corollary2.2,such a pair will have yielded a non-lattice polygon with optimal dynamics.The search begins with a list of candidates for P,i.e.a list of currently known lattice polygons.At present,due to work of many authors, there is a fairly large list of known lattice polygons but there is no classification of all lattice polygons.In[4],the full list of lattice polygons known as of this writing is given,and the following is proved:Theorem4.1(M.Cohen)Among the list of lattice surfaces given in[4],there is no P for which there is Q such that(P,Q)is a suitable pair.The proof of Theorem4.1contains a detailed case-by-case analysis for each of the differ-ent possible P.These cases involve some common arguments which we will illustrate in this section,by proving the special case in which P is any of the obtuse triangles investigated byWard[14]:Theorem4.2For n≥4,let P=P n be the(lattice)triangle with angles1n,12n,2n−32n.Then there is no polygon Q for which(P,Q)is a suitable pair.Our proof relies on some auxiliary statements which are of independent interest.In all of them,M Q→M P is the branched cover with unique branch point corresponding to a suitable pair(P,Q).These statements are also valid in the more general case in which P,Q are parking garages.Recall that an affine automorphism of a translation surface is a homeomorphism which is linear in charts.We denote by Aff(M)the group of affine automorphisms of M and by D:Aff(M)→GL2(R)the homomorphism mapping an affine automorphism to its linear part.Note that we allow orientation-reversing affine automorphisms,i.e.detϕmay be1 or−1.We now explain how G P acts on M P by translation equivalence.LetπP:N×G P→M P and S be as in the discussion preceding Proposition2.3,and let g∈G P.Since the left action of g on G is a permutation and preserves the gluing ruleπP,the map N×G P→N×G P sending(x,g )to(x,g−1g )induces a homeomorphismϕ:S→S and g◦h◦ϕis a translation in charts.Thus g∈G P gives a translation isomorphism of M P,and similarly g∈G P gives a translation isomorphism of M Q.Lemma4.3The branch point of the cover p:M Q→M P isfixed by G Q.Proof Since G Q⊂G P,any g∈G Q induces translation isomorphisms of both M P and M Q.We denote both by g.The definition of p given in thefirst paragraph of the proof of Proposition2.3shows that p◦g=g◦p;namely both maps are 
induced by sending (x ,g )∈N ×G Q toπP(x,gg g i),where x =h i(x).Since the cover p has a unique branch point,any g∈G Q mustfix it. Lemma4.4If an affine automorphismϕof a translation surface has infinitely manyfixed points then Dϕfixes a nonzero vector,in its linear action on R2.Proof Suppose by contradiction that the linear action of Dϕon the plane has zero as a uniquefixed point,and let Fϕbe the set offixed points forϕ.For any x∈Fϕwhich is not a singularity,there is a chart from a neighborhood U x of x to R2with x→0,and a smaller neighborhood V x⊂U x,such thatϕ(V x)⊂U x and when expressed in this chart,ϕ|V x is given by the linear action of Dϕon the plane.In particular x is the onlyfixed point in V x. Similarly,if x∈Fϕis a singularity,then there is a neighborhood U x of x which maps to R2 via afinite branched cover ramified at x→0,such that the action ofϕin V x⊂U x covers the linear action of Dϕ.Again we see that x is the onlyfixed point in V x.By compactness wefind that Fϕisfinite,contrary to hypothesis. Lemma4.5Suppose M is a lattice surface andϕ∈Aff(M)has Dϕ=−Id.Then afixed point forϕis periodic.Proof LetF1={σ∈Aff(M):Dσ=−Id}.Thenϕ∈F1and F1isfinite,since it is a coset for the group ker D which is known to be finite.Let A⊂M be the set of points which arefixed by someσ∈F1.By Lemma4.4this is afinite set,which contains thefixed points forϕ.Thus in order to prove the Lemma,it suffices to show that A is Aff(M)-invariant.Letψ∈Aff(M),and let x∈A,so that x=σ(x)with Dσ=−Id.Since-Id is central in GL2(R),D(σψ)=D(ψσ),so there is f∈ker D such thatψσ=fσψ.Thereforeψ(x)=ψσ(x)=fσψ(x),and fσ∈F1.This proves thatψ(x)∈A.Remark4.6This improves Theorem10of[8],where a similar conclusion is obtained under the additional assumptions that M is hyperelliptic and Aff(M)is generated by elliptic ele-ments.The following are immediate consequences:Corollary4.7Suppose(P,Q)is a suitable pair.Then•−Id/∈D(G Q).•None of the angles between two edges of Q are of the form p/q with gcd(p,q)=1and q even.Proof of Theorem4.2We will suppose that Q is such that(P,Q)are a suitable pair and reach a contradiction.If n is even,then Aff(M P)contains a rotation byπwhichfixes the points in M P coming from vertices of P.Thus by Lemma4.5all vertices of P give rise to periodic points,contradicting Proposition2.1(c).So n must be odd.Let x1,x2,x3be the vertices of P with corresponding angles1/n,1/2n,(2n−3)/2n. Then x3gives rise to a singularity,hence a periodic point.Also using Lemma4.5and the rotation byπ,one sees that x2also gives rise to a periodic point.So the unique branch point must correspond to the vertex x1.The images of the vertex x1in P give rise to two regular points in M P,marked c1,c2in Fig.2.Any element of G P acts on{c1,c2}by a permutation, so by Lemma4.3,G Q must be contained in the subgroup of index twofixing both of the c i. 
Let e1be the edge of P opposite x1.Since the reflection in e1,or any edge which is an image of e1under G P,swaps the c i,we have:e1is not a boundary edge of Q.(2) We now claim that in Q,any vertex which corresponds to the vertex x3from P is alwaysdoubled,i.e.consists of an angle of(2n−3)/n.Indeed,for any polygon P0,the group G P0 is the dihedral group D N where N is the least common multiple of the denominators of theangles at vertices of P0.In particular it contains-Id when N is even.Writing(2n−3)/2n in reduced form we have an even denominator,and since,by Corollary4.7,−Id/∈G Q,in Q the angle at vertex x3must be multiplied by an even integer2k.Since2k(2n−3)/2n is bigger than2if k>1,and since the total angle at a vertex of a polygon is less than2π,we must have k=1,i.e.any vertex in Q corresponding to the vertex x3is always doubled.This establishes the claim.It is here that we have used the assumption that Q is a polygon and not a parking garage.Fig.2Ward’s surface,n=5Fig.3Two options to start the construction ofQThere are two possible configurations in which a vertex x3is doubled,as shown in Fig.3. The bold lines indicate lines which are external,i.e.boundary edges of Q.By(2),the con-figuration on the right cannot occur.Let us denote the polygon on the left hand side of Fig.3by Q0.It cannot be equal to Q,since it is a lattice polygon.We now enlarge Q0by adding copies of P step by step,as described in Fig.4.Without loss of generality wefirst add triangle number1.By(2),the broken line indicates a side which must be internal in Q.Therefore,we add triangle number 2.We denote the resulting polygon by Q1.One can check by computing angles,using thefact that n is odd,and using Proposition2.3(4)that the cover M Q1→M P will branch overthe points a corresponding to vertex x2.Since the allowed branching is only over the points corresponding to x1,we must have Q1 Q,so we continue the construction.Without loss of generality we add triangle number3.Again,by(2),the broken line indicates a side which must be internal in Q.Therefore,we add triangle number4,obtaining Q2.Now,using Prop-osition2.3(4)again,in the cover M Q2→M P we have branching over two vertices u andv which are both of type x1and correspond to distinct points c1and c2in M P.This implies Q2 Q.Fig.4Steps of the construction of QSince both vertices u and v are delimited by2external sides,we cannot change the angle to prevent the branching over one of these points.This means that no matter how we continue to construct Q,the branching in the cover M Q→M P will occur over at least two points—a contradiction.Acknowledgments We are grateful to Yitwah Cheung and Patrick Hooper for helpful discussions,and to the referee for a careful reading and helpful remarks which improved the presentation.This research was supported by the Israel Science Foundation and the Binational Science Foundation.References1.Arnoux,P.,Schmidt,T.:Veech surfaces with non-periodic directions in the tracefield.J.Mod.Dyn.3(4),611–629(2009)2.Bouw,I.,Möller,M.:Teichmüller curves,triangle groups,and Lyapunov exponents.Ann.Math.172,139–185(2010)3.Cheung,Y.,Hubert,P.,Masur,H.:Topological dichotomy and strict ergodicity for translation surfaces.Ergod.Theory Dyn.Syst.28,1729–1748(2008)4.Cohen,M.:Looking for a Billiard Table which is not a Lattice Polygon but satisfies the Veech dichotomy,M.Sc.thesis,Ben-Gurion University(2010)/pdf/1011.32175.DeMarco,L.:The conformal geometry of billiards.Bull.AMS48(1),33–52(2011)6.Eskin,A.,Marklof,J.,Morris,D.:Unipotentflows on the space of branched 
covers of Veech surfaces. Ergod. Theory Dyn. Syst. 26(1), 129–162 (2006)
7. Fox, R.H., Kershner, R.B.: Concerning the transitive properties of geodesics on a rational polyhedron. Duke Math. J. 2(1), 147–150 (1936)
8. Gutkin, E., Hubert, P., Schmidt, T.: Affine diffeomorphisms of translation surfaces: Periodic points, Fuchsian groups, and arithmeticity. Ann. Sci. École Norm. Sup. (4) 36, 847–866 (2003)
9. Hubert, P., Schmidt, T.: Infinitely generated Veech groups. Duke Math. J. 123(1), 49–69 (2004)
10. Masur, H., Tabachnikov, S.: Rational billiards and flat structures. In: Handbook of dynamical systems, vol. 1A, pp. 1015–1089. North-Holland, Amsterdam (2002)
11. Smillie, J., Weiss, B.: Veech dichotomy and the lattice property. Ergod. Theory Dyn. Syst. 28, 1959–1972 (2008)
12. Veech, W.A.: Teichmüller curves in moduli space, Eisenstein series and an application to triangular billiards. Invent. Math. 97, 553–583 (1989)
13. Vorobets, Y.: Planar structures and billiards in rational polygons: the Veech alternative (Russian); translation in Russian Math. Surveys 51(5), 779–817 (1996)
14. Ward, C.C.: Calculation of Fuchsian groups associated to billiards in a rational triangle. Ergod. Theory Dyn. Syst. 18, 1019–1042 (1998)
15. Zemlyakov, A., Katok, A.: Topological transitivity of billiards in polygons. Math. Notes USSR Acad. Sci. 18(2), 291–300 (1975). (English translation in Math. Notes 18(2), 760–764)
16. Zorich, A.: Flat surfaces. In: Cartier, P., Julia, B., Moussa, P., Vanhove, P. (eds.) Frontiers in number theory, physics and geometry. Springer, Berlin (2006)

Metal heat treatment: foreign literature translation with Chinese-English parallel text

English-Chinese foreign literature translation (document contains the English original and the Chinese translation). Original text: Heat Treatment of Metal

The generally accepted definition for heat treating metals and metal alloys is "heating and cooling a solid metal or alloy in a way so as to obtain specific conditions or properties." Heating for the sole purpose of hot working (as in forging operations) is excluded from this definition. Likewise, the types of heat treatment that are sometimes used for products such as glass or plastics are also excluded from coverage by this definition.

Transformation Curves
The basis for heat treatment is the time-temperature-transformation curves, or TTT curves, in which all three parameters are plotted in a single diagram. Because of the shape of the curves, they are also sometimes called C-curves or S-curves.

To plot TTT curves, the particular steel is held at a given temperature and the structure is examined at predetermined intervals to record the amount of transformation that has taken place. It is known that eutectoid steel (T80) under equilibrium conditions is all austenite above 723℃, whereas below that temperature it is pearlite. To form pearlite, the carbon atoms must diffuse to form cementite. Diffusion being a rate process, sufficient time is required for complete transformation of austenite to pearlite. From different samples, it is possible to note the amount of transformation taking place at any temperature. These points are then plotted on a graph with time and temperature as the axes. Through these points, transformation curves can be plotted, as shown in Fig. 1 for eutectoid steel. The curve at the extreme left represents the time required for the transformation of austenite to pearlite to start at any given temperature. Similarly, the curve at the extreme right represents the time required for completing the transformation. Between the two curves are the points representing partial transformation. The horizontal lines Ms and Mf represent the start and finish of the martensitic transformation.

Classification of Heat Treating Processes
In some instances, heat treatment procedures are clear-cut in terms of technique and application, whereas in other instances, descriptions or simple explanations are insufficient because the same technique frequently may be used to obtain different objectives. For example, stress relieving and tempering are often accomplished with the same equipment and by use of identical time and temperature cycles. The objectives, however, are different for the two processes. The following descriptions of the principal heat treating processes are generally arranged according to their interrelationships.

Normalizing consists of heating a ferrous alloy to a suitable temperature (usually 50°F to 100°F, or 28℃ to 56℃) above its specific upper transformation temperature. This is followed by cooling in still air to at least some temperature well below its transformation temperature range. For low-carbon steels, the resulting structure and properties are the same as those achieved by full annealing; for most ferrous alloys, normalizing and annealing are not synonymous. Normalizing usually is used as a conditioning treatment, notably for refining the grains of steels that have been subjected to high temperatures for forging or other hot working operations. The normalizing process usually is succeeded by another heat treating operation such as austenitizing for hardening, annealing, or tempering.

Annealing is a generic term denoting a heat treatment that consists of heating to and holding at a suitable temperature followed by cooling at a suitable rate. It is used primarily to soften metallic materials, but also to simultaneously produce desired changes in other properties or in microstructure. The purpose of such changes may be, but is not confined to, improvement of machinability, facilitation of cold work (known as in-process annealing), improvement of mechanical or electrical properties, or an increase in dimensional stability. When applied solely to relieve stresses, it commonly is called stress-relief annealing, which is synonymous with stress relieving. When the term "annealing" is applied to ferrous alloys without qualification, full annealing is implied. This is achieved by heating above the alloy's transformation temperature, then applying a cooling cycle which provides maximum softness. This cycle may vary widely, depending on composition and characteristics of the specific alloy.

Quenching is the rapid cooling of a steel or alloy from the austenitizing temperature by immersing the workpiece in a liquid or gaseous medium. Quenching media commonly used include water, 5% brine, 5% caustic in an aqueous solution, oil, polymer solutions, or gas (usually air or nitrogen). Selection of a quenching medium depends largely on the hardenability of the material and the mass of the material being treated (principally section thickness). The cooling capabilities of the above-listed quenching media vary greatly. In selecting a quenching medium, it is best to avoid a solution that has more cooling power than is needed to achieve the results, thus minimizing the possibility of cracking and warping of the parts being treated. Modifications of the term quenching include direct quenching, fog quenching, hot quenching, interrupted quenching, selective quenching, spray quenching, and time quenching.

Tempering. In heat treating of ferrous alloys, tempering consists of reheating the austenitized and quench-hardened steel or iron to some preselected temperature that is below the lower transformation temperature (generally below 1300°F, or 705℃). Tempering offers a means of obtaining various combinations of mechanical properties. Tempering temperatures used for hardened steels are often no higher than 300°F (150℃). The term "tempering" should not be confused with either process annealing or stress relieving. Even though time and temperature cycles for the three processes may be the same, the conditions of the materials being processed and the objectives may be different.

Stress relieving. Like tempering, stress relieving is always done by heating to some temperature below the lower transformation temperature for steels and irons. For nonferrous metals, the temperature may vary from slightly above room temperature to several hundred degrees, depending on the alloy and the amount of stress relief that is desired. The primary purpose of stress relieving is to relieve stresses that have been imparted to the workpiece by such processes as forming, rolling, machining or welding. The usual procedure is to heat the workpiece to the pre-established temperature long enough to reduce the residual stresses (this is a time- and temperature-dependent operation) to an acceptable level; this is followed by cooling at a relatively slow rate to avoid creation of new stresses.
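The temperature guidance above can be turned into a small helper. The sketch below is not from the source text: the upper (A3/Acm) and lower (A1) critical temperatures are inputs the user must supply from the phase diagram of the specific steel; only the 28-56℃ normalizing offset and the "below the lower transformation temperature" rule for tempering and stress relieving are taken from the passage, and the 845℃ example value is an illustrative assumption.

```python
# Minimal sketch of the soak-temperature rules quoted above (assumptions noted inline).

def normalizing_window(upper_critical_c: float) -> tuple[float, float]:
    """Normalizing soak range: 28-56 C above the upper transformation temperature."""
    return (upper_critical_c + 28.0, upper_critical_c + 56.0)

def is_valid_tempering_temperature(temp_c: float, lower_critical_c: float = 723.0) -> bool:
    """Tempering (and stress relieving) must stay below the lower transformation
    temperature; 723 C is the eutectoid temperature cited in the text."""
    return temp_c < lower_critical_c

if __name__ == "__main__":
    low, high = normalizing_window(845.0)  # 845 C is an illustrative A3 value, not from the text
    print(f"Normalize between {low:.0f} C and {high:.0f} C")
    print("200 C temper is below A1:", is_valid_tempering_temperature(200.0))
```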

Complete Collection of Mechanical Engineering Foreign Literature and Translations

Foreign literature and corresponding translations for graduation projects in mechanical engineering, CNC, mold and die, PLC/NC programming, and mechatronics.
15号规格自动挖掘机的线性、非线性和经典的控制器--中文翻译.doc 15号规格自动挖掘机的线性、非线性和经典的控制器--外文文献.pdf
20.9 可机加工性.doc 20.9 可机加工性——外文文献.doc
20世纪到21世纪水性涂料面临的技术挑战-中文翻译.doc 20世纪到21世纪水性涂料面临的技术挑战-外文文献.pdf
5-氟尿嘧啶、阿霉素和柔红霉素在医院废水中的去向和通过活性污泥法的去除以及通过膜生物反应系 5-氟尿嘧啶、阿霉素和柔红霉素在医院废水中的去向和通过活性污泥法的去除以及通过膜生物反应系
5轴数铣中心下注塑模具自动抛光过程-中文翻译.doc 5轴数铣中心下注塑模具自动抛光过程-外文文献.pdf
781型铣边机维修-中文翻译.doc 781型铣边机维修-外文文献.doc
AISI 304 不锈钢基体ZrN涂层的抗腐蚀性-中文翻译.doc AISI 304 不锈钢基体ZrN涂层的抗腐蚀性-外文文献.pdf
ALGORYTHMS控制速度和斯特雷奇-中文翻译.doc ALGORYTHMS控制速度和斯特雷奇-外文文献.pdf
AT89C51单片机-中文翻译.doc AT89C51单片机-外文文献.pdf
Catia-UG-Proe 的比较与前景-中文翻译.doc Catia-UG-Proe 的比较与前景-外文文献.doc
CVT--外文文献.pdf CVT.doc
C型搅拌摩擦焊的现状与发展-中文翻译.doc C型搅拌摩擦焊的现状与发展-外文文献.doc
DS1302时钟芯片-中文翻译.doc DS1302时钟芯片-外文文献.doc
MASTERDRIVES-独一无二而范围完全的驱动器-中文翻译.doc MASTERDRIVES-独一无二而范围完全的驱动器-外文文献.doc
MS Access MRP(制造资源计划)小企业的MRP-小企业的ERP基于微软的Access数据库ERPMRP ERP.MRP 制造软件项目-中文翻译.doc MS Access MRP(制造资源计划)小企业的MRP-小企业的ERP基于微软的Access数据库ERPMRP ERP.MRP 制造软件项目-外文文献.doc
PAC——新一代工业控制系统, 可编程自动化控制发展的未来-中文翻译.doc PAC——新一代工业控制系统, 可编程自动化控制发展的未来-外文文献.doc
PLC、工业PC与DCS的特点与趋势--中文翻译.doc PLC、工业PC与DCS的特点与趋势-外文文献.doc
PLC安装须知--中文翻译.doc PLC安装须知-外文文献.doc
PLC的最新发展趋势.doc PLC的最新发展趋势——外文文献.pdf
PLC的特点.doc PLC的特点——外文文献.pdf
PLC简介-外文文献.doc PLC简介.doc
Wincc在供热站恒压供水监控系统中的应用-中文翻译.docx Wincc在供热站恒压供水监控系统中的应用-外文文献.docx
一个合理的解决多种应用-外文文献.doc 一个合理的解决多种应用.doc
一种实用的办法--带拖车移动机器人的反馈控制-中文翻译.doc 一种实用的办法--带拖车移动机器人的反馈控制-外文文献.doc

Automotive Electronic Systems: English-Chinese Foreign Literature Translation

(Document contains the English original and the Chinese translation.)

The Changing Automotive Environment: High-Temperature Electronics
R. Wayne Johnson, Fellow, IEEE, John L. Evans, Peter Jacobsen, James R. (Rick) Thompson, and Mark Christopher

Abstract—The underhood automotive environment is harsh, and current trends in the automotive electronics industry will be pushing the temperature envelope for electronic components. The desire to place engine control units on the engine and transmission control units either on or in the transmission will push the ambient temperature above 125℃. However, extreme cost pressures, increasing reliability demands (10 year/241 350 km) and the cost of field failures (recalls, liability, customer loyalty) will make the shift to higher temperatures occur incrementally. The coolest spots on the engine and in the transmission will be used. These large bodies do provide considerable heat sinking to reduce the temperature rise due to power dissipation in the control unit. The majority of near-term applications will be at 150℃ or less, and these will be worst-case temperatures, not nominal. The transition to X-by-wire technology, replacing mechanical and hydraulic systems with electromechanical systems, will require more power electronics. Integration of power transistors and smart power devices into the electromechanical actuator will require power devices to operate at 175℃ to 200℃. Hybrid electric vehicles and fuel cell vehicles will also drive the demand for higher temperature power electronics. In the case of hybrid electric and fuel cell vehicles, the high temperature will be due to power dissipation. The alternatives to high-temperature devices are thermal management systems, which add weight and cost. Finally, the number of sensors in vehicles is increasing as more electrically controlled systems are added. Many of these sensors must work in high-temperature environments. The harshest applications are exhaust gas sensors and cylinder pressure or combustion sensors. High-temperature electronics use in automotive systems will continue to grow, but it will be gradual as cost and reliability issues are addressed. This paper examines the motivation for higher temperature operation, the packaging limitations even at 125℃ with newer package styles, and concludes with a review of challenges at both the semiconductor device and packaging level as temperatures push beyond 125℃.

Index Terms—Automotive, extreme-environment electronics.

I. INTRODUCTION
In 1977, the average automobile contained $110 worth of electronics [1]. By 2003 the electronics content was $1510 per vehicle and is expected to reach $2285 in 2013 [2]. The turning point in automotive electronics was government regulation in the 1970s mandating emissions control and fuel economy. The complex fuel control required could not be accomplished using traditional mechanical systems. These government regulations, coupled with increasing semiconductor computing power at decreasing cost, have led to an ever increasing array of automotive electronics. Automotive electronics can be divided into five major categories as shown in Table I.

TABLE I: Major Automotive Electronic Systems
TABLE II: Automotive Temperature Extremes (Delphi Delco Electronic Systems) [3]

The operating temperature of the electronics is a function of location, power dissipation by the electronics, and the thermal design. The automotive electronics industry defines high-temperature electronics as electronics operating above 125℃.
However, the actual temperature for various electronics mounting locations varies considerably. Delphi Delco Electronic Systems recently published the typical continuous maximum temperatures as reproduced in Table II [3]. The corresponding underhood temperatures are shown in Fig. 1. The authors note that typical junction temperatures for integrated circuits are 10℃ to 15℃ higher than ambient or baseplate temperature, while power devices can reach 25℃ higher. At-engine temperatures of 125℃ peak can be maintained by placing the electronics on the intake manifold.

Fig. 1: Engine compartment thermal profile (Delphi Delco Electronic Systems) [3].
TABLE III: The Automotive Environment (General Motors and Delphi Delco Electronic Systems) [4]
TABLE IV: Required Operation Temperature for Automotive Electronic Systems (Toyota Motor Corp.) [5]
TABLE V: Mechatronic Maximum Temperature Ranges (DaimlerChrysler, Eaton Corporation, and Auburn University) [6]
Fig. 2: Automotive temperatures and related systems (DaimlerChrysler) [8].

Fig. 3 shows an actual measured transmission temperature profile during normal and excessive driving conditions [8]. Power braking is a commonly used test condition where the brakes are applied and the engine is revved with the transmission in gear. A similar real-world situation would be applying throttle with the emergency brake applied. Note that when the temperature reached 135℃, the over-temperature light came on, and at the peak temperature of 145℃ the transmission was beginning to smell of burnt transmission fluid.

TABLE VI: 2002 International Technology Roadmap for Semiconductors Ambient Operating Temperatures for Harsh Environments (Automotive) [9]

The 2002 update to the International Technology Roadmap for Semiconductors (ITRS) did not reflect the need for higher operating temperatures for complex integrated circuits, but did recognize increasing temperature requirements for power and linear devices, as shown in Table VI [9]. Higher temperature power devices (diodes and transistors) will be used for the power section of power converters and motor drives for electromechanical actuators. Higher temperature linear devices will be used for analog control of power converters and for amplification and some signal processing of sensor outputs prior to transmission to the control units. It should be noted that at the maximum rated temperature for a power device, the power handling capability is derated to zero. Thus, a 200℃ rated power transistor in a 200℃ environment would have zero current carrying capability, so the actual operating environments must be lower than the maximum rating.

In the 2003 edition of the ITRS, the maximum junction temperature identified for harsh-environment complex integrated circuits was raised to 150℃ through 2018 [9]. The ambient operating temperature extreme for harsh-environment complex integrated circuits was defined as -40℃ to 125℃ through 2009, increasing to -40℃ to 150℃ for 2010 and beyond. Power/linear devices were not separately listed in 2003.

The ITRS is consistent with the current automotive high-temperature limitations. Delphi Delco Electronic Systems offers two production engine controllers (one on ceramic and one on thin laminate) for direct mounting on the engine. These controllers are rated for operation over the temperature range of -40℃ to 125℃. The ECU must be mounted on the coolest spot on the engine.
The packaging technology is consistent with 140℃ operation, but the ECU is limited by semiconductor and capacitor technologies to 125℃. The future projections in the ITRS are not consistent with the desire to place controllers on-engine or in-transmission. It will not always be possible to use the coolest location for mounting control units. Delphi Delco Electronics Systems has developed an in-transmission controller for use in an ambient temperature of 140℃ [10] using ceramic substrate technology. DaimlerChrysler is also designing an in-transmission controller for use with a maximum ambient temperature of 150℃ (Figs. 4 and 5) [11].

II. MECHATRONICS
Mechatronics, or the integration of electrical and mechanical systems, offers a number of advantages in automotive assembly. Integration of the engine controller with the engine allows pretest of the engine as a complete system prior to vehicle assembly. Likewise, with the integration of the transmission controller and the transmission, pretesting and tuning to account for machining variations can be performed at the transmission factory prior to shipment to the automobile assembly site. In addition, most of the wires connecting to a transmission controller run to the solenoid pack inside the transmission. Integration of the controller into the transmission reduces the wiring harness requirements at the automobile assembly level.

Fig. 4: Prototype DaimlerChrysler ceramic transmission controller [11].
Fig. 5: DaimlerChrysler in-transmission module [11].

The trend in automotive design is to distribute control with network communications. As the industry moves to more X-by-wire systems, this trend will continue. Automotive final assembly plants assemble subsystems and components supplied by numerous vendors to build the vehicle. Complete mechatronic subsystems simplify the design, integration, management, inventory control, and assembly of vehicles. As discussed in the previous section, higher temperature electronics will be required to meet future mechatronic designs.

III. PACKAGING CHALLENGES AT 125℃
Trends in electronics packaging, driven by computer and portable products, are resulting in packages which will not meet underhood automotive requirements at 125℃. Most notable are leadless and area array packages such as small ball grid arrays (BGAs) and quad flatpacks no-lead (QFNs). Fig. 6 shows the thermal cycle test (-40℃ to 125℃) results for two sizes of QFN from two suppliers [12]. A typical requirement is for the product to survive 2000–2500 thermal cycles with <1% failure for underhood applications. Smaller I/O QFNs have been found to meet the requirements.

Fig. 7 presents the thermal cycle results for BGAs of various body sizes [13]. The die size in the BGA remained constant (8.6 x 8.6 mm). As the body size decreases, so does the reliability. Only the 23-mm BGA meets the requirements. The 15-mm BGA with the 0.56-mm-thick BT substrate nearly meets the minimum requirements. However, the industry trend is to use thinner BT substrates (0.38 mm) for BGA packages.

One solution to increasing the thermal cycle performance of smaller BGAs is to use underfill. Capillary underfill was dispensed and cured after reflow assembly of the BGA. Fig. 8 shows a Weibull plot of the thermal cycle data for the 15-mm BGAs with four different underfills. Underfill UF1 had no failures after 5500 cycles and is, therefore, not plotted.
Underfill, therefore, provides a viable approach to meeting underhood automotive requirements with smaller BGAs, but adds process steps, time, and cost to the electronics assembly process. Since portable and computer products dominate the electronics market, the packages developed for these applications are replacing traditional packages such as QFPs for new devices. The automotive electronics industry will have to continue developing assembly approaches such as underfill just to use these new packages in current underhood applications.

IV. TECHNOLOGY CHALLENGES ABOVE 125℃
The technical challenges for high-temperature automotive applications are interrelated, but can be divided into semiconductors, passives, substrates, interconnections, and housings/connectors. Industries such as oil well logging have successfully fielded high-temperature electronics operating at 200℃ and above. However, automotive electronics are further constrained by high-volume production, low cost, and long-term reliability requirements. The typical operating life for oil well logging electronics may only be 1000 h, production volumes are in the range of 10s or 100s and, while cost is a concern, it is not a dominant issue. In the following paragraphs, the technical challenges for high-temperature automotive electronics are discussed.

Semiconductors: The maximum rated ambient temperature for most silicon-based integrated circuits is 85℃, which is sufficient for consumer, portable, and computing product applications. Devices for military and automotive applications are typically rated to 125℃. A few integrated circuits are rated to 150℃, particularly for power supply controllers and a few automotive applications. Finally, many power semiconductor devices are derated to zero power handling capability at 200℃. Nelms et al. and Johnson et al. have shown that power insulated-gate bipolar transistors (IGBTs) and metal–oxide–semiconductor field-effect transistors (MOSFETs) can be used at 200℃ [14], [15]. The primary limitations of these power transistors at the higher temperatures are the packaging (the glass transition temperature of common molding compounds is in the 180℃ to 200℃ range) and the electrical stress on the transistor during hard switching.

A number of factors limit the use of silicon at high temperatures. First, with a bandgap of 1.12 eV, the silicon p-n junction becomes intrinsic at high temperature (225℃ to 400℃, depending on doping levels). The intrinsic carrier concentration is given by (1). As the temperature increases, the intrinsic carrier concentration increases. When the intrinsic carrier concentration nears the doping concentration level, p-n junctions behave as resistors, not diodes, and transistors lose their switching characteristics. One approach used in high-temperature integrated circuit design is to increase the doping levels, which increases the temperature at which the device becomes intrinsic. However, increasing the doping levels decreases the depletion widths, resulting in higher electric fields within the device that can lead to breakdown.

A second problem is the increase in leakage current through a reverse-biased p-n junction with increasing temperature. Reverse-biased p-n junctions are commonly used in IC design to provide isolation between devices. The saturation current (I_s, the ideal reverse-bias current of the junction) is proportional to the square of the intrinsic carrier concentration, as shown in (2), where E_g0 is the bandgap energy at T = 0 K. The leakage current approximately doubles for each 10℃ rise in junction temperature.
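For reference, the temperature dependences described above can be written in their standard textbook forms (a sketch, not copied from the paper); N_C and N_V (effective densities of states), k (Boltzmann's constant) and T_ref are symbols introduced here and are not defined in the excerpt:

```latex
% Standard forms consistent with the definitions in the text.
n_i = \sqrt{N_C N_V}\,\exp\!\left(-\frac{E_g}{2kT}\right)
\qquad
I_s \propto n_i^{2} \propto T^{3}\exp\!\left(-\frac{E_{g0}}{kT}\right)
\qquad
I_{\text{leak}}(T) \approx I_{\text{leak}}(T_{\text{ref}})\cdot 2^{(T-T_{\text{ref}})/10\,\mathrm{K}}
```

The last relation is simply the "doubles every 10℃" rule of thumb stated above, written as a scaling law.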
Increased junction leakage currents increase power dissipation within the device and can lead to latch-up of the parasitic p-n-p-n structure in complementary metal–oxide–semiconductor (CMOS) devices. Epitaxial CMOS (epi-CMOS) has been developed to improve latch-up resistance as device dimensions are decreased due to scaling, and it provides improved high-temperature performance compared to bulk CMOS.

Silicon-on-insulator (SOI) technology replaces reverse-biased p-n junctions with insulators, typically SiO2, reducing the leakage currents and extending the operating range of silicon above 200℃. At present, SOI devices are more expensive than conventional p-n junction isolated devices. This is in part due to the limited use of SOI technology. With the continued scaling of device dimensions, SOI is being used in some high-performance applications, and the increasing volume may help to eventually lower the cost.

Other device performance issues at higher temperatures include gate threshold voltage shifts, decreased noise margin, decreased switching speed, decreased mobility, decreased gain-bandwidth product, and increased amplifier input–offset voltage [16]. Leakage currents also increase for insulators with increasing temperature. This results in increased gate leakage currents and increased leakage of charge stored in memory cells (data loss). For dynamic memory, the increased leakage currents require faster refresh rates. For nonvolatile memory, the leakage limits the life of the stored data, a particular issue for FLASH memory used in microcontrollers and automotive electronics modules.

Beyond the electrical performance of the device, the device reliability must also be considered. Electromigration of the aluminum metallization is a major concern. Electromigration is the movement of the metal atoms due to their bombardment by electrons (current flow). Electromigration results in the formation of hillocks and voids in the conductor traces. The mean time to failure (MTTF) for electromigration is related to the current density (J) and temperature (T) as shown in (3). The exact rate of electromigration and resulting time to failure is a function of the aluminum microstructure. Addition of copper to the aluminum increases electromigration resistance. The trend in the industry to replace aluminum with copper will improve the electromigration resistance by up to three orders of magnitude [17].

Time-dependent dielectric breakdown (TDDB) is a second reliability concern. Time to failure due to TDDB decreases with increasing temperature. Oxide defects, including pinholes, asperities at the Si–SiO2 interface, and localized changes in chemical structure that reduce the barrier height or increase the charge trapping, are common sources of early failure [18]. Breakdown can also occur due to hole trapping (Fowler–Nordheim tunneling). The holes can collect at weak spots in the Si–SiO2 interface, increasing the electric field locally and leading to breakdown [18]. The temperature dependence of time-to-breakdown (t_BD) can be expressed as in (4) [18]. Values reported for E_tbd vary in the literature due to its dependence on the oxide field and the oxide quality. Furthermore, the activation energy increases with breakdown time [18].

With proper high-temperature design, junction-isolated silicon integrated circuits can be used to junction temperatures of 150℃ to 165℃, epi-CMOS can extend the range to 225℃ to 250℃, and SOI can be used to 250℃ to 280℃ [16, pp. 224].
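Both wear-out mechanisms discussed above, electromigration and TDDB, are strongly temperature-activated. The generic Arrhenius-type forms below are a sketch consistent with the definitions in the text; the prefactor A, current-density exponent n, time constant τ0, and activation energies E_a and E_tbd are fitted parameters that this excerpt does not give:

```latex
% Generic forms; A, n, \tau_0, E_a and E_{tbd} are assumed fitted parameters.
\mathrm{MTTF}_{\text{EM}} = \frac{A}{J^{\,n}}\,\exp\!\left(\frac{E_a}{kT}\right)
\qquad
t_{BD} = \tau_0\,\exp\!\left(\frac{E_{tbd}}{kT}\right)
```

Both expressions shrink rapidly as T rises, which is why the same circuit that meets a 10-year requirement at 125℃ may fall far short of it at 150℃ or above.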
High-temperature, nonvolatile memory remains an issue.

For temperatures beyond the limits of silicon, silicon carbide-based semiconductors are being developed. The bandgap of SiC ranges from 2.75 eV to 3.1 eV depending on the polytype. SiC has lower leakage currents and higher electric field strength than Si. Due to its wider bandgap, SiC can be used as a semiconductor device at temperatures over 600℃. The primary focus of SiC device research is currently power devices. SiC power devices may eventually find application as power devices in braking systems and direct fuel injection. High-temperature sensors have also been fabricated with SiC. Berg et al. have demonstrated a SiC-based sensor for cylinder pressure in combustion engines [19] at up to 350℃, and Casady et al. [20] have shown a SiC-based temperature sensor for use to 500℃. At present, the wafer size, cost, and device yield have made SiC devices too expensive for general automotive use. Most SiC devices are discrete, as the level of integration achieved in SiC to date is low.

Passives: Thick- and thin-film chip resistors are typically rated to 125℃. Naefe et al. [21] and Salmon et al. [22] have shown that thick-film resistors can be used at temperatures above 200℃ if the allowable absolute tolerance is 5% or greater. The resistors studied were specifically formulated with a higher softening point glass. The minimum resistance as a function of temperature was shifted from 25℃ to 150℃ to minimize the temperature coefficient of resistance (TCR) over the temperature range to 300℃. TaN and NiCr thin-film resistors have been shown to have less than 1% drift after 1000 h at 200℃ [23]. Thus, for tighter tolerance applications, thin-film chip resistors are preferred. Wire-wound resistors provide a high-temperature option for higher power dissipation levels [21].

High-temperature capacitors present more of a challenge. For low-value capacitors, negative-positive-zero (NPO) ceramic and MOS capacitors provide a low temperature coefficient of capacitance (TCC) to 200℃. NPO ceramic capacitors have been demonstrated to 500℃ [24]. Higher dielectric constant ceramics (X7R, X8R, X9U), used to achieve the high volumetric efficiency necessary for larger capacitor values, exhibit a significant capacitance decrease above the Curie temperature, which is typically between 125℃ and 150℃. As the temperature increases, the leakage current increases, the dissipation factor increases, and the breakdown strength decreases. Increasing the dielectric tape thickness to increase breakdown strength reduces the capacitance and is a tradeoff. X7R ceramic capacitors have been shown to be stable when stored at 200℃ [23]. X9U chip capacitors are commercially available for use to 200℃, but there is a significant decrease in capacitance above 150℃.

Consideration must also be given to the capacitor electrodes and terminations. Ni is now being substituted for Ag and PdAg to lower capacitor cost. The impact of this change on high-temperature reliability must be evaluated. The surface finish for ceramic capacitor terminations is typically Sn. The melting point of Sn (232℃) and its interaction with potential solders/brazes must also be considered. Alternate surface finishes may be required.

For higher value, low-voltage requirements, wet tantalum capacitors show reasonable behavior at 200℃ if the hermetic seal does not lose integrity [23]. Aluminum electrolytics are also available for use to 150℃.
Mica paper (260℃) and Teflon film (200℃) capacitors can provide higher voltage capability, but are large and bulky [25]. High-temperature capacitors are relatively expensive. Volumetrically efficient, high-voltage, high-capacitance, high-temperature and low-cost capacitors are still needed.

Standard transformers and inductor cores with copper wire and Teflon insulation are suitable for operation to 200℃. For higher temperature operation, the magnetic core, the conductor metal (Ni instead of Cu) and the insulator must be selected to be compatible with the higher temperatures [16, pp. 651–652]. Specially designed transformers can be used to 450℃ to 500℃; however, they are limited in operating frequency.

Crystals are required for clock frequency generation for microcontrollers. Crystals with acceptable frequency shift over the temperature range from -55℃ to 200℃ have been demonstrated [22]. However, the selection of packaging materials and the assembly process for the crystal are key to high-temperature performance and reliability. For example, epoxies used in assembly must be compatible with 200℃ operation.

Substrates: Thick-film substrates with gold metallization have been used in circuits to 500℃ [21], [23]. Palladium silver, platinum silver, and silver conductors are more commonly used in automotive hybrids for reduced cost. Silver migration has been observed with an unpassivated PdAg thick-film conductor under bias at 300℃ [21]. The time-to-failure needs to be examined as a function of temperature and bias voltage, with and without passivation. Low-temperature cofired ceramic (LTCC) and high-temperature cofired ceramic (HTCC) are also suitable for high-temperature automotive applications. Embedded resistors are standard to thick-film hybrids, LTCC, and some HTCC technologies. As previously mentioned, thick-film resistors have been demonstrated at temperatures above 200℃. Dielectric tapes for embedded capacitors have also been developed for LTCC and HTCC. However, these embedded capacitors have not been characterized for high-temperature use.

High-Tg laminates are also available for fabrication of high-temperature printed wiring boards. Cyanate esters [Tg = 250℃ by differential scanning calorimetry (DSC)], polyimide (260℃ by DSC), and liquid crystal polymers (Tm > 280℃) provide options for use to 200℃. Cyanate ester boards have been used successfully in test vehicles at 175℃, but failed when exposed to 250℃ [26]. The higher coefficient of thermal expansion (CTE) of the laminate substrates compared to the ceramics must be considered in the selection of component attachment materials. The temperature limits of the laminates with respect to assembly temperatures must also be carefully considered. Work is ongoing to develop and implement embedded resistor and capacitor technology for laminate substrates for conventional temperature ranges. This technology has not been extended to high-temperature applications.

One method many manufacturers are using to address the higher temperatures while maintaining lower cost is the use of laminate substrates attached to metal. The typical design involves the use of higher Tg (+140℃ and above) laminate substrates attached to an aluminum plate (approximately 2.54-mm thick) using a sheet or liquid adhesive. To assist in thermal performance, the laminate substrate is often thinner (0.76 mm) than traditional automotive substrates for under-the-hood applications.
While this design provides improved thermal performance, the attachment of the laminate to aluminum increases the CTE of the overall substrate. The resultant CTE is very dependent on the ability of the attachment material to decouple the CTE between the laminate substrate and the metal backing. However, regardless of the attachment material used, the combination of the laminate and metal will increase the CTE of the overall substrate above that of a stand-alone laminate substrate. This impact can be quite significant in the reliability performance of components with low CTE values (such as ceramic chip resistors). Fig. 9 illustrates the impact of two laminate-to-metal attachment options compared to standard laminate substrates [27], [28]. The reliability data presented is for 2512 ceramic chip resistors attached to a 0.79-mm-thick laminate substrate attached to aluminum using two attachment materials. Notice that while one material significantly outperforms the other, both are less reliable than the same chip resistor attached to laminate without metal backing.

This decrease in reliability is also exhibited on small ball grid array (BGA) packages. Fig. 10 shows the reliability of a 15-mm BGA package attached to laminate compared to the same package attached to a laminate substrate with metal backing [27], [28]. The attachment material used for the metal-backed substrate was the best material selected from previous testing. Notice again that the metal-backed substrate deteriorates the reliability. This reliability deterioration is of particular concern since many IC packages used for automotive applications are ball grid array packages and the packaging trend is toward reduced package size. These packaging trends make the use of metal-backed substrates difficult for next-generation products.

One potential solution to the above reliability concern is the use of encapsulants and underfills. Fig. 11 illustrates how conformal coating can improve component reliability for surface mount chip resistors [27], [28]. Notice that the reliability varies greatly depending on material composition. However, for components which meet a marginal level of reliability, conformal coatings may assist the design in meeting the target reliability requirements. The same scenario can be found for BGA underfills. Typical underfill materials may extend the component life by a factor of two or more. For marginal IC packages, this enhancement may provide enough reliability improvement to allow the designs to meet under-the-hood requirements. Unfortunately, the improvements provided by encapsulants and underfills increase the material cost and add one or more manufacturing processes for material dispense and cure.

Interconnections: Methods of mechanical and electrical interconnection of the active and passive components to the board include chip and wire, flip-chip, and soldering of packaged parts. In chip-and-wire assembly, epoxy die-attach materials can be used to 165℃ [29]. Polyimide and silicone die-attach materials can be used to 200℃. For higher temperatures, SnPb (>90Pb), AuGe, AuSi, AuSn, and AuIn have been used. However, with the exception of SnPb, these are hard brazes, and with increasing die size, CTE mismatches between the die and the substrate will lead to cracking with thermal
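A first-order way to see why low-CTE components on metal-backed laminate suffer is that the cyclic shear strain in the attachment joint scales with the CTE mismatch, the temperature swing, and the distance from the neutral point. The formula and every numeric value in the sketch below are a standard first-order estimate with illustrative assumptions, not data from the paper:

```python
# First-order estimate of joint shear strain under thermal cycling:
# gamma = delta_CTE * delta_T * L_dnp / h  (standard first-order model; all values illustrative).

def joint_shear_strain(cte_component_ppm: float, cte_substrate_ppm: float,
                       delta_t_c: float, dnp_mm: float, joint_height_mm: float) -> float:
    """Return the approximate cyclic shear strain (dimensionless)."""
    delta_cte = abs(cte_substrate_ppm - cte_component_ppm) * 1e-6  # ppm/C -> 1/C
    return delta_cte * delta_t_c * dnp_mm / joint_height_mm

if __name__ == "__main__":
    # Ceramic chip resistor (~6 ppm/C assumed) on plain laminate (~17 ppm/C assumed)
    # versus a higher-CTE metal-backed laminate (~20 ppm/C assumed), -40 to 125 C cycle.
    for name, cte_sub in [("laminate", 17.0), ("metal-backed laminate", 20.0)]:
        g = joint_shear_strain(6.0, cte_sub, delta_t_c=165.0, dnp_mm=3.0, joint_height_mm=0.08)
        print(f"{name}: approximate shear strain {g:.3f}")
```

The larger strain per cycle for the metal-backed case is consistent with the shorter thermal-cycle life reported for chip resistors and small BGAs on metal-backed substrates.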

Foreign Literature Translation Material and Translated Text

Appendix C: Foreign Literature Translation Material
Article source: Business & Commercial Aviation, Nov 20, 2000, 5-87-88

Interactive Electronic Technical Manuals
Electronic publications can increase the efficiency of your digital aircraft and analog technicians.
Benoff, Dave

Computerized technical manuals are silently revolutionizing the aircraft maintenance industry by helping the technician isolate problems quickly, and in the process reduce downtime and costs by more than 10 percent. These electronic publications can reduce the numerous volumes of maintenance manuals, microfiche and work cards that are used to maintain engines, airframes, avionics and their associated components.

"As compared with the paper manuals, electronic publications give us greater detail and reduced research times," said Chuck Fredrickson, general manager of Mercury Air Center in Fort Wayne, Ind.

With all the advances in computer hardware and software technologies, such as high quality digital multimedia, hypertext and the capability to store and transmit digital multimedia via CD-ROMs/networks, technical publication companies have found an effective, cost-efficient method to disseminate data to technicians. The solution for many operators and OEMs is to take advantage of today's technology in the form of Electronic Technical Manuals (ETM) or Interactive Electronic Technical Manuals (IETM). An ETM is any technical manual prepared in digital format that has the ability to be displayed using any electronic hardware media. The difference between the types of ETM/IETMs is the embedded functionality and implementation of the data.

"The only drawback we had to using ETMs was getting enough computers to meet our technicians' demand," said Walter Berchtold, vice president of maintenance at Jet Aviation's West Palm Beach, Fla., facility.

A growing concern is the cost to print paper publications. In an effort to reduce costs, some aircraft manufacturers are offering incentives for owners to switch from paper to electronic publications. With an average printing cost of around 10 cents per page, a typical volume of a paper technical manual can cost the manufacturer over $800 for each copy. When producing a publication electronically, average production costs for a complete set of aircraft manuals are approximately $20 per copy. It is not hard to see the cost advantages of electronic publications.

Another advantage of ETMs is the ease of updating information. With a paper copy, the manufacturer has to reprint the revised pages and mail copies to all the owners. When updates are necessary for an electronic manual, changes can either be e-mailed to the owners or downloaded from the manufacturer's Web site.

So why haven't more flight departments converted their publications to ETM/IETMs? The answer lies in convincing technicians that electronic publications can increase their efficiency. "We had an initial learning curve when the technicians switched over, but now that they are familiar with the software they never want to go back to paper," said Fredrickson. A large majority of corporate technicians also said that while they like the concept of having a tool that aids the troubleshooting process, they are fearful to give up all of their marked-up paper manuals.

In 1987, a human factors study was conducted by the U.S. government to compare technician troubleshooting effectiveness between paper and electronic methodology, and it included expert troubleshooting procedures with guidance through the events.
Results of the project indicated that technicians using electronic media took less than half the time to complete their tasks than those using the paper method, and technicians using the electronic method accomplished 65 percent more in that reduced time. The report also noted that new technicians using the electronic technical manuals were 12-percent more efficient than the older, more experienced technicians. (Novices using paper took 15 percent longer than the experts.) It is interesting that 90 percent of the technicians who used the electronic manuals said they preferred them to the paper versions. This proved to the industry that with proper training, the older technicians could easily transition from paper to electronic media.

Electronic publications are not a new concept, although how they are applied today is. "Research over the last 20 years has provided a solid foundation for today's IETM implementation," said Joseph Fuller of the U.S. Naval Surface Warfare Center. "IETMs such as those for the Apache, Comanche, F-22, JSTAR and V-22 have progressed from concept to military and commercial implementation."

In the late 1970s, the U.S. military investigated the feasibility of converting existing paper and microfilm. The Navy Technical Information Presentation System (NTIPS) and the Air Force Computer-based Maintenance Aid System (CMAS) were implemented with significant cost savings. The report stated that the transition to electronic publications resulted in reductions in corrective maintenance time, fewer false removals of good components, more accurate and complete maintenance data collection reports, reduction in training requirements and reduced system downtime.

The problem that the military encountered was that ETMs were created in multiple levels of complexity with little to no standardization. Options for publications range from simple page-turning programs to full-functioning automated databases. This resulted in the classification of ETMs so that the best type of electronic publication could be selected for the proper application.

Choosing a Level
With all of the OEM and second- and third-party electronic publications that are available, it is important that you choose the application level that is appropriate for your operation. John J. Miller, BAE Systems' manager of electronic publications, told B/CA that "When choosing the level of an ETM/IETM, things like complexity of the aircraft and its systems, ease of use, currency of data and commonality of data should be the deciding factors; and, of course, price. If operational and support costs are reduced when you purchase a full-functioning IETM, then you should purchase the better system." Miller is an expert on the production, sustainment and emerging technologies associated with electronic publications, and was the manager of publications for Boeing in Philadelphia.

Electronic publications are classified in one of five categories. A Class 1 publication is a basic electronic "page turner" that allows you to view the maintenance manual as it was printed. With a Class 2 publication all the original text of the manual is viewed as one continuous page with no page breaks. In Class 3, 4 and 5 publications the maintenance manual is viewed on a computer in a frame-based environment with increasing options as the class changes.
(See sidebar.)

Choosing the appropriate ETM for your operation is typically limited to whatever is being offered on the market, but since 1991 human factors reports state the demand has increased and, therefore, options are expected to follow.

ETM/IETM Providers
Companies that create ETM/IETMs are classified as either OEM or second-party providers. Class 1, 3 and 4 ETM/IETMs are the most commonly used electronic publications for business and commercial operators, and costs can range anywhere from $100 to $3,000 for each ETM/IETM. The following are just a few examples of ETM/IETMs that are available on the market.

Dassault Falcon Jet offers operators of the Falcon 50/50EX, 900/900EX and 2000 a Class 4 IETM called the Falcon Integrated Electronic Library by Dassault (FIELD). Produced in conjunction with Sogitec Industries in Suresnes Cedex, France, the electronic publication contains service documentation, basic wiring, recommended maintenance and TBO schedules, maintenance manual, tools manual, service bulletins, maintenance and repair manual, and avionics manual. The FIELD software allows the user to view the procedures and hot-link directly to the illustrated parts catalog. The software also enables the user to generate discrepancy forms, quotation sheets, annotations in the manual and specific preferences for each user.

BAE's Miller said most of the IETM presentation systems have features called "Technical Notes." If a user of the electronic publication notices a discrepancy or needs to annotate the manual for future troubleshooting, the user can add a Tech Note (an electronic mark-up) to the step or procedure and save it to the base document. The next time that or another user is in the procedure, clicking on the tech note icon launches a pop-up screen displaying the previous technician's comments. The same electronic transfer of tech notes can be sent to other devices by using either a docking station or a network server. In addition, systems also can use "personal notes," similar to technical notes, that are assigned ID codes that only the authoring technician can access.

Requirements for the FIELD software include a minimum of a 16X CD-ROM drive, Pentium II 200 MHz computer, Windows 95, Internet Explorer 4 SP 1 and Database Access V3.5 or higher.

Raytheon offers owners of Beech and Hawker aircraft a Class 4 IETM called Raytheon Electronic Publication Systems (REPS). The REPS software links the frame-based procedures with the parts catalog using a single CD-ROM. Raytheon Aircraft Technical Publications said other in-production Raytheon aircraft manual sets will be converted to the REPS format, with the goal of having all of them available by 2001. In addition, Raytheon offers select Component Maintenance Manuals (CMM). The Class 1 ETM is a stand-alone "page-turner" electronic manual that utilizes the PDF format of Adobe Acrobat.

Other manufacturers, including Bombardier, Cessna and Gulfstream, offer operators similar online and PDF documentation using a customer-accessed Web account.

Boeing is one manufacturer that has developed an onboard Class 5 IETM. Called the Computerized Fault Reporting System (CFRS), it has replaced the F-15 U.S. Air Force Fault Reporting Manuals.
Technologies that are currently being applied to Boeing's military system are expected to eventually become a part of the corporate environment. The CFRS system determines reportable faults by analyzing information entered during a comprehensive aircrew debrief along with electronically recovered maintenance data from the Data Transfer Module (DTM). After debrief, the technicians can review aircraft faults and schedule maintenance work to be performed. The maintenance task is assigned a Job Control Number (JCN) and is forwarded electronically to the correct work center or shop. Appropriate information is provided to the Air Force's Core Automated Maintenance System (CAMS).

When a fault is reported by pilot debrief, certain aircraft systems have the fault isolation procedural data on a Portable Maintenance Aid (PMA). The JCN is selected on a hardened laptop with a wireless Local Area Network (LAN) connection to the CFRS LAN infrastructure. The Digital Wiring Data System (DWDS) displays aircraft wiring diagrams to the maintenance technician for wiring fault isolation. On completion of maintenance, the data collected is provided to the Air Force, Boeing and vendors for system analysis.

Third-party IETM developers such as BAE Systems and Dayton T. Brown offer OEMs the ability to subcontract out the development of Class 1 through 5 ETM/IETMs. For example, Advantext, Inc. offers PDF and IPDF Class 1 ETMs for manufacturers such as Piper and Bell Helicopters. Technical publications that are available include maintenance manuals, parts catalogs, service bulletins, wiring diagrams, service letters and interactive parts ordering forms. The difference between the PDF and IPDF versions is that the IPDF version has the ability to search for text and include hyperlinks. A Class 1 ETM, when printed, is an exact reproduction of the OEM manuals, including any misspellings or errors. Minimum requirements for the Advantext technical publications are a 486 processor, 16 MB RAM with 14 MB of free hard disk space and a 4X CD-ROM or better.

Aircraft Technical Publishers (ATP) offers Class 1, 2 and 3 ETM/IETMs for the Beechjet 400/400A; King Air 300/350, 200 and 90; Learjet 23/24/25/28/29/35/36/55; Socata TB9/10/20/21 and TBM 700A; Sabreliner 265-65, -70 and -80; and Beech 1900. The libraries can include maintenance manuals, illustrated parts bulletins, wiring manuals, Airworthiness Directives, Service Bulletins, component maintenance manuals and structural maintenance manuals. System minimum requirements are Pentium 133 MHz, Windows 95 with 16 MB RAM, 25 MB free hard disk space and a 4X CD-ROM or better.

Additional providers such as Galaxy Scientific are providing ETM/IETMs to the FAA. This Class 2, 3 and 4 publication browser is used to store, display and edit documentation for the Human Factors Section of the administration. "Clearly IETMs have moved from research to reality," said Fuller, and the future looks to hold more promise.

The Future of Tech Pubs
The use of ETM/IETMs on laptop and desktop computers has led research and development corporations to investigate the human interface options to the computer. Elements that affect how a technician can interface with a computer are the work environment, economics and ease of use.
Organizations such as the Office of Naval Research have focused their efforts on the following needs of technicians: -- Adaptability to the environment.-- Ease of use.-- Improved presentation of complex system relationship.-- Maximum reuse and distribution of engineering data.-- Intelligent data access.With these factors in mind, exploratory development has begun in the areas of computer vision, augmented reality display and speech recognition.Computer vision can be created using visual feedback from a head- mounted camera. The camera identifies the relative position and orientation of an object in an observed scene, and the object is used to correlate the object with a three-dimensional model. In order for a computer vision scenario to work, engineering data has to be provided through visually compatible software.When systems such as Sogitech's View Tech electronic publication browser and Dassault Systemes SA's Enovia are combined, a virtual 3D model is generated.The digital mockup allows the engineering information to directly update the technical publication information. If a system such as CATIA could be integrated into a Video Reference System (VRS), then it could be possible that a technician would point the camera to the aircraft component, the digital model identifies the component and the IETM automatically displays the appropriate information.This example of artificial intelligence is already under development at companies like Boeing and Dassault. An augmented reality display is a concept where visual cues are presented to users on a head-mounted, see-through display system.The cues are presented to the technician based on the identification of components on a 3D model and correlation with the observed screen. The cues are then presented as stereoscopic images projected onto the object in the observed scene.In addition a "Private Eye" system could provide a miniature display of the maintenance procedure that is provided from a palm- size computer. Limited success hascurrently been seen in similar systems for the disabled. The user of a Private Eye system can look at the object selected and navigate without ever having to touch the computer. Drawbacks from this type of system are mental and eye fatigue, and spatial disorientation.Out of all the technologies, speech recognition has developed into an almost usable and effective system. The progression through maintenance procedures is driven by speaker-independent recognition. A state engine controls navigation, and launches audio responses and visual cues to the user. Voice recognition software is available, although set up and use has not been extremely successful.Looking at other industries, industrial manufacturing has already started using "Palm Pilot" personal digital assistants (PDAs) to aid technicians in troubleshooting. These devices allow the technician to have the complete publication beside them when they are in tight spaces. "It would be nice to take the electronic publications into the aircraft, so we are not constantly going back to the work station to print out additional information," said Jet Aviation's Berchtold.With all the advantages that a ETM/ IETM offers it should be noted that electronic publications are not the right solution all of the time, just as CBT is not the right solution for training in every situation. Only you can determine if electronic publications meet your needs, and most technical publication providers offer demo copies for your review. 
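To make the "Technical Notes" and "personal notes" features described earlier more concrete, the following is a minimal sketch, in Python, of how such electronic mark-ups could be stored against a base document. The TechNote record, the BaseDocument class and the save_tech_note/notes_for helpers are hypothetical names invented for this illustration; they are not part of any vendor's IETM presentation system.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class TechNote:
    """An electronic mark-up attached to a step or procedure in an IETM (hypothetical layout)."""
    author_id: str            # technician ID; personal notes are readable only by this author
    procedure_id: str         # step or procedure the note is anchored to
    text: str
    personal: bool = False    # True = "personal note", False = shared "Technical Note"
    created: datetime = field(default_factory=datetime.now)

class BaseDocument:
    """Holds the published procedures plus any saved mark-ups."""
    def __init__(self) -> None:
        self.notes: Dict[str, List[TechNote]] = {}

    def save_tech_note(self, note: TechNote) -> None:
        # Saving to the base document makes the note visible the next time any
        # user (or, for personal notes, only the same author) opens the procedure.
        self.notes.setdefault(note.procedure_id, []).append(note)

    def notes_for(self, procedure_id: str, viewer_id: str) -> List[TechNote]:
        return [n for n in self.notes.get(procedure_id, [])
                if not n.personal or n.author_id == viewer_id]

# Example: a technician flags a discrepancy on a maintenance step.
doc = BaseDocument()
doc.save_tech_note(TechNote("tech-042", "32-41-00 step 7",
                            "Torque value in figure disagrees with text; used text value."))
print(doc.notes_for("32-41-00 step 7", viewer_id="tech-099")[0].text)

The personal flag mirrors the ID-code restriction mentioned above: a personal note is returned only to the technician who authored it, while a shared Technical Note is shown to every later user of the procedure.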
B/CA Illustration
Photograph: BAE Systems' Christine Gill prepares a maintenance manual for SGML conversion (BAE Systems).
Photograph: Galaxy Scientific provides the FAA's human factors group with online IETM support.
Photograph: Raytheon's Class 4 IETM "REPS" allows a user to see text and diagrams simultaneously with hotlinks to illustrated parts catalogs.
外文翻译资料 译文部分
文章出处:民航商业杂志,2000-11-20,5-87-88
交互式电子技术手册的电子出版物可以提高数字飞机和模拟技术的效率。

外文翻译—电力电子技术(英文+译文)


1 Power Electronic ConceptsPower electronics is a rapidly developing technology. Components are tting higher current and voltage ratings, the power losses decrease and the devices become more reliable. The devices are also very easy tocontrol with a mega scale power amplification. The prices are still going down pr. kVA and power converters are becoming attractive as a mean to improve the performance of a wind turbine. This chapter will discuss the standard power converter topologies from the simplest converters for starting up the turbine to advanced power converter topologies, where the whole power is flowing through the converter. Further, different park solutions using power electronics arealso discussed.1.1 Criteria for concept evaluationThe most common topologies are selected and discussed in respect to advantages and drawbacks. Very advanced power converters, where many extra devices are necessary in order to get a proper operation, are omitted.1.2 Power convertersMany different power converters can be used in wind turbine applications. In the case of using an induction generator, the power converter has to convert from a fixed voltage and frequency to a variable voltage and frequency. This may be implemented in many different ways, as it will be seen in the next section. Other generator types can demand other complex protection. However, the most used topology so far is a soft-starter, which is used during start up in order to limit the in-rush current and thereby reduce the disturbances to the grid.1.2.1 Soft starterThe soft starter is a power converter, which has been introduced to fixedspeed wind turbines to reduce the transient current during connection or disconnection of the generator to the grid. When the generator speed exceeds the synchronous speed, the soft-starter is connected. Using firing angle control of the thyristors in the soft starter the generator is smoothly connected to the grid over a predefined number of grid periods. An example of connection diagram for the softstarter with a generator is presented in Figure1.Figure 1. Connection diagram of soft starter with generators.The commutating devices are two thyristors for each phase. These are connected in anti-parallel. The relationship between the firing angle (﹤) and the resulting amplification of the soft starter is non-linear and depends additionally on the power factor of the connected element. In the case of a resistive load, may vary between 0 (full on) and 90 (full off) degrees, in the case of a purely inductive load between 90 (full on) and 180 (full off) degrees. For any power factor between 0 and 90 degrees, w ill be somewhere between the limits sketched in Figure 2.Figure 2. Control characteristic for a fully controlled soft starter.When the generator is completely connected to the grid a contactor (Kbyp) bypass the soft-starter in order to reduce the losses during normal operation. The soft-starter is very cheap and it is a standard converter in many wind turbines.1.2.2 Capacitor bankFor the power factor compensation of the reactive power in the generator, AC capacitor banks are used, as shown in Figure 3. The generators are normally compensated into whole power range. The switching of capacitors is done as a function of the average value of measured reactive power during a certain period.Figure 3. Capacitor bank configuration for power factor compensation ina wind turbine.The capacitor banks are usually mounted in the bottom of the tower or in thenacelle. 
In order to reduce the current at connection/disconnection of capacitors a coil (L) can be connected in series. The capacitors may be heavy loaded and damaged in the case of over-voltages to the grid and thereby they may increase the maintenance cost.1.2.3 Diode rectifierThe diode rectifier is the most common used topology in power electronic applications. For a three-phase system it consists of six diodes. It is shown in Figure 4.Figure 4. Diode rectifier for three-phase ac/dc conversionThe diode rectifier can only be used in one quadrant, it is simple and it is notpossible to control it. It could be used in some applications with a dc-bus.1.2.4 The back-to-back PWM-VSIThe back-to-back PWM-VSI is a bi-directional power converter consisting of two conventional PWM-VSI. The topology is shown in Figure 5.To achieve full control of the grid current, the DC-link voltage must be boosted to a level higher than the amplitude of the grid line-line voltage. The power flow of the grid side converter is controlled in orderto keep the DC-link voltage constant, while the control of the generator side is set to suit the magnetization demand and the reference speed. The control of the back-to-back PWM-VSI in the wind turbine application is described in several papers (Bogalecka, 1993), (Knowles-Spittle et al., 1998), (Pena et al., 1996), (Yifan & Longya, 1992), (Yifan & Longya, 1995).Figure 5. The back-to-back PWM-VSI converter topology.1.2.4.1 Advantages related to the use of the back-to-back PWM-VSIThe PWM-VSI is the most frequently used three-phase frequency converter. As a consequence of this, the knowledge available in the field is extensive and well established. The literature and the available documentation exceed that for any of the other converters considered in this survey. Furthermore, many manufacturers produce components especially designed for use in this type of converter (e.g., a transistor-pack comprising six bridge coupled transistors and anti paralleled diodes). Due to this, the component costs can be low compared to converters requiring components designed for a niche production.A technical advantage of the PWM-VSI is the capacitor decoupling between the grid inverter and the generator inverter. Besides affording some protection, this decoupling offers separate control of the two inverters, allowing compensation of asymmetry both on the generator side and on the grid side, independently.The inclusion of a boost inductance in the DC-link circuit increases the component count, but a positive effect is that the boost inductance reduces the demands on the performance of the grid side harmonic filter, and offers some protection of the converter against abnormal conditions on the grid.1.2.4.2 Disadvantages of applying the back-to-back PWM-VSIThis section highlights some of the reported disadvantages of the back-to-back PWM-VSI which justify the search for a more suitable alternative converter:In several papers concerning adjustable speed drives, the presence of the DC link capacitor is mentioned as a drawback, since it is heavy and bulky, it increases the costs and maybe of most importance, - it reduces the overall lifetime of the system. (Wen-Song & Ying-Yu, 1998); (Kim & Sul, 1993); (Siyoung Kim et al., 1998).Another important drawback of the back-to-back PWM-VSI is the switching losses. Every commutation in both the grid inverter and the generator inverter between the upper and lower DC-link branch is associated with a hard switching and a natural commutation. 
Since the back-to-back PWM-VSI consists of two inverters, the switching losses might be even more pronounced. The high switching speed to the grid may also require extra EMI-filters.To prevent high stresses on the generator insulation and to avoid bearing current problems (Salo & Tuusa, 1999), the voltage gradient may have to be limited by applying an output filter.1.2.5 Tandem converterThe tandem converter is quite a new topology and a few papers only have treated it up till now ((Marques & Verdelho, 1998); (Trzynadlowski et al., 1998a); (Trzynadlowski et al., 1998b)). However, the idea behind the converter is similar to those presented in ((Zhang et al., 1998b)), where the PWM-VSI is used as an active harmonic filter to compensate harmonic distortion. The topology of the tandem converter is shown inFigure 6.Figure 6. The tandem converter topology used in an induction generator wind turbine system.The tandem converter consists of a current source converter, CSC, in thefollowing designated the primary converter, and a back-to-back PWM-VSI, designated the secondary converter. Since the tandem converter consists of four controllable inverters, several degrees of freedom exist which enable sinusoidal input and sinusoidal output currents. However, in this context it is believed that the most advantageous control of the inverters is to control the primary converter to operate in square-wave current mode. Here, the switches in the CSC are turned on and off only once per fundamental period of the input- and output current respectively. In square wave current mode, the switches in the primary converter may either be GTO.s, or a series connection of an IGBT and a diode.Unlike the primary converter, the secondary converter has to operateat a high switching frequency, but the switched current is only a small fraction of the total load current. Figure 7 illustrates the current waveform for the primary converter, the secondary converter, is, and the total load current il.In order to achieve full control of the current to/from the back-to-back PWMVSI, the DC-link voltage is boosted to a level above the grid voltage. As mentioned, the control of the tandem converter is treated in only a few papers. However, the independent control of the CSC and the back-to-back PWM-VSI are both well established, (Mutschler & Meinhardt, 1998); (Nikolic & Jeftenic, 1998); (Salo & Tuusa, 1997); (Salo & Tuusa, 1999).Figure 7. Current waveform for the primary converter, ip, the secondary converter, is, and the total load current il.1.2.5.1Advantages in the use of the Tandem ConverterThe investigation of new converter topologies is commonly justifiedby thesearch for higher converter efficiency. Advantages of the tandem converter are the low switching frequency of the primary converter, and the low level of the switched current in the secondary converter. 
It is stated that the switching losses of a tandem inverter may be reduced by 70%, (Trzynadlowski et al., 1998a) in comparison with those of an equivalent VSI, and even though the conduction losses are higher for the tandem converter, the overall converter efficiency may be increased.Compared to the CSI, the voltage across the terminals of the tandem converter contains no voltage spikes since the DC-link capacitor of the secondary converter is always connected between each pair of input- and output lines (Trzynadlowski et al., 1998b).Concerning the dynamic properties, (Trzynadlowski et al., 1998a) states that the overall performance of the tandem converter is superior to both the CSC and the VSI. This is because current magnitude commands are handled by the voltage source converter, while phase-shift current commands are handled by the current source converter (Zhang et al., 1998b).Besides the main function, which is to compensate the current distortion introduced by the primary converter, the secondary converter may also act like an active resistor, providing damping of the primary inverter in light load conditions (Zhang et al., 1998b).1.2.5.2 Disadvantages of using the Tandem ConverterAn inherent obstacle to applying the tandem converter is the high number of components and sensors required. This increases the costs and complexity of both hardware and software. The complexity is justified by the redundancy of the system (Trzynadlowski et al., 1998a), however the system is only truly redundant if a reduction in power capability and performance is acceptable.Since the voltage across the generator terminals is set by the secondary inverter, the voltage stresses at the converter are high.Therefore the demands on the output filter are comparable to those when applying the back-to-back PWM-VSI.In the system shown in Figure 38, a problem for the tandem converter in comparison with the back-to-back PWM-VSI is the reduced generator voltage. By applying the CSI as the primary converter, only 0.866% of the grid voltage can be utilized. This means that the generator currents (and also the current through the switches) for the tandem converter must be higher in order to achieve the same power.1.2.6 Matrix converterIdeally, the matrix converter should be an all silicon solution with no passive components in the power circuit. The ideal conventional matrix converter topology is shown in Figure 8.Figure 8. The conventional matrix converter topology.The basic idea of the matrix converter is that a desired input current (to/from the supply), a desired output voltage and a desired output frequency may be obtained by properly connecting the output terminals of the converter to the input terminals of the converter. In order to protect the converter, the following two control rules must be complied with: Two (or three) switches in an output leg are never allowed to be on at the same time. All of the three output phases must be connected to an input phase at any instant of time. The actual combination of the switchesdepends on the modulation strategy.1.2.6.1 Advantages of using the Matrix ConverterThis section summarises some of the advantages of using the matrix converter in the control of an induction wind turbine generator. For a low output frequency of the converter the thermal stresses of the semiconductors in a conventional inverter are higher than those in a matrix converter. 
This arises from the fact that the semiconductors in a matrix converter are equally stressed, at least during every period of the grid voltage, while the period for the conventional inverter equals the output frequency. This reduces thethermal design problems for the matrix converter.Although the matrix converter includes six additional power switches compared to the back-to-back PWM-VSI, the absence of the DC-link capacitor may increase the efficiency and the lifetime for the converter (Schuster, 1998). Depending on the realization of the bi-directional switches, the switching losses of the matrix inverter may be less than those of the PWM-VSI, because the half of the switchings become natural commutations (soft switchings) (Wheeler & Grant, 1993).1.2.6.2 Disadvantages and problems of the matrix converterA disadvantage of the matrix converter is the intrinsic limitation of the output voltage. Without entering the over-modulation range, the maximum output voltage of the matrix converter is 0.866 times the input voltage. To achieve the same output power as the back-to-back PWM-VSI, the output current of the matrix converter has to be 1.15 times higher, giving rise to higher conducting losses in the converter (Wheeler & Grant, 1993).In many of the papers concerning the matrix converter, the unavailability of a true bi-directional switch is mentioned as one of the major obstacles for the propagation of the matrix converter. In the literature, three proposals for realizing a bi-directional switch exists. The diode embedded switch (Neft & Schauder, 1988) which acts like a truebi-directional switch, the common emitter switch and the common collector switch (Beasant et al., 1989).Since real switches do not have infinitesimal switching times (which is not desirable either) the commutation between two input phases constitutes a contradiction between the two basic control rules of the matrix converter. In the literature at least six different commutation strategies are reported, (Beasant et al., 1990); (Burany, 1989); (Jung & Gyu, 1991); (Hey et al., 1995); (Kwon et al., 1998); (Neft & Schauder, 1988). The most simple of the commutation strategies are those reported in (Beasant et al., 1990) and (Neft & Schauder, 1988), but neither of these strategies complies with the basic control rules.译文1 电力电子技术的内容电力电子技术是一门正在快速发展的技术,电力电子元器件有很高的额定电流和额定电压,它的功率减小元件变得更加可靠、耐用.这种元件还可以用来控制比它功率大很多倍的元件。
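Two of the quantitative points made in the converter comparison above lend themselves to a quick numeric check: the back-to-back PWM-VSI needs its DC-link boosted above the peak of the grid line-line voltage, and the matrix converter can deliver at most 0.866 of its input voltage, so its output current must be roughly 1.15 times higher for the same power. The sketch below, in Python, illustrates both relations; the 400 V grid and 2 MW ratings are hypothetical example values, not figures taken from the text.

import math

# Hypothetical grid and turbine ratings used only for illustration.
V_LL_RMS = 400.0      # grid line-to-line voltage, V (rms)
P_RATED  = 2.0e6      # converter throughput, W

# Back-to-back PWM-VSI: the DC-link must be boosted above the peak of the
# line-to-line voltage to keep full control of the grid current.
v_ll_peak = math.sqrt(2) * V_LL_RMS
print(f"DC-link must exceed about {v_ll_peak:.0f} V")

# Matrix converter: without over-modulation the output voltage is limited to
# 0.866 of the input, so the same power needs about 1.15 times more current.
v_out_max = 0.866 * V_LL_RMS
i_b2b    = P_RATED / (math.sqrt(3) * V_LL_RMS)     # back-to-back PWM-VSI current
i_matrix = P_RATED / (math.sqrt(3) * v_out_max)    # matrix converter current
print(f"matrix / back-to-back current ratio = {i_matrix / i_b2b:.2f}")  # about 1.15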

食品包装外文翻译文献中英文


食品包装外文翻译文献(含:英文原文及中文译文)英文原文Scheme for industrialized production of steamed breadSteamed bread in our country people's daily lives in the main position, is the traditional popular staple food. According to statistics, the amount of flour used to bread flour accounted for about 40% of the total. At present, the production condition of steamed bread is not suitable for people's living rhythm and nutrition and health requirements. In recent years, although the production environment of steamed bread has improved, but still remain in the workshop type stage. Steamed bread production of process appears simple, but includes the formulation of raw materials, fermentation method, process selection, quality factors, hobbies, evaluation of multiple subject, relate to biotechnology, cereal chemistry, fermentation science, food engineering, mechanics and other multi application discipline techniques. Because our country wheaten food for a long time in family workshops processing mode based, coupled with flour staple food of the diversity and complexity of wheat varieties, traditional staple food processing by experience, for a long time domestic research based on staple food process is relatively weak. Steamed bread industrialized production of steamed bread, the process should be simple,continuous, fast and efficient. Therefore, after the fermentation process, then neutralizing the forming. Process is as follows: Flour + water + yeast and fermentation < doped mixing > forming a shaping a discharge a Xing hair a steaming a product cooling packaging.Staple food steamed bread industry will drive the industrialization of other Chinese traditional pasta, promote related industries the development of as traditional staple food, steamed bread, dumplings, noodles, as the main raw materials are flour, in mechanism research and application research have many similarities and differences. Such as easy to aging, easy to mildew and other issues in the flour food ubiquitous, is caused by flour food shelf life is short, not easy to store the main reason that restrict the circulation radius of flour food. To capture the steamed bread aging resistance, mildew key technologies, to solve dumplings, noodles and other food related issues, many varieties of Chinese traditional food industry of have a strong reference, for the comprehensive development of nearly 200 billion yuan of the industrialization of staple food market has a positive role in promoting. Also the staple food industrialization involves the national economy of our country the first, the second and third industry, the industrial chain relates to grow wheat, flour processing, food processing, food service industry, mechanical design and processing, testing equipment, such as related industries, driven by industry, the scientific and technologicallevel of the related industries, relevant industry labor productivity and international competitiveness of the market existing production equipment steamed bread simply copy the handmade process to improve. However, due to the surface and mechanical stirring strength is greater, forming the spiral extrusion would seriously undermine the strength of dough, the dough in the machining process temperature rise too higher defect is the steamed bread machine processing beyond the main reason for the comprehensive quality of bread by hand. 
Steamed Buns is also involved in biotechnology, cereal chemistry, fermentation, food engineering, mechanics and other application technology.At present, as the unique Chinese characteristics of traditional food safety and health, China has introduced the world's interest in steamed bread. In recent years, Europe and the United States food companies and research institutions have set off a boom in the study of Chinese steamed bread.Industry of Chinese steamed bread is in the workshop and intensive chemical plant to the industrialization and industrialization development is an important transition period, the scientific research community and the industry is bound to see tremendous market potential in the future, thereby increasing the intensity of the use of scientific research investment and high technology, especially in the use of modern science and technology discovery and maintenance of traditional craft productionof the unique flavor of bread at the same time, will gradually raise the production automation and mechanization. Because the way to rely on a large number of artificial production can not adapt to the growing tension of human resources in China, but also does not meet the requirements of safety and health of consumers. Through the implementation of the promotion of staple food steamed bread industrialization projects, will lead a batch of products of good quality, high technology content, strong market competitiveness of enterprises are booming, gradually replace the workshops and other backward processing methods, further improve the industrial concentration and the core competitiveness.Development of steamed bread machine relates to food engineering, fermentation theory, mechanical design, multi applied science and technology, and requires a lot of food based on research results do support. Therefore Engineering Center has established a special flour food machinery research, by the technology personnel and mechanical design personnel together, in the development of steamed bread in the process of the equipment, engineering center make full use of the accumulated over the years the scientific research in wheat, flour, flour food, in the in-depth study of traditional handmade bread and surface, forming, baking, steaming process principle of system based on, according to scientific industrialization production process requirements for mechanical design, the hand-made steamed bread shaping method is successfully simulatedby using intelligent bionics technology. And on this basis for a series of technical innovationWheat flour is the main raw material for the production of steamed bread, and the protein content of the steamed bread production process and the quality of steamed bread have a great impact. Rheological measurement showed that the content of wet gluten flour in the production of 39.7% or so steamed bread is better. This process of dough, good quality, good air, moderate strength, surface color Steamed Buns white, thin silk. The internal structure is small and flexible honeycomb layers. With high gluten content of flour production, although the volume of steamed bread, but the fermentation time is long, the surface of the finished product surface color, there are blisters. 
The volume is smaller than 60% Steamed Buns gluten content in wheat flour, the internal honeycomb structure is rough, poor flavor.Plays an important role in the process of production of steamed bread fermentation of steamed bread production process of fermentation, fermentation quality will directly affect the steamed bread quality and nutrient composition, fermentation is using the yeast flour sugar and other nutrients alcohol fermentation to produce CO2 and alcohol, the dough swelling loose and flexible. With the dough fermentation, lactic acid and acetic acid, such as lactic acid and acetic acid, can also be produced by lactic acid and acetic acid. Alcohol and acid can be combined into estersof aromatic substances, aromatic flavor. Therefore, alcohol, organic acids and esters are important substances Steamed Buns produce wine, sweet dough. If the fermentation time is long, also can form aldehyde, ketone and so on carbonyl compound, they are also the important flavor substance. The temperature, humidity, time and the protein content of flour should be controlled strictly in the process of dough fermentation.Double roller spiral forming Steamed Buns used rub finish forming machine. Its working principle is by motor starting will and good fermentation of dough into surface feed bucket inner feeder cast will shape the dough into a packing auger, and the introduction of dough extruding mouth rotary cutting knife cut into uniform size of dough, and turn into a twin roll type forming groove. Fermentation under the screw driven rapidly knead the dough into feed hopper, feeder cast will shape the dough into a packing auger, and launched crowded mouth rotary rubbing to form a smooth surface spherical dough blank. If you need a different shape, a specially designed auxiliary device to achieve the weight of billet surface by an adjusting device set. The double roller forming machine theory error, leave some blank dough spin marks affect the quality of the finished appearance. And set up a plastic machine to be eliminated. The spherical twist ratio of height to diameter greater than the green cylinder, and then become the steaming process thoroughly smooth spherical. The production of key equipment Steamed BunsOne is to solve the problem of labor intensity, the two is to ensure product hygiene, three are automatically arranged accurately, to avoid sticking to the skin. Dial plate discharge machine.The plate-type discharge machine feeds the formed dough to the discharge machine at a certain distance from the conveyor belt. The upper part of the discharge machine is provided with a rotating carboxylate plate, and the direction of rotation of the plate is perpendicular to the direction of movement of the conveyor belt. Whenever a certain number of boring heads pass through the conveyor belt, the dial plate moves once and a predetermined number of boring heads are dialed into the locators located under the conveyor belt. With the natural drop of the steamed bread and the arrangement of the discharge port, a certain number of steamed breads are arranged neatly on the tray or main conveyor belt with a certain clearance, and the next process is performed. The number of steamers dialed by the dial plate is based on the output of the production line. set up. Zhu Keqing, the industrialized production technology of the traditional staple steamer, completely synchronizes the movement of the dial plate with the conveyor belt speed. Both must be connected by a transmission system. 
This discharge machine, structureSimple and convenient for the layout of the production line.Automatic discharge machine. The discharge principle of this kind of discharge machine is to use the self weight of the hammer to guide theraw blank into a fixed positioning device through an intermittent placement guide to achieve automatic alignment. The machine is mainly composed of three parts: guide, fixed positioning device and main transmission mechanism. After the boring head has been formed, the feeding port fed into the guide is lifted by the conveyor belt, and the guide is a raceway that can rotate flexibly. One is to control the rolling drop of the boring head and the other is to accurately realize the rotation at a certain angle, and ensure that the outlet of the boring head is opposite to the hole arranged on the plane of the positioning device. The function of the fixed positioning device is to change the position of the blank to achieve discharge. The arrayed blanks fall onto the tray or conveyor belt and run forward to the next process.Steaming is the two key processes that determine the final quality of steamed bread. The production process combines the two processes together in a single positioning situation, and the bursting and steaming are controlled and implemented according to the specified process conditions and requirements. The purpose of bursting is to make the dough continue to expand more evenly, and the volume after the curl increases compared to the original. There are two forms of bursting process, one is a batch layout and the other is a continuous layout. The batch type arrangement uses a special boring tray. After arranging the boring heads, the rakes are sent together with the pallet trucks and sentinto the masher to be steamed. The continuous method is to link the bursting and steaming process with a set of mechanisms and complete the steaming under the appropriate process conditions. Continuous production We have developed two devices. One is a tunnel type and the other is a cascade type. Tunnel type device. The tunnel type device consists of a power source, a chain, a sprocket, a carriage and a guide rail. This kind of conveying device feeds the dough at one end of the steamer, ensures the steam process conditions in the steam chamber, and matures when the hammer is output from the other end. At the turning point of the device, the breadboard automatically falls off due to the white weight, and the conveyor belt enters the finished product warehouse. This device has a simple structure, convenient maintenance, and high reliability. However, it has a large floor area and relatively low thermal efficiency. The return trip cannot be used. It has brought some difficulties to the promotion of the production line.Stacked device. The stacking device ensures the structure is simple and the action is realized. The conveyor belt is composed of multiple pallets. A long and a short two pins are projected from both ends of each pallet. The long pins are hinged on the two chains and the short pins are covered with rollers. In the horizontal movement state, the chain drives the carriage, the short pins have supporting rails, and all the carriages are in a plane. During cornering, the long axis moves in a circular motionwith the wheel, and the short axis enters an arc-shaped track to perform a circular motion synchronously, and the dragging plate realizes a translational transition at the turning point, which ensures that the position of the hammer head when turning. 
When the steamed bread is steamed and the short axis support rail is removed at the outlet position for a certain distance, the weight loss will be reversed along the long axis to disengage the steamed bread.Shantou factory production must be adapted to local conditions, the scale is moderate, and the output and variety must take into account the market demand. In terms of equipment, we should focus on the development of multi-functional continuous cooking machines and single machines with complete functions to produce taro products of different flavors and varieties.中文译文工业化生产馒头的方案馒头在我国人民日常生活中占主要地位,是传统的大众主食。
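As a small illustration of the flour specification discussed above (a wet gluten content of about 39.7% giving the best dough), the sketch below screens a flour lot and lists the industrialized process steps in order. Only the 39.7% figure comes from the text; the ±3 percentage-point acceptance window and the function and variable names are assumptions made for this example.

# A minimal screening sketch, assuming a +/-3 percentage-point working window
# around the wet-gluten level reported above to give the best dough.
PROCESS_STEPS = ["mixing", "forming", "shaping", "discharge",
                 "proofing", "steaming", "cooling", "packaging"]

TARGET_WET_GLUTEN = 39.7   # %, value reported above for good steamed-bread dough

def flour_ok(wet_gluten_pct: float, window: float = 3.0) -> bool:
    """True if the flour lot is close to the wet-gluten level reported to work best."""
    return abs(wet_gluten_pct - TARGET_WET_GLUTEN) <= window

print(flour_ok(41.0))               # True for this hypothetical lot
print(" -> ".join(PROCESS_STEPS))   # the production pipeline described above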

环境会计信息披露外文文献翻译中英文.pdf


外文文献翻译原文及译文(本文档归max118 网hh2018 所有,仅供下载使用)中文标题:印度环境会计披露实践的影响因素:来自NIFTY 公司的经验证据文献出处:The IUP Journal of Accounting Research & Audit Practices, Vol. 15, No. 1, 2016译文字数:3900 多字原文Factors Influencing Environmental Accounting and Disclosure Practices in India: Empirical Evidence from NIFTY CompaniesB Omnamasivaya* and M S V PrasadThe study examines the factors determining the level of environmental disclosure information by taking a sample of NIFTY 50 companies from National Stock Exchange (NSE). The environmental information disclosure is measured by using an Environmental Accounting Disclosure Index (EADI) and the variables used in the study are profitability, corporate size, age, financial leverage, industry type, legal ownership and foreign operations. The relationship is tested using multiple regression analysis. The results show that there is a positive relationship between EADI and profitability, financial leverage, industry type and legal ownership, and a negative relationship between EADI and corporate size, age and foreign operations.IntroductionClimate change is one of the greatest challenges that the world is facing today. Climate change is the variation in the global climate over time. The climate change creates manifold problems like global warming, glacier meltdown, soil erosion, land degradation, deforestation, loss of biodiversity and all kinds of pollution. Human influence on the nature is one of the major causes of such problems. Indiscriminate use of resourcesand undue influence on nature in the name of development can be identified as the prime causes of climate change. As a result, in the last few decades, the adverse effect of environmental pollution on economic development has become a public concern all over the world (Goswami, 2014).The state of world‘s environment and the impact of mankind on the ecology of the world have led to increased public concern and scrutiny of the operations and performance of organizations. Globally, corporations are expected to include environmental concerns in business operations and interaction with stakeholders. As a result, firms can no longer ignore the problems of the society in which they operate. This has thus instituted a social contract between organizations and the environment, thereby making environmental responsibility a corporate dictate (Olayinka and Oluwamayowa, 2014).Every business has responsibility to use the resources at judiciously. Every enterprise needs to behave like a good corporate citizen, and the corporate behavior is judged by its actions related to the community, the steps taken to protect the environment or pollution control. In the context of the Indian corporate sector, companies are not performing as good citizens. Due to this reason many laws have been laid down by the government for making the companies good corporate citizens and fulfill their social responsibility (Chauhan, 2005).In India, the economic reforms initiated in the 1990s have unwittingly contributed to a rise in environmental problems. The awareness level of stakeholders and public regarding the environmental issues has increased the pressure on companies to disclose environmental information. As a result, the companies have started disclosing the environmental information in annual reports and sustainability reports to satisfy all their stakeholders.The Indian government has taken several steps to protect the environment. 
It has set up the Ministry of Environment, Forest and Climate Change (MoEFCC) with the aim to coordinate, among the states and the various ministries, the issues relating to environmental protection and antipollution measures. Necessary legislation has also been passed. In India, Central Pollution Control Board (CPCB) and State Pollution Control Board (SPCB) were established under the Water Act. The CPCB has identified 17 categories of industries which are highly polluting (Joshi et al., 2011).In India, specific environmental accounting rules or environmental disclosure guidelines for communication to different stakeholder groups are not available for Indian companies. There is no mandatory requirement for quantitative disclosure of (financial) environmental information in annual reports either under the Companies Act or as per the Indian Accounting Standards. Furthermore there are 23 stockexchanges in India which are controlled by the Securities Exchange Board of India (SEBI) Act, 1992. Each of these stock exchanges has different listing requirement for Indian companies to disclose environmental information. Therefore, any environmental disclosure by Indian companies is purely voluntary (Makori and Jagongo, 2013). Against this backdrop, the present study examines the factors determining the level of environmental disclosure information in India.Legitimacy TheoryIn order to explain the reasons for environmental disclosure, we use legitimacy theory. There are many theories which explain the various reasons for social and environmental accounting disclosures, but legitimacy theory is the most suitable theory to explain the environmental disclosure. Organizations cannot survive without meeting the societal expectations. The society expects that the organizations should be proactive in protecting the environment and minimizing the environmental hazards. In case organizations fail to meet the societal expectations, there is a severe threat to their existence. Nowadays Indian companies are legitimizing because of the awareness about environmental disclosure practices in the society. Therefore, Indian companies are taking several steps to protect the environment and are disclosing the relevant environmental information in their annual reports and company websites.Legitimacy relates to the environmental issues which are disclosedin the companies’ annual reports. This indicates the management concerns towards the community. Therefore, the management of different companies or managers have different ideas or thoughts about what the society expects and managers will adapt different strategies to show the society that the organization is meeting the expectations of the community (Zain, 2006).The theory of legitimacy is based on two fundamental ideas: companies need to legitimize their activities, and the process of legitimacy that confers benefits to businesses. Thus, the first element is compatible with the idea that environmental disclosure is related to the social pressure. In this context, the need for legitimacy is not the same for all companies due to the degree of social pressure the company is exposed to, and the level of response to this pressure. There are a number of factors which determine the degree of social pressure on companies and their responses to the pressure. These factors are potential determinants of corporate social disclosure. The second component is based on the idea that companies can expect to benefit by a legitimate behavior based on the social responsibility activity. 
In addition to that, the legitimacy theory provides a comprehensive framework to explain both the determinants and consequences of social disclosure (Mohamed et al., 2014).Literature ReviewKokubu et al. (2001) examined the annual reports of 1,203 companies to investigate the determinants of environmental disclosure. Environmental disclosure was measured by using an environmental disclosure index and the six independent variables used in the study were company size, financial performance, strength of consumer relations, dependence on debt, dependence on the capital market and type of industry. The study found that company size and industry type influence environmental disclosure.Elijido-Ten (2004) conducted a study on the determinants of environmental disclosures by using 40 Malaysian companies by applying stakeholder theory. The environmental disclosure was measured by using an environmental disclosure index. The study used three determinants: stakeholder power, strategic posture and economic performance. The study found that both top management and government power were the determinants of environmental disclosure, and it was also found that there was no relationship between economic performance and environmental disclosure.Yuen et al. (2009) examined 200 companies to investigate the relationship between firm characteristics and voluntary disclosure. Voluntary disclosure practices were measured by using a disclosure index and the independent variables used in the study were concentration of ownership, ownership by state, individual ownership, firm size, leverage,profitability and type of industry. The study found that individual ownership, audit committee, firm size, and leverage positively related to voluntary disclosure.Galani et al. (2011) examined the relationship between environmental disclosure and firm size by using 100 Greek companies. Environmental disclosure was measured by using environmental disclosure index and the independent variables tested in the study were profitability, size and listing status. The study found that there was a positive significant relationship between environmental disclosure and size of the firm and it was also found that there was no relationship between environmental disclosure and profitability listing requirements.Joshi et al. (2011) analyzed as ma ny as 45 Indian companies’ annual reports to investigate the factors influencing environmental disclosure. The environmental disclosure was measured using environmental disclosure index and the independent variables used in the study were profitability, size, accounting firm, industry, foreign operations, age, ownership and financial leverage. The study found that size and industry were significant determinants for environmental disclosure.Rouf (2011) examined the relationship between firm-specific characteristics and Corporate Social Responsibility Disclosure (CSRD) by taking 176 Bangladesh companies. CSRD was measured by using the CSRD index and the variables in the study were independent directorsand firm size. The study found that there was a positive relationship between CSRD and independent directors and firm size did not affect CSRD.Abdo and Al-Drugi (2012) studied whether any company characteristics influenced environmental disclosures by using 43 Libyan oil and gas companies. Environmental disclosures were measured using content analysis through word count and four characteristics were selected: company’s size, privatization, age, and nationality. 
The study found that there was a positive association between environmental disclosure and company’s size, company’s privatization, and company’s nationality; and it was also found that the age of the company was significant and negatively related to the level of environmental disclosure.Oba and Fodio (2012) examined the relationship between board characteristics and quality of environmental disclosure by taking 21 companies in Nigeria. Environmental disclosure was measured by using an environmental disclosure index and the independent variables used in the study were board size, foreign directors, gender mix, and board independence. The study found that there was no relationship between board size and environmental disclosure.Suttipun and Stanton (2012) conducted a study on the determinants of environmental disclosure by using 75 Thai companies. The environmental disclosure was measured by word count and the fiveindependent variables used in the study were size of the company, type of industry, ownership status, profitability and country of origin of the company. The study found that there was a positive relationship between environmental disclosure and size of the company.Development of HypothesesCorporate SizeMany of the researchers found a positive relationship between environmental disclosure and size, and many studies supported that large- sized firms disclose more on environment (e.g., Kokubu et al. 2001; Joshi et al., 2011; Suttipun and Stanton, 2012; Makori and Jagongo, 2013; Akbaş , 2014; and Sulaimana et al., 2014).There is a contrast between small enterprises and large enterprises. Large companies require more funds and for that they raise funds through external sources. For attracting the investors and to reduce the agency cost, large companies disclose more information and therefore get public support (Joshi et al., 2011).ProfitabilityThe profitability of a firm is an important factor in determining the environmental disclosure practices. As for whether environmental issues are important or not, it is argued that when the profit is low, the importance of environmental issues is low (Joshi et al., 2011). Many studies have reported that there is a positive relationship betweenprofitability and environmental disclosure (e.g., Nurhayati et al., 2015). A very few studies did not support that (e.g., Galani et al. 2011; Rouf, 2011; Akbaş , 2014; and Sulaimana et al., 2014).Many studies have used the profitability ratios like Return on Assets (ROA), Return on Investment (ROI), Return on Equity (ROE), Net Profit Margin and Dividend Per Share (DPS) to measure the firm profitability. This study uses ROE to measure profitability.Financial LeverageThe agency theory states that with the increase of debt proportion in capital structure, the greater is likely to be the conflict of interest between shareholders, creditors and managers; and the higher the agency cost, the greater is the incentive for managers to disclose more information. From the perspective of social and environmental responsibilities, companies with higher financial leverage are willing to disclose more environmental information to maintain good relationship with stakeholders (Joshi et al., 2011).Many studies have supported the association between financial leverage and environmental disclosure (Joshi et al., 2011; and Sulaimana et al., 2014). They reported that financial leverage has no impact on the disclosure level in India. Kokubu et al. 
(2001) stated that debt did not significantly influence the corporate environmental reports in Japan. However, this study uses debt-equity ratio for measuring financialleverage.Industry TypeMany studies have examined whether the industry influences the disclosure of environmental information, and many studies have supported strongly that environmental-sensitive companies disclose more environmental information than non-environmental-sensitive companies. Joshi et al. (2011) stated that environmental-sensitive companies in India are likely to disclose more environmental protection information than others. Akbaş (2014) reported that t here is a significant positive relationship between industry membership and the extent of environmental disclosure.ConclusionThe study examined the factors influencing EADI by taking a sample of 50 companies listed on NSE. The environmental accounting disclosure is measured by EADI, and the independent variables used in the study are corporate size, age, profitability, financial leverage, legal ownership, industry and foreign operations. The relationship is tested using multiple regression analysis. The R2 under the model is 0.6033, which indicates that the model is capable of explaining 60.33% of variability in the disclosure of environmental information in the sample companies. The adjusted R2 indicates that 53.72% of variation in the dependent variable is explained by the variations in the independentvariables. The results of multiple regression reveal that there is a positive relationship between EADI and profitability, financial leverage, industry type, and legal ownership, and a negative relationship between EADI and corporate size, age and foreign operations.Limitations: The main limitation of the study is that the data was selected only for one year. The sample size was also limited. Another limitation of the study is that there are many variables which may influence environmental disclosure like board of directors, CEO’s role, audit firm size, etc., but we have selected very few variables.Future Scope: There is huge scope for further research on environmental accounting disclosure in the Indian context, as there is less amount of research on this subject. Further research can focus on the relationship between environmental accounting disclosure practices and financial performance of the companies.译文印度环境会计披露实践的影响因素:来自NIFTY 公司的经验证据B Omnamasivaya,M S V Prasad该研究通过从国家证券交易所(NSE)获取NIFTY 50 公司的样本来分析环境披露信息水平的影响因素。
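The relationship reported in the study (EADI regressed on seven firm characteristics, with R² = 0.6033 and adjusted R² = 0.5372) can be reproduced in outline with ordinary least squares. The sketch below uses statsmodels on synthetic placeholder data; none of the numbers are from the NIFTY sample, and the column names simply mirror the variables listed above.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic placeholder data: 50 firms with the study's seven explanatory
# variables. None of these values are from the actual NIFTY sample.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "size":      rng.normal(10, 2, 50),       # e.g. log of total assets
    "age":       rng.integers(5, 80, 50),
    "roe":       rng.normal(0.15, 0.08, 50),  # profitability proxy (ROE)
    "leverage":  rng.normal(1.0, 0.4, 50),    # debt-equity ratio
    "industry":  rng.integers(0, 2, 50),      # 1 = environmentally sensitive
    "ownership": rng.integers(0, 2, 50),      # legal ownership dummy
    "foreign":   rng.integers(0, 2, 50),      # foreign operations dummy
})
df["eadi"] = rng.uniform(0, 1, 50)            # disclosure index score

X = sm.add_constant(df.drop(columns="eadi"))
model = sm.OLS(df["eadi"], X).fit()
print(model.rsquared, model.rsquared_adj)
print(model.params)

With the authors' actual data in place of the random columns, model.rsquared and model.rsquared_adj would correspond to the 0.6033 and 53.72% figures reported in the conclusion, and the signs of model.params would indicate the positive and negative relationships listed there.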

汽车工程客运车辆中英文对照外文翻译文献


汽车⼯程客运车辆中英⽂对照外⽂翻译⽂献(⽂档含英⽂原⽂和中⽂翻译)中英⽂翻译Passenger vehicles in the United StatesFrom Wikipedia, the free encyclopediaThe neutrality of this article is disputed. Please see the discussion on the talk page. Please do not remove this message until the dispute is resolved. (December 2007)Note: this article adopts the U.S. Department of Transportation'sdefinition of a passenger vehicle, to mean a car or truck, used for passengers, excluding buses and trains.The United States is home to the largest passenger vehicle market of any country in the world.[1]Overall, there were an estimated 254.4 million registered passenger vehicles in the United States according to a 2007 DOT study.[2] This number, along with the average age of vehicles, has increased steadily since 1960, indicating a growing number of vehicles per capita. The United States is also home to three large vehicle manufacturers: General Motors, Ford Motor Company and Chrysler, which have historically been referred to as the "Big Three." Chrysler however is no longer among the top three; but is number five, behind Toyota and Honda. The motor car though has clearly become an integral part of American life, with vehicles outnumbering licensed drivers.[2] StatisticsThe United States Department of Transportation's Federal Highway Administration as well as the National Automobile Dealers Association have published data in regard to the total number of vehicles, growth trends, and ratios between licensed drivers, the general population, and the increasing number of vehicles on American roads. Overall passenger vehicles have been outnumbering licensed drivers since 1972 at an ever increasing rate, while light trucks and vehicles manufactured by foreign marques have gained a larger share of the automotive market in theUnited States. In 2001, 70% of Americans drove to work in cars.[3] New York City is the only locality in the country where more than half of all households do not own a car (the figure is even higher in Manhattan, over 75%; nationally, the rate is 8%).[3]Total number of vehiclesAccording to the US Bureau of Transportation Statistics for 2009 there are 254,212,610 registered passenger vehicles. Of these, 193,979,654 were classified as "Light duty vehicle, short wheel base, while another 40,488,025 were listed as "Light duty vehicle, long wheel base." Yet another 8,356,097 were classified as vehicles with 2 axles and 6 tires and 2,617,118 were classified as "Truck, combination." There were approximately 7,929,724 motorcycles in the US in 2009. [4] According to cumulative data[1]by the Federal Highway Administration (FHW A) the number of motor vehicles has also increased steadily since 1960, only stagnating once in 1997 and declining from 1990 to 1991. Otherwise the number of motor vehicles has been rising by an estimated 3.69 million each year since 1960 with the largest annual growth between 1998 and 1999 as well as between 2000 and 2001 when the number of motor vehicles in the United States increased by eight million.[1]Since the study by the FHA the number of vehicles has increased by approximately eleven million, one of the largest recorded increases. The largest percentage increase was between the years of 1972and 1973 when the number of cars increased by 5.88%.Age of vehicles in operationIn the year 2001, the National Automobile Dealers Association conducted a study revealing the average age of vehicles in operation in the US. 
The study found that of vehicles in operation in the US, 38.3% were older than ten years, 22.3% were between seven and ten years old, 25.8% were between three and six years old and 13.5% were less than two years old. According to this study the majority of vehicles, 60.6%, of vehicles were older than seven years in 2001.[5] This relatively high age of automobiles in the US might be explained by unaffordable prices for comparable new replacement vehicles and a corresponding gradual decline in sales figures since 1998.[6] Also, many Americans own three or more vehicles. The low marginal cost of registering and insuring additional older vehicles means many vehicles that are rarely used are still given full weight in the statistics.The median and mean age of automobiles has steadily increased since 1969. In 2007 the overall median age for automobiles was 9.4 years, a significant increase over 1990 when the median age of vehicles in operation in the US was 6.5 years and 1969 when the mean age for automobiles was 5.1 years.[7] Of all body styles, pick-up trucks had the highest meanage in 2001 (9.4 years), followed by cars with a mean age of 8.4 years and van with a mean age of 7.0 years. As SUVs are part of arelatively new consumer trend originating mostly in the 1990s, SUVs had the lowest mean age of any body style in the US (6.1 years). The average recreational vehicle was even older with a mean age of 12.5. For all body styles the mean vehicle age increased fairly steadily from 1969 to 2001.[7] In March 2009, RL Polk released a study conducted between 2007 to 2008 which indicated that the median age of passenger cars in operation in the US increased to 9.4 years, and that the median age for light trucks increased from 7.1 years in 2007 to 7.5 years in 2008.SalesIn the year 2009, about 5.5 million new passenger cars were sold in the United States[6] according to the U.S. Department of Transportation. This figure “Includes domestic and impor ted vehicles." (Department of Transportation) The number of vehicles sold in the US has been decreasing at a gradual yet continuous rate since 1999, when nearly 8.7 million vehicles were sold in the US. Looking back at history however, reveals that such decline is only part of normal market trends and most likely only a temporary affair. Overall, 1985 was a record year with cars sales totaling just over eleven million.[6] While imports have been gaining ground in terms of units sold during the 2000s and have regained roughly the same market share they held in 1992, the sales of domestic vehicles are still more than double those of imported vehicles. It should be noted, however that the US Bureau of Transportation Statistics "Includes carsproduced in Canada and Mexico" as domestic vehicles as both countries are part of the North American Free Trade Agreement (NAFTA), thus including many cars by Asian and European manufacturers - many V olkswagens are made in Mexico, Toyotas in Canada, also. In 2006 the sales of vehicles made in NAFTA states totaled 5.5 million, while the sale of imported vehicles totaled 2.2 million. 923,000 vehicles were imported from Japan, making it the greatest exporter of vehicles to the US. Germany was the second largest exporter of vehicles to the US, with 534,000 units exported to the US in 2006. Imports from all other nations, except Germany and Japan, totaled 729,000.[8]美国的客运车辆From Wikipedia, the free encyclopedia这篇⽂章的中⽴性是有争议的。
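The registration statistics quoted above (roughly 3.69 million additional vehicles per year since 1960, and a 5.88% jump between 1972 and 1973) reduce to two simple calculations, sketched below in Python. The input figures in the example calls are hypothetical round numbers chosen only to show the arithmetic; they are not values taken from the FHWA tables.

# Small helpers for the registration statistics quoted above.
def pct_change(old: float, new: float) -> float:
    """Year-over-year percentage change in registered vehicles."""
    return (new - old) / old * 100.0

def avg_annual_increase(first: float, last: float, years: int) -> float:
    """Average absolute increase per year over a span of `years` years."""
    return (last - first) / years

# Hypothetical counts: 120 M one year, 127 M the next.
print(f"{pct_change(120.0e6, 127.0e6):.2f}% growth")
# Hypothetical 1960 starting count of 74 M against the 254.4 M reported for 2009,
# which lands near the ~3.69 million-per-year average cited above.
print(f"{avg_annual_increase(74.0e6, 254.4e6, 49) / 1e6:.2f} M per year")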

Construction Engineering: Bored Cast-in-Place Piles (Chinese-English Foreign Literature Translation)


(Document contains the English original and a Chinese translation)

Bored pile

Summary

Bored (flushed or dug) cast-in-place piles have been developed and applied since the early 1960s, beginning in the Nanyang region of Henan Province. Because of their many advantages, they have been widely used on all kinds of foundation soils, including special soils such as soft soil, loess and expansive soil, and in industrial, civil, municipal, railway, highway, port and other types of engineering practice. Compared with precast piles, bored pile construction produces no driving noise and no vibration, has little impact on surrounding buildings and the environment, and allows large pile diameters, deep embedment and high bearing capacity. In China, bored piles have reached a maximum diameter of 4000 mm and a maximum depth of 104 m; steel pipe piles have reached a maximum diameter of 1200 mm and a maximum depth of 83 m; prestressed concrete pipe piles have reached a maximum diameter of 1300 mm and a maximum depth of 40 m.

As China's construction boom continues and high-rise buildings and long-span bridges become ever more common, higher bearing capacities are demanded of pile foundations. Large-diameter bored piles have therefore developed rapidly, and pile lengths and diameters keep growing. However, among the existing piling methods, although the bored pile has many advantages and is widely used in construction, the influence of the drilling mud is hard to avoid; it not only reduces the bearing capacity below expectations but also causes a serious waste of materials. Hand-dug piles, for their part, cannot easily reach great depths, and their bearing capacity is also hard to guarantee. In view of this situation, how to raise the level of bored pile construction technology and use the input materials more rationally, so as to greatly increase the bearing capacity of a single pile, has become a hot topic in the engineering field in recent years.

Introduction to bored cast-in-place piles

A cast-in-place pile is formed on the construction site by mechanical drilling, by driving a steel pipe to compact the soil, or by manual excavation, creating a hole in the foundation into which a reinforcement cage is placed and concrete is poured. Depending on the hole-forming method, cast-in-place piles can be further divided into bored piles, driven cast-in-place piles and hand-dug piles; by this definition and classification, the bored pile is one kind of cast-in-place pile.

Characteristics of bored cast-in-place piles

1. Compared with hammer-driven piles, construction noise and vibration are smaller.
2. Piles of larger diameter than precast piles can be constructed.
3. They can be used in all kinds of ground.
4. The construction quality strongly influences the bearing capacity of the pile.
5. Because the concrete is poured under mud, its quality is difficult to control.

Bored pile construction method

Percussion drilling, percussion-grab drilling and rotary drilling can all adopt the slurry-supported (mud-protected) hole-forming method. The construction process is: site leveling, slurry preparation, embedding the casing, setting up the work platform, installing and positioning the drilling rig, drilling, hole cleaning and inspection of hole quality, lowering the steel cage, pouring underwater concrete, pulling out the casing, and checking quality. Construction sequence:

(1) Construction preparation. Construction preparation comprises selecting the drilling rig, preparing the drilling tools and laying out the site.
The drilling rig is the main equipment for bored pile construction; it should be selected according to the geological conditions and to the conditions in which the various types of drilling machine can be applied.

(2) Drilling machine installation and positioning. If the base on which the rig is installed is not stable, defects such as inclined or eccentric piles can easily be produced during construction, so the installation base must be stable. On soft or sloping ground, the ground should be leveled, pad plates laid or reinforcement tied down. To keep the pile position within tolerance, setting out the center position and installing the drilling machine correctly are very important. The rig is first moved roughly into place under its own power with the drill pipe near the casing, and is then jacked up on its frame for accurate positioning, so that the lifting pulley, the drill bit (or the fixed guide) and the center of the casing lie on one vertical line, which ensures the verticality of the drilling rig. The deviation of the drilling position should not exceed 2 cm. Once aligned with the pile location, the drill beam at the top of the tower is leveled on sleepers, and guy ropes are pulled symmetrically about the drill axis.

(3) Embedding the casing. Below the groundwater level the soil of the hole wall can collapse under the hydrostatic pressure, and flowing sand may even occur. If a water head higher than the groundwater level is maintained inside the borehole, the hydrostatic pressure in the hole increases and collapse of the hole wall can be prevented. Besides playing this role, the casing also isolates surface water, protects the ground at the hole mouth, fixes the position of the pile hole and guides the drill.

(4) Slurry preparation. Drilling mud is composed of water, clay (bentonite) and additives. It floats the drilling debris out of the hole, cools and lubricates the drill bit, increases the hydrostatic pressure, and forms a mud cake on the hole wall that seals the bore against seepage and prevents the hole from collapsing. When preparing the drilling mud and circulating and purifying it, the slurry consistency should be chosen according to the drilling method and the formation conditions, and adjusted for stratigraphic changes and operational requirements: mud that is too thin removes the slag poorly and protects the wall poorly, while mud that is too thick weakens the impact of the bit and reduces the drilling speed.

(5) Drilling. Forming the borehole is a key procedure, and the operating requirements must be strictly followed during construction to ensure drilling quality. Attention must be paid to the quality of the hole: the centerline and verticality must be set out accurately and the casing pressed in well. During construction, slurry must be added continuously and the slag pumped out (for impact-type rigs), and the hole must be checked at all times for deviation. When impact or clamshell-type rigs are used, vibration and impact affect the stability of nearby holes, so a completed hole should be cleaned promptly, the reinforcement cage lowered and the underwater concrete poured without delay. The drilling order should also be planned realistically, so that drilling one pile hole does not disturb previously completed holes, while keeping rig moves short and avoiding mutual interference.

(6) Hole cleaning. The depth, diameter, position and shape of the hole are directly related to the quality of the finished pile.
Therefore, in addition to close observation and supervision during drilling, the hole depth, hole position, hole shape, hole diameter and so on must be inspected once the designed depth is reached. When the final hole inspection fully meets the design requirements, bottom-hole cleaning should begin immediately, so that the hole is not left so long that the mud settles or the borehole collapses. For friction piles, when the hole wall collapses easily, the sediment thickness before pouring the underwater concrete should not exceed 30 cm; when the hole wall does not collapse easily, it should not exceed 20 cm. For end-bearing piles, whether cast under water or in the dry, the sediment thickness should be less than 5 cm. Hole cleaning methods differ and should be applied flexibly according to the drilling rig used. Normally, hole cleaning can be done with a normal-circulation rotary rig, a reverse-circulation rotary rig, a mud suction machine, or a slag pumping cylinder. Cleaning the hole with a mud suction machine requires little equipment, is convenient to operate and cleans the hole thoroughly, but it should be used cautiously in unstable soils. Its principle is to use a compressor to blow high-pressure air into a suction dredge pipeline, which lifts the mud out.

(7) Pouring underwater concrete. After the hole is finished, the prefabricated reinforcement cage is hung vertically into the hole, positioned and fixed, and the concrete is then poured through a tremie pipe. Pouring must not be interrupted, otherwise broken (necked) piles are likely to result.

Factors affecting the bearing performance of bored piles

During construction, because of the construction machinery and the geological conditions, a layer of weak, disturbed soil 0.2 to 0.5 m thick, and in places up to 1 m thick, often forms at the pile base. Especially when drilling with mud in soft soil, sediment at the bottom of the hole is unavoidable; even after careful hole cleaning, some sediment settles between cleaning and concreting, and during drilling the soil at the hole wall and at the hole bottom is widely disturbed. All of this prevents the bearing capacity of the bored pile from being fully mobilized.

Static pile load tests show that the end bearing capacity of a bored pile accounts for only 15% to 35% of the ultimate load, and that the side resistance and the tip resistance are not mobilized simultaneously. Only a few millimeters of pile-top displacement are needed to mobilize the side friction fully, whereas mobilizing the tip resistance fully requires a pile-top displacement of 10% to 30% of the pile diameter. Such a large displacement is not acceptable in engineering. The side friction therefore reaches its limit and is exhausted while the end resistance, whose potential is large, is still far from fully mobilized. This is one reason why the ultimate bearing capacity of bored piles is not high. Research also shows that the soft soil at the pile base not only hinders the mobilization of the tip resistance but also causes a loss of side friction, and that a weak layer around the pile changes the nature of the friction between the pile shaft and the soil, which is very unfavorable to load transfer through pile-soil friction. This is another reason why the bearing capacity of bored piles is not high.

Improving the bearing capacity of bored piles

Based on the above analysis of why the bearing capacity of bored piles is not fully developed, several methods for improving the bearing capacity of pile foundations have appeared in engineering practice, mostly centered on eliminating the sediment at the pile bottom and the weak layer around the pile.

(1) Preloading method: the soil at the pile bottom is preloaded in advance so that it is compacted and the bearing capacity of the pile is improved.
This, however, is time-consuming, costly and not easy to implement.

(2) Enlarging the end bearing area. Belled (under-reamed) piles have been used fairly often in past engineering practice, but they still do nothing about the soft soil at the pile bottom.

(3) Sand-sleeve pile technology: during cast-in-place pile construction, a double casing is used to fill sand around the pile, forming a sand sleeve about 3 to 10 cm thick; the sand sleeve improves the friction between the pile shaft and the hole wall.

(4) Grouting technology: grouting methods can be divided into pre-grouting and post-grouting. In pre-grouting, after the hole has been formed and before the concrete is placed, a nozzle pipe is inserted through the hole bottom into the soil and grout is injected, so that the soil at the pile base is mixed with cement; the pile concrete is then poured.

Pile cap

The cap is a reinforced concrete platform set on the pile tops to carry the load transmitted from the pier or column, distribute it to the piles, and connect all the pile tops. The cap is the part where the piles meet the columns or piers; several piles, even ten or more, are linked by the cap to form the pile foundation. Caps are divided into high pile caps and low pile caps: a low pile cap is generally buried or partly buried in the ground, while a high pile cap generally stands above the ground or the water surface. A high pile cap has a free length with no surrounding soil to help resist horizontal loads, so the stress state of the piles is very unfavorable: under external forces the internal forces and displacements of the piles are larger than those of a low-cap foundation at the same load level, and the stability is poorer than that of a low pile cap. High pile caps are generally used in port, wharf, marine and bridge engineering, while low pile caps are generally used for industrial and civil buildings. The pile head is generally embedded 0.1 m into the cap, and the reinforcement is anchored into the cap. The columns or piers are then built on the cap, forming a complete load transfer system. In recent years, because large-diameter bored piles have high rigidity and strength, high pile caps have been widely used in bridge foundation construction.
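
To relate the side-friction and tip-resistance discussion above to a formula, the conventional decomposition of the ultimate capacity of a single pile (a standard relation of pile design, not quoted from this document) can be written as

$$Q_u = Q_s + Q_p = u \sum_i q_{si}\, l_i + q_p A_p,$$

where $u$ is the pile perimeter, $q_{si}$ and $l_i$ are the unit side friction and thickness of soil layer $i$, $q_p$ is the unit end bearing resistance and $A_p$ is the pile tip area. Sediment left at the pile base mainly degrades the $q_p A_p$ term, while a weak layer along the shaft degrades the $q_{si}$ terms, which is consistent with the two causes of reduced capacity identified above.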


Paper: A Low-Power Wireless Smoke Detector for Home Fire Alarm Systems

Juan Aponte Luis 1, Juan Antonio Gómez Galán 2,* and Javier Alcina Espigado 1
1 OnTech Security LLC, C/Hispano Aviación, 7-9, 41300 La Rinconada, Sevilla, Spain; E-Mails: juan.aponte@ontech.es (J.A.L.); javier.alcina@ontech.es (J.A.E.)
2 Department of Electronic Engineering, Computer Systems, and Automatics, University of Huelva, Ctra Huelva, La Rábida, s/n, 21819 Huelva, Spain
* Author to whom correspondence should be addressed; E-Mail: jgalan@diesia.uhu.es; Tel.: +34-95921-7650; Fax: +34-95921-7348.
Academic Editor: Ingolf Willms
Received: 28 April 2015 / Accepted: 17 August 2015 / Published: 21 August 2015

Abstract: A new sensing device for fire detection in domestic environments is presented. The fire detector combines several sensors, so that it not only detects smoke but also distinguishes between different types of smoke. This capability avoids false alarms and allows warnings appropriate to each situation. Power consumption has been optimized in both hardware and software, giving a high degree of autonomy of nearly five years. The data collected by the device are transmitted to a base station over a wireless link, and the low cost and compact design open up a wide range of applications.

Keywords: fire detector; smoke sensor; wireless sensor network; low power

1. Introduction

Home fire detection is a topic of great interest, and much effort has gone into the design of automatic detection systems in most developed countries. A fire alarm system should be reliable and should promptly notify the occupants of a building of the presence of fire indicators such as smoke or high temperature. Because of their early detection capability, fast response time and relatively low cost, fire detectors are usually implemented as smoke sensors. Other detector types used for fire detection are based on gas sensors or temperature sensors. Fire detectors that rely on a single sensor, for example a smoke sensor alone, suffer a higher false-alarm rate, for instance when triggered by temperature changes. The smoke sensor works on the principle that infrared (IR) light entering a small chamber is, in the presence of smoke, scattered onto a photodiode or similar device. The sensitivity of a smoke sensor therefore also depends on the ambient temperature, although in high-performance devices this effect is compensated. A fire detector that combines several types of sensors, however, constitutes a more effective fire alarm system.

The traditional approach has been built on wired systems, such as bus systems, because of the high security demanded by this critical application. Although bus networks have improved considerably in recent years in terms of scalability and maintenance, wireless systems, which offer a low-cost solution and great flexibility in placement, have become more attractive. Wireless sensor networks require sensor nodes that are small, easy to deploy and of limited power consumption, so that battery-powered operation is practical. A wireless fire system must guarantee the functionality and security of the radio communication and avoid false alarm notifications. In addition, the system itself must be able to detect faulty components, physical damage or tampering attempts, which simplifies maintenance and reduces unnecessary costs.

2. Overview of the Sensing Device

The sensing device that has been developed sends the collected data to a base station, allowing further wireless network deployment. The base station acts as a gateway between the sensor nodes and the user, and a mobile application has also been developed to notify the user in real time when a fire alarm occurs. The wireless sensor network collects and analyzes the home fire sensing data and triggers the fire alarm accurately, while meeting the requirements of small size and low power consumption for the wireless nodes. Deploying a large number of sensor nodes also calls for a low-cost solution.

For early detection of a home fire, the system must measure several different parameters and draw conclusions from them; analog sensors therefore measure smoke, carbon monoxide (CO) and temperature. Figure 1 shows the building blocks of the developed device. It consists of a microcontroller, a short-range wireless transceiver, a battery, a CO sensor, a smoke sensor, a temperature sensor, a capacitive touch button, a red LED and a buzzer. The whole system is protected by a dedicated enclosure.

Figure 1. Building blocks of the sensing node.

The fire detection device is built on two printed circuit boards. The simpler of the two carries the touch button and the buzzer used for handling and control, while the main board carries the indicator LED, the sensing and data processing, and the wireless data transmission. The main board is the most important part of the sensing device, since it measures all the smoke, gas and temperature signals and sends them to the control center as alarm information. Proper energy management requires low-power electronics for the sensors: the microcontroller, the conditioning interfaces and the transceiver. The selected microcontroller is the PIC24FJ128GA306 (Microchip Technology, Chandler, AZ, USA). The wireless module exchanges data with the base station node in the sub-GHz band at 868 MHz following the IEEE 802.15.4 standard. The base station node acts as a gateway between the sensor network and the user (or an office, if the system is used in an industrial application). The power supply is a single lithium CR123 3 V, 1600 mAh battery. For a sensing device with this degree of autonomy, every hardware design decision and component choice must ensure full functionality at low power consumption. In addition, the software integrates power-saving modes. The battery lasts about five years.
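
As a rough plausibility check of that autonomy figure (an estimate added here, not taken from the paper): five years correspond to about 5 x 8760 h = 43,800 h, so a 1600 mAh cell can sustain an average current of roughly 1600 mAh / 43,800 h, about 37 uA. The design must therefore keep the microcontroller and the transceiver in their low-power modes almost all of the time, waking only briefly to sample the sensors and transmit.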

Wireless module: the wireless module is based on the MRF89XA transceiver, a low-power device. The MCP9700A temperature sensor is a low-cost, low-power analog sensor that needs no additional signal-conditioning circuitry; its voltage output is connected directly to an ADC input of the microcontroller. This sensor is used to track temperature changes in the application.
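
As an illustration of how such a reading might be converted in firmware, the sketch below assumes a 12-bit ADC result referenced to 3.0 V and uses the MCP9700A nominal transfer function of about 500 mV at 0 °C and 10 mV/°C; the adc_read_temperature_counts() helper is a hypothetical stand-in for whatever ADC routine the actual firmware uses.

```c
#include <stdint.h>

/* Hypothetical helper: returns the raw 12-bit ADC result (0..4095)
 * for the channel wired to the MCP9700A output. */
extern uint16_t adc_read_temperature_counts(void);

/* Convert an MCP9700A reading to tenths of a degree Celsius.
 * Assumes a 3.0 V ADC reference and the nominal transfer function
 * Vout = 500 mV + 10 mV/°C * T. */
static int16_t mcp9700a_read_temp_decidegrees(void)
{
    uint16_t counts = adc_read_temperature_counts();

    /* millivolts = counts * Vref(mV) / 4096 */
    int32_t millivolts = ((int32_t)counts * 3000) / 4096;

    /* T(°C) = (Vout - 500 mV) / 10 mV; scaling by 10 for deci-degrees
     * cancels the division, leaving (mV - 500). */
    return (int16_t)(millivolts - 500);
}
```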

Smoke sensor: this sensor uses photoelectric smoke detection and operates on the light-scattering principle. A light beam emitted by an SFH4551 light-emitting diode (LED) from OSRAM (Regensburg, Bavaria, Germany) is sent into a dark chamber whose light-absorbing black material prevents the receiver (an OSRAM SFH2500 photodiode) from seeing the light along the direct path. Inside the chamber, when smoke particles enter the light path, the light strikes the particles and is scattered onto the photosensitive device, alerting the microcontroller. Figure 2 shows the geometry of the chamber.

Figure 2. Smoke sensor chamber.

Figure 3 shows the circuit scheme of the smoke sensor. This topology is attractive because it requires fewer components than the conventional approach [18]. At zero bias the photodiode gives the most accurate linear operation and low noise, and this configuration also offers a switching speed better suited to precision applications. The photodiode current is converted into a usable voltage using a single operational amplifier as a current-to-voltage converter [19]. The feedback resistor is set to obtain the required sensitivity range, and a compensation capacitor is also placed in the feedback loop.
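
For reference, the standard transimpedance-amplifier relations (general textbook expressions, not values from the paper) are approximately

$$V_{out} = I_{pd} \, R_f, \qquad f_{-3\,\mathrm{dB}} \approx \frac{1}{2\pi R_f C_f},$$

so a larger feedback resistor $R_f$ raises the sensitivity at the cost of bandwidth, and the compensation capacitor $C_f$ keeps the stage stable and band-limited. Component values are not given in this excerpt, so none are assumed here.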

Carbon monoxide sensor: this sensor measures CO at the concentrations typically produced by the combustion of materials in a home fire. The TGS5342 sensor from Figaro (Minoh, Osaka, Japan) was selected for its small size, long life, good long-term stability and high accuracy. It can detect concentrations of up to 1% over a wide temperature range, and it makes it possible to distinguish smoke produced by burning wood from water vapor. Figure 4 shows the CO sensor measurement circuit, which is based on a basic transimpedance amplifier.

2.1. Sensor Data Acquisition

The microcontroller manages the wireless sensing device and controls the data acquisition from the individual sensors, the signal processing, the data management and the communication. The output signals of the analog sensors are converted into binary values using the microcontroller's analog-to-digital converter.
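
A minimal sketch of one such acquisition cycle is given below. It is not the authors' firmware: the channel assignment and the adc_read()/adc_enable() helpers are hypothetical placeholders for the microcontroller's actual ADC and power-management routines.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical low-level helpers (stand-ins for the real PIC24 drivers). */
extern uint16_t adc_read(uint8_t channel);   /* single 12-bit conversion */
extern void     adc_enable(bool on);         /* power the ADC only while sampling */

/* Channel assignment assumed for this sketch only. */
enum { CH_SMOKE = 0, CH_CO = 1, CH_TEMP = 2 };

typedef struct {
    uint16_t smoke_counts;
    uint16_t co_counts;
    uint16_t temp_counts;
} sensor_sample_t;

/* Acquire one sample from each analog sensor, keeping the ADC powered
 * for as short a time as possible to preserve battery life. */
static sensor_sample_t acquire_sensors(void)
{
    sensor_sample_t s;

    adc_enable(true);
    s.smoke_counts = adc_read(CH_SMOKE);
    s.co_counts    = adc_read(CH_CO);
    s.temp_counts  = adc_read(CH_TEMP);
    adc_enable(false);

    return s;
}
```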

Figure 5 shows a detailed image of the main board of the developed device. The temperature sensor, the microcontroller, the LED and the wireless module are on the top of the board, while the smoke sensor, the carbon monoxide (CO) sensor and the battery are on the bottom.

Figure 5. Main board of the fire detection device. (a) Top; (b) Bottom.

Figure 6 shows the electromagnetic shielding designed to avoid interference between the RF circuitry and the adjacent circuits.

Figure 6. Electromagnetic shielding of the RF module.

Few commercial smoke sensors or fire detectors include a CO sensor. The developed device includes this element while remaining a compact solution; to the authors' knowledge, it is the smallest smoke sensor that also incorporates a CO sensor.

As for the second board, its function is to capture presses of the capacitive touch button on the top panel of the sensor and to manage the different buzzer sounds used to warn the user of the presence of smoke or fire. This board includes the buzzer, the capacitive touch pad, a tamper sensor and the circuitry that controls these elements, as shown in Figure 7; the tamper sensor detects opening of the enclosure or attempts at sabotage.

Figure 7. Second board. (a) Top; (b) Bottom.

The touch button allows the user to switch the wireless fire sensing device on or off. The Atmel single-key AT42QT1010-TSHR sensor (San Jose, CA, USA) was selected for touch detection because it can generate a touch interrupt. This leads to a significant improvement in the power-consumption and autonomy figures, because the microcontroller does not need to poll the button state continuously; it can be woken from its low-power state by the interrupt generated by the touch sensor.
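
The sleep-until-interrupt pattern described here might look roughly like the following sketch; the platform hooks are placeholders, since the exact PIC24 configuration is not given in the text.

```c
#include <stdbool.h>

/* Hypothetical platform hooks (placeholders for the real PIC24 code). */
extern void enter_low_power_sleep(void);     /* e.g. a sleep instruction */
extern void configure_touch_interrupt(void); /* INTx pin wired to AT42QT1010 OUT */

static volatile bool touch_event = false;

/* Hypothetical interrupt service routine attached to the touch pin. */
void touch_pin_isr(void)
{
    touch_event = true;          /* just record the event */
}

void main_loop(void)
{
    configure_touch_interrupt();

    for (;;) {
        enter_low_power_sleep(); /* MCU draws almost no current here */

        /* Execution resumes only after an interrupt (touch, timer, RF...). */
        if (touch_event) {
            touch_event = false;
            /* toggle the device on/off, give buzzer/LED feedback, etc. */
        }
    }
}
```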

Figure 8 shows the circuit of the capacitive sensor used for the touch pad and the buzzer circuit.

Figure 8. (a) Scheme of the capacitive sensor used for the touch pad; (b) Buzzer circuit scheme.

2.2. Software

At the firmware level, a specific protocol stack has been developed for the RF communication. Preliminary tests with the MiWi protocol suggested by the manufacturer gave unstable results in terms of high latency, sometimes blocking parts of the hardware that required the processor's immediate attention (MiWi makes intensive use of busy-wait loops). In addition, encryption algorithms have been implemented to improve the security of the RF communication. The developed stack uses an interrupt-driven SPI controller to minimize latency and busy-wait cycles of the microcontroller; the transceiver controller can work in the background, so it wastes no busy-wait cycles and introduces no delay. The layers of the designed stack are built so that each layer can be used independently of the others, decoupling the different levels of functionality; this gives greater code portability, easier adaptation to new hardware, and simpler debugging and testing. The addressing scheme proposed by MiWi has been kept, and cluster-tree network topologies are supported (provided by the LLC layer).

The design has been carried out with the aim of minimizing latency and processor usage, with almost no busy-wait cycles; the whole design is therefore based on the interrupt-driven SPI controller. All the functionality is implemented as state machines in which each state is triggered by the interrupt that marks the end of the previous operation. Asynchronous events, such as the reception of new messages or network disconnections, are reported through a set of API callbacks added to each layer.
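
A highly simplified sketch of this style is shown below: each SPI-complete interrupt advances a transmit state machine, and an upper-layer callback is invoked once the frame has been handed to the radio. Names such as spi_write_async() and the "start TX" opcode are illustrative assumptions, not the authors' actual API or the MRF89XA register map.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical non-blocking SPI helper: starts a transfer and returns;
 * spi_done_isr() below runs from the SPI interrupt when it finishes. */
extern void spi_write_async(const uint8_t *buf, size_t len);

typedef enum { TX_IDLE, TX_LOAD_FIFO, TX_START_RADIO } tx_state_t;

typedef void (*tx_done_cb_t)(void);   /* upper-layer callback */

static tx_state_t   tx_state = TX_IDLE;
static tx_done_cb_t tx_done_cb;
static uint8_t      start_cmd[1] = { 0x83 };  /* illustrative "start TX" opcode */

/* Called by the MAC/LLC layer: begin sending a frame, with no busy-waiting. */
void radio_send_frame(const uint8_t *frame, size_t len, tx_done_cb_t done)
{
    tx_done_cb = done;
    tx_state   = TX_LOAD_FIFO;
    spi_write_async(frame, len);          /* step 1: load the radio FIFO */
}

/* SPI "transfer complete" interrupt: advance the state machine. */
void spi_done_isr(void)
{
    switch (tx_state) {
    case TX_LOAD_FIFO:                    /* FIFO loaded: trigger transmission */
        tx_state = TX_START_RADIO;
        spi_write_async(start_cmd, sizeof start_cmd);
        break;
    case TX_START_RADIO:                  /* radio is now sending on its own */
        tx_state = TX_IDLE;
        if (tx_done_cb) tx_done_cb();     /* notify the upper layer */
        break;
    default:
        break;
    }
}
```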

Thanks to this, although development has been somewhat more complex than it would have been with the MiWi stack, processor usage has been reduced to a minimum and the protocol stack barely affects the other parts of the device. Power consumption has also been reduced, because the CPU has more free time.

The sensor node also includes a bootloader. The goal is twofold: first, it allows the firmware to be updated remotely, saving maintenance costs; second, proper hardware operation can be verified by running a test application, after which the test phase can be completed and the final application programmed. At the software level, the bootloader has required defining the memory layout, the communication protocol and the data sharing. At the hardware level, an external flash memory has been included to store the new firmware image to be loaded, so that it cannot be lost and the microcontroller's memory cannot be left corrupted by a power failure or a communication failure during the remote update process.
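
The boot decision described here could be sketched as below; the image header layout, the CRC routine and the flash/copy helpers are assumptions made only for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical image header stored in the external flash. */
typedef struct {
    uint32_t magic;        /* marks a valid, fully received image */
    uint32_t length;       /* image size in bytes */
    uint32_t crc32;        /* checksum of the image body */
} fw_header_t;

#define FW_MAGIC 0x46574F4Bu   /* illustrative value */

/* Hypothetical helpers provided by the rest of the bootloader. */
extern bool     ext_flash_read_header(fw_header_t *h);
extern uint32_t ext_flash_crc32(uint32_t length);
extern void     copy_image_to_mcu_flash(uint32_t length);
extern void     jump_to_application(void);

void bootloader_main(void)
{
    fw_header_t h;

    /* Only program a new image if a complete, uncorrupted one is waiting
     * in the external flash; otherwise keep the current application, so a
     * power or radio failure during the update cannot brick the node. */
    if (ext_flash_read_header(&h) &&
        h.magic == FW_MAGIC &&
        ext_flash_crc32(h.length) == h.crc32) {
        copy_image_to_mcu_flash(h.length);
    }

    jump_to_application();
}
```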

2.3. Package Design

Figures 9 and 10 show the package designed to protect the sensing device from the environment. It is a cylindrical box whose perforated metal sides allow the entry of smoke and gases, with a front panel that carries the capacitive touch key. The package does not affect the response time, the accuracy of the sensors or the data transmission. The final dimensions of the sensor package are 70 mm and 30 mm (diameter and height).

Figure 9. Package design.

Figure 10. Detail of the complete manufactured wireless smoke sensing system.

3. Results

The proposed hardware and software solutions have been validated, and the functional performance of the device, such as range, flexibility and robustness, has been evaluated. The fire detection device combines three types of sensors, smoke, temperature and carbon monoxide, thereby reducing false alarms caused by water vapor, cigarette smoke and the like. Fire detection processing is carried out directly where the smoke appears, which makes it possible to locate the seat of the fire and to follow the spread of the flames accurately.
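
As an illustration of how readings from the three sensors might be combined to suppress false alarms, the sketch below requires smoke plus at least one corroborating signal before raising the alarm. The thresholds and the voting rule are invented for this sketch and are not the paper's algorithm.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint16_t smoke;        /* scattered-light level, ADC counts */
    uint16_t co_ppm;       /* CO concentration, ppm */
    int16_t  temp_rate;    /* temperature rise per minute, tenths of °C */
} reading_t;

/* Illustrative thresholds only; a real detector would calibrate these. */
#define SMOKE_TH   800
#define CO_TH       50
#define RISE_TH     80     /* 8 °C per minute */

/* Require smoke plus at least one corroborating signal (CO or a fast
 * temperature rise) before raising the alarm, so that steam or dust
 * alone does not trigger it. */
static bool fire_alarm(const reading_t *r)
{
    bool smoke = r->smoke     >= SMOKE_TH;
    bool co    = r->co_ppm    >= CO_TH;
    bool heat  = r->temp_rate >= RISE_TH;

    return smoke && (co || heat);
}
```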
