Chongqing Jiaotong University Graduation Design: Chinese-English Translations


Graduation Design Foreign-Language Translation (English Original + Translation)


English Translation: PART I - Optical Fiber Technology with Various Access

1 The Mainstream of Optical Networks

1.1 Optical Fiber Technology

Optical fiber production technology has matured, and fibers are now mass-produced. Single-mode fiber with a zero-dispersion wavelength of λ0 = 1.3 μm is in wide use today, while single-mode fiber with a zero-dispersion wavelength of λ0 = 1.55 μm has been developed and has entered practical use. Attenuation at the 1.55 μm wavelength is very small, about 0.22 dB/km, which makes this fiber better suited to long-distance, large-capacity transmission; it is the preferred medium for long-haul backbone links.
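As a rough illustration of the attenuation figure quoted above (an added worked example, not part of the original text): over a 100 km repeaterless span at 1.55 μm, the fiber loss is

$$L = 0.22\ \tfrac{\text{dB}}{\text{km}} \times 100\ \text{km} = 22\ \text{dB},$$

so the received power is $P_{\text{out}} = P_{\text{in}} \cdot 10^{-22/10} \approx 0.006\,P_{\text{in}}$, a level a typical receiver or line amplifier can still work with. At 1.3 μm, where attenuation is roughly 0.35 dB/km, the same span would lose about 35 dB, which is why the 1.55 μm window is preferred for long-haul transmission.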

At present, to meet the development requirements of different lines and local area networks, new fiber types have been specified, including non-zero-dispersion fiber, low-dispersion-slope fiber, large-effective-area fiber, and low-water-peak fiber.

Researchers in long-wavelength optics believe that, in theory, transmission distances of several thousand kilometers can be reached without repeaters, but this remains at the theoretical stage.

1.2 Optical Fiber Amplifiers

The erbium-doped fiber amplifier (EDFA), operating around the 1550 nm wavelength, can serve as a repeater for digital, analog, and coherent optical communication at different transmission rates, and can also amplify optical signals at specific wavelengths.

When a fiber network is upgraded from analog to digital signals or from low to high bit rates, or when the system is expanded with optical multiplexing techniques, the EDFA circuits and equipment need not be changed.

An EDFA can be used as a preamplifier for an optical receiver, as a booster (post-)amplifier for an optical transmitter, and as an amplifier compensating the light source.

1.3 Broadband Access

A variety of broadband access solutions are available to business and residential customers in different environments.

An access system mainly performs three major functions: high-speed transmission, multiplexing/routing, and network extension.

At present, ADSL is the mainstream access technology. It can economically transmit several megabits per second over twisted-pair copper lines, supporting both traditional voice services and data-oriented Internet access. At the central-office end, the ADSL multiplexer directs access data traffic to the packet-routed network, while voice traffic is carried to the PSTN, ISDN, or other packet networks.

Cable modems provide high-speed data communication over HFC networks, dividing the coaxial-cable bandwidth into upstream and downstream channels. They can provide video-on-demand (VOD) online entertainment, Internet access, and other services, while also supporting PSTN services.

Fixed wireless access systems, employing many advanced technologies such as smart antennas and advanced receivers, represent an innovative approach to access. They remain, however, the least certain of the access technologies, still awaiting further exploration and practice.

Graduation Thesis Chinese-English Translation


Anti-Aircraft Fire Control and the Development of Integrated Systems at Sperry

The dawn of the electrical age brought new types of control systems. Able to transmit data between distributed components and effect action at a distance, these systems employed feedback devices as well as human beings to close control loops at every level. By the time theories of feedback and stability began to become practical for engineers in the 1930s, a tradition of remote and automatic control engineering had developed that built distributed control systems with centralized information processors. These two strands of technology, control theory and control systems, came together to produce the large-scale integrated systems typical of World War II and after.

Elmer Ambrose Sperry (1860-1930) and the company he founded, the Sperry Gyroscope Company, led the engineering of control systems between 1910 and 1940. Sperry and his engineers built distributed data transmission systems that laid the foundations of today's command and control systems. Sperry's fire control systems included more than governors or stabilizers; they consisted of distributed sensors, data transmitters, central processors, and outputs that drove machinery. This article tells the story of Sperry's involvement in anti-aircraft fire control between the world wars and shows how an industrial firm conceived of control systems before the common use of control theory. In the 1930s the task of fire control became progressively more automated, as Sperry engineers gradually replaced human operators with automatic devices. Feedback, human interface, and system integration posed challenging problems for fire control engineers during this period. By the end of the decade these problems would become critical as the country struggled to build up its technology to meet the demands of an impending war.

Anti-Aircraft Artillery Fire Control

Before World War I, developments in ship design, guns, and armor drove the need for improved fire control on Navy ships. By 1920, similar forces were at work in the air: wartime experiences and postwar developments in aerial bombing created the need for sophisticated fire control for anti-aircraft artillery. Shooting an airplane out of the sky is essentially a problem of "leading" the target. As aircraft developed rapidly in the twenties, their increased speed and altitude rapidly pushed the task of computing the lead out of the range of human reaction and calculation. Fire control equipment for anti-aircraft guns was a means of technologically aiding human operators to accomplish a task beyond their natural capabilities.

During the first world war, anti-aircraft fire control had undergone some preliminary development. Elmer Sperry, as chairman of the Aviation Committee of the Naval Consulting Board, developed two instruments for this problem: a goniometer, a range-finder, and a pretelemeter, a fire director or calculator. Neither, however, was widely used in the field.

When the war ended in 1918 the Army undertook virtually no new development in anti-aircraft fire control for five to seven years. In the mid-1920s, however, the Army began to develop individual components for anti-aircraft equipment, including stereoscopic height-finders, searchlights, and sound location equipment. The Sperry Company was involved in the latter two efforts.
About this time, Maj. Thomas Wilson, at the Frankford Arsenal in Philadelphia, began developing a central computer for fire control data, loosely based on the system of "director firing" that had developed in naval gunnery. Wilson's device resembled earlier fire control calculators, accepting data as input from sensing components, performing calculations to predict the future location of the target, and producing direction information to the guns.

Integration and Data Transmission

Still, the components of an anti-aircraft battery remained independent, tied together only by telephone. As Preston R. Bassett, chief engineer and later president of the Sperry Company, recalled, "no sooner, however, did the components get to the point of functioning satisfactorily within themselves, than the problem of properly transmitting the information from one to the other came to be of prime importance." Tactical and terrain considerations often required that different fire control elements be separated by up to several hundred feet. Observers telephoned their data to an officer, who manually entered it into the central computer, read off the results, and telephoned them to the gun installations. This communication system introduced both a time delay and the opportunity for error. The components needed tighter integration, and such a system required automatic data communications.

In the 1920s the Sperry Gyroscope Company led the field in data communications. Its experience came from Elmer Sperry's most successful invention, a true-north-seeking gyro for ships. A significant feature of the Sperry Gyrocompass was its ability to transmit heading data from a single central gyro to repeaters located at a number of locations around the ship. The repeaters, essentially follow-up servos, connected to another follow-up, which tracked the motion of the gyro without interference. These data transmitters had attracted the interest of the Navy, which needed a stable heading reference and a system of data communication for its own fire control problems. In 1916, Sperry built a fire control system for the Navy which, although it placed minimal emphasis on automatic computing, was a sophisticated distributed data system. By 1920 Sperry had installed these systems on a number of U.S. battleships.

Because of the Sperry Company's experience with fire control in the Navy, as well as Elmer Sperry's earlier work with the goniometer and the pretelemeter, the Army approached the company for help with data transmission for anti-aircraft fire control. To Elmer Sperry, it looked like an easy problem: the calculations resembled those in a naval application, but the physical platform, unlike a ship at sea, was anchored to the ground. Sperry engineers visited Wilson at the Frankford Arsenal in 1925, and Elmer Sperry followed up with a letter expressing his interest in working on the problem. He stressed his company's experience with naval problems, as well as its recent developments in bombsights, "work from the other end of the proposition." Bombsights had to incorporate numerous parameters of wind, groundspeed, airspeed, and ballistics, so an anti-aircraft gun director was in some ways a reciprocal bombsight. In fact, part of the reason anti-aircraft fire control equipment worked at all was that it assumed attacking bombers had to fly straight and level to line up their bombsights.
Elmer Sperry's interests were warmly received, and in 1925 and 1926 the Sperry Company built two data transmission systems for the Army's gun directors. The original director built at Frankford was designated T-1, or the "Wilson Director." The Army had purchased a Vickers director manufactured in England, but encouraged Wilson to design one that could be manufactured in this country. Sperry's two data transmission projects were to add automatic communications between the elements of both the Wilson and the Vickers systems (Vickers would eventually incorporate the Sperry system into its product). Wilson died in 1927, and the Sperry Company took over the entire director development from the Frankford Arsenal with a contract to build and deliver a director incorporating the best features of both the Wilson and Vickers systems. From 1927 to 1935, Sperry undertook a small but intensive development program in anti-aircraft systems. The company financed its engineering internally, selling directors in small quantities to the Army, mostly for evaluation, for only the actual cost of production [8]. Of the nearly 10 models Sperry developed during this period, it never sold more than 12 of any model; the average order was five. The Sperry Company offset some development costs by sales to foreign governments, especially Russia, with the Army's approval [9].

The T-6 Director

Sperry's modified version of Wilson's director was designated T-4 in development. This model incorporated corrections for air density, super-elevation, and wind. Assembled and tested at Frankford in the fall of 1928, it had problems with backlash and reliability in its predicting mechanisms. Still, the Army found the T-4 promising and after testing returned it to Sperry for modification. The company changed the design for simpler manufacture, eliminated two operators, and improved reliability. In 1930 Sperry returned with the T-6, which tested successfully. By the end of 1931, the Army had ordered 12 of the units. The T-6 was standardized by the Army as the M-2 director.

Since the T-6 was the first anti-aircraft director to be put into production, as well as the first one the Army formally procured, it is instructive to examine its operation in detail. A technical memorandum dated 1930 explained the theory behind the T-6 calculations and how the equations were solved by the system. Although this publication lists no author, it probably was written by Earl W. Chafee, Sperry's director of fire control engineering. The director was a complex mechanical analog computer that connected four three-inch anti-aircraft guns and an altitude finder into an integrated system (see Fig. 1). Just as with Sperry's naval fire control system, the primary means of connection were "data transmitters," similar to those that connected gyrocompasses to repeaters aboard ship.

The director takes three primary inputs. Target altitude comes from a stereoscopic range finder. This device has two telescopes separated by a baseline of 12 feet; a single operator adjusts the angle between them to bring the two images into coincidence. Slant range, or the raw target distance, is then corrected to derive its altitude component. Two additional operators, each with a separate telescope, track the target, one for azimuth and one for elevation. Each sighting device has a data transmitter that measures angle or range and sends it to the computer.
The computer receives these data and incorporates manual adjustments for wind velocity, wind direction, muzzle velocity, air density, and other factors. The computer calculates three variables: azimuth, elevation, and a setting for the fuze. The latter, manually set before loading, determines the time after firing at which the shell will explode. Shells are not intended to hit the target plane directly but rather to explode near it, scattering fragments to destroy it.

The director performs two major calculations. First, prediction models the motion of the target and extrapolates its position to some time in the future. Prediction corresponds to "leading" the target. Second, the ballistic calculation figures how to make the shell arrive at the desired point in space at the future time and explode, solving for the azimuth and elevation of the gun and the setting on the fuze. This calculation corresponds to the traditional artilleryman's task of looking up data in a precalculated "firing table" and setting gun parameters accordingly. Ballistic calculation is simpler than prediction, so we will examine it first.

The T-6 director solves the ballistic problem by directly mechanizing the traditional method, employing a "mechanical firing table." Traditional firing tables printed on paper show solutions for a given angular height of the target, for a given horizontal range, and a number of other variables. The T-6 replaces the firing table with a Sperry "ballistic cam." A three-dimensionally machined cone-shaped device, the ballistic cam or "pin follower" solves a pre-determined function. Two independent variables are input by the angular rotation of the cam and the longitudinal position of a pin that rests on top of the cam. As the pin moves up and down the length of the cam, and as the cam rotates, the height of the pin traces a function of two variables: the solution to the ballistics problem (or part of it). The T-6 director incorporates eight ballistic cams, each solving for a different component of the computation, including superelevation, time of flight, wind correction, muzzle velocity, and air density correction. Ballistic cams represented, in essence, the stored data of the mechanical computer. Later directors could be adapted to different guns simply by replacing the ballistic cams with a new set, machined according to different firing tables. The ballistic cams comprised a central component of Sperry's mechanical computing technology. The difficulty of their manufacture would prove a major limitation on the usefulness of Sperry directors.

The T-6 director performed its other computational function, prediction, in an innovative way as well. Though the target came into the system in polar coordinates (azimuth, elevation, and range), targets usually flew a constant trajectory (it was assumed) in rectangular coordinates, i.e., straight and level. Thus, it was simpler to extrapolate to the future in rectangular coordinates than in the polar system. So the Sperry director projected the movement of the target onto a horizontal plane, derived the velocity from changes in position, added a fixed time multiplied by the velocity to determine a future position, and then converted the solution back into polar coordinates. This method became known as the "plan prediction method" because of the representation of the data on a flat "plan" as viewed from above; it was commonly used through World War II. In the plan prediction method, "the actual movement of the target is mechanically reproduced on a small scale within the computer and the desired angles or speeds can be measured directly from the movements of these elements."
Together, the ballistic and prediction calculations form a feedback loop. Operators enter an estimated "time of flight" for the shell when they first begin tracking. The predictor uses this estimate to perform its initial calculation, which feeds into the ballistic stage. The output of the ballistic calculation then feeds back an updated time-of-flight estimate, which the predictor uses to refine the initial estimate. Thus "a cumulative cycle of correction brings the predicted future position of the target up to the point indicated by the actual future time of flight."
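To make the plan prediction method and its time-of-flight feedback loop concrete, here is a minimal numerical sketch in Python (an added illustration of the geometry, not a model of Sperry's mechanism, which was mechanical; the time_of_flight function and all numbers below are hypothetical stand-ins for the firing-table data stored in the ballistic cams):

```python
import math

def time_of_flight(horizontal_range, altitude):
    """Hypothetical stand-in for the ballistic cams: shell flight time as a
    function of target range and altitude (not real firing-table data)."""
    return math.hypot(horizontal_range, altitude) / 800.0  # assumed ~800 ft/s mean shell speed

def plan_predict(az1, r1, az2, r2, dt, altitude, tof_guess=10.0):
    """Plan prediction: project two polar sightings (azimuth in radians,
    horizontal range in feet) onto the horizontal 'plan', derive velocity,
    extrapolate by the time of flight, and convert back to polar. The time
    of flight is refined by a cumulative cycle of correction, as in the T-6."""
    x1, y1 = r1 * math.sin(az1), r1 * math.cos(az1)
    x2, y2 = r2 * math.sin(az2), r2 * math.cos(az2)
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt        # target assumed straight and level
    tof = tof_guess
    for _ in range(10):                             # cumulative cycle of correction
        xf, yf = x2 + vx * tof, y2 + vy * tof       # future position in plan view
        tof_new = time_of_flight(math.hypot(xf, yf), altitude)
        if abs(tof_new - tof) < 1e-3:
            break
        tof = tof_new
    return math.atan2(xf, yf), math.hypot(xf, yf), tof

az, rng, tof = plan_predict(0.50, 9000.0, 0.52, 8900.0, 1.0, 12000.0)
print(f"future azimuth {math.degrees(az):.2f} deg, range {rng:.0f} ft, t_f {tof:.2f} s")
```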
A square box about four feet on each side (see Fig. 2), the T-6 director was mounted on a pedestal on which it could rotate. Three crew would sit on seats and one or two would stand on a step mounted to the machine. The remainder of the crew stood on a fixed platform; they would have had to shuffle around as the unit rotated. This was probably not a problem, as the rotation angles were small. The director's pedestal mounted on a trailer, on which data transmission cables and the range finder could be packed for transportation.

We have seen that the T-6 computer took only three inputs, elevation, azimuth, and altitude (range), and yet it required nine operators. These nine did not include the operation of the range finder, which was considered a separate instrument, but only those operating the director itself. What did these nine men do?

Human Servomechanisms

To the designers of the director, the operators functioned as "manual servomechanisms." One specification for the machine required "minimum dependence on 'human element.'" The Sperry Company explained, "All operations must be made as mechanical and foolproof as possible; training requirements must visualize the conditions existent under rapid mobilization." The lessons of World War I ring in this statement; even at the height of isolationism, with the country sliding into depression, design engineers understood the difficulty of raising large numbers of trained personnel in a national emergency. The designers not only thought the system should account for minimal training and high personnel turnover, they also considered the ability of operators to perform their duties under the stress of battle. Thus, nearly all the work for the crew was in a "follow-the-pointer" mode: each man concentrated on an instrument with two indicating dials, one showing the actual and one the desired value for a particular parameter. With a hand crank, he adjusted the parameter to match the two dials.

Still, it seems curious that the T-6 director required so many men to perform this follow-the-pointer input. When the external rangefinder transmitted its data to the computer, it appeared on a dial and an operator had to follow the pointer to actually input the data into the computing mechanism. The machine did not explicitly calculate velocities. Rather, two operators (one for X and one for Y) adjusted variable-speed drives until their rate dials matched that of a constant-speed motor. When the prediction computation was complete, an operator had to feed the result into the ballistic calculation mechanism. Finally, when the entire calculation cycle was completed, another operator had to follow the pointer to transmit azimuth to the gun crew, who in turn had to match the train and elevation of the gun to the pointer indications.

Human operators were the means of connecting "individual elements" into an integrated system. In one sense the men were impedance amplifiers, and hence quite similar to servomechanisms in other mechanical calculators of the time, especially Vannevar Bush's differential analyzer. The term "manual servomechanism" itself is an oxymoron: by the conventional definition, all servomechanisms are automatic. The very use of the term acknowledges the existence of an automatic technology that will eventually replace the manual method. With the T-6, this process was already underway. Though the director required nine operators, it had already eliminated two from the previous generation T-4. Servos replaced the operator who fed back superelevation data and the one who transmitted the fuze setting. Furthermore, in this early machine one man corresponded to one variable, and the machine's requirement for operators corresponded directly to the data flow of its computation. Thus the crew that operated the T-6 director was an exact reflection of the algorithm inside it.

Why, then, were only two of the variables automated? This partial, almost hesitating automation indicates there was more to the human servo-motors than Sperry wanted to acknowledge. As much as the company touted "their duties are purely mechanical and little skill or judgment is required on the part of the operators," men were still required to exercise some judgment, even if unconsciously. The data were noisy, and even an unskilled human eye could eliminate complications due to erroneous or corrupted data. The mechanisms themselves were rather delicate, and erroneous input data, especially if it indicated conditions that were not physically possible, could lock up or damage the mechanisms. The operators performed as integrators in both senses of the term: they integrated different elements into a system.

Later Sperry Directors

When Elmer Sperry died in 1930, his engineers were at work on a newer generation director, the T-8. This machine was intended to be lighter and more portable than earlier models, as well as less expensive and "procurable in quantities in case of emergency." The company still emphasized the need for unskilled men to operate the system in wartime, and their role as system integrators. The operators were "mechanical links in the apparatus, thereby making it possible to avoid mechanical complication which would be involved by the use of electrical or mechanical servo motors." Still, army field experience with the T-6 had shown that servo-motors were a viable way to reduce the number of operators and improve reliability, so the requirements for the T-8 specified that wherever possible "electrical [servos] shall be used to reduce the number of operators to a minimum." Thus the T-8 continued the process of automating fire control, and reduced the number of operators to four. Two men followed the target with telescopes, and only two were required for follow-the-pointer functions. The other follow-the-pointers had been replaced by follow-up servos fitted with magnetic brakes to eliminate hunting. Several experimental versions of the T-8 were built, and it was standardized by the Army as the M3 in 1934. Throughout the remainder of the '30s Sperry and the army fine-tuned the director system in the M3.
Succeeding M3 models automated further, replacing the follow-the-pointers for target velocity with a velocity follow-up which employed a ball-and-disc integrator. The M4 series, standardized in 1939, was similar to the M3 but abandoned the constant-altitude assumption and added an altitude predictor for gliding targets. The M7, standardized in 1941, was essentially similar to the M4 but added full power control to the guns for automatic pointing in elevation and azimuth. These later systems had eliminated errors. Automatic setters and loaders did not improve the situation because of reliability problems. At the start of World War II, the M7 was the primary anti-aircraft director available to the army.

The M7 was a highly developed and integrated system, optimized for reliability and ease of operation and maintenance. As a mechanical computer, it was an elegant, if intricate, device, weighing 850 pounds and including about 11,000 parts. The design of the M7 capitalized on the strength of the Sperry Company: manufacturing of precision mechanisms, especially ballistic cams. By the time the U.S. entered the second world war, however, these capabilities were a scarce resource, especially for high volumes. Production of the M7 by Sperry and Ford Motor Company as subcontractor was a "real choke" and could not keep up with production of the 90mm guns, well into 1942. The army had also adopted an English system, known as the "Kerrison Director" or M5, which was less accurate than the M7 but easier to manufacture. Sperry redesigned the M5 for high-volume production in 1940, but passed in 1941.

Conclusion: Human Beings as System Integrators

The Sperry directors we have examined here were transitional, experimental systems. Exactly for that reason, however, they allow us to peer inside the process of automation, to examine the displacement of human operators by servomechanisms while the process was still underway. Skilled as the Sperry Company was at data transmission, it only gradually became comfortable with the automatic communication of data between subsystems. Sperry could brag about the low skill levels required of the operators of the machine, but in 1930 it was unwilling to remove them completely from the process. Men were the glue that held integrated systems together.

As products, the Sperry Company's anti-aircraft gun directors were only partially successful. Still, we should judge a technological development program not only by the machines it produces but also by the knowledge it creates, and by how that knowledge contributes to future advances. Sperry's anti-aircraft directors of the 1930s were early examples of distributed control systems, technology that would assume critical importance in the following decades with the development of radar and digital computers. When building the more complex systems of later years, engineers at Bell Labs, MIT, and elsewhere would incorporate and build on the Sperry Company's experience, grappling with the engineering difficulties of feedback, control, and the augmentation of human capabilities by technological systems.

My Translation: Original Text


ORIGINAL PAPER

Eggshell crack detection based on acoustic response and support vector data description algorithm

Hao Lin · Jie-wen Zhao · Quan-sheng Chen · Jian-rong Cai · Ping Zhou

Received: 21 May 2009 / Revised: 27 August 2009 / Accepted: 28 August 2009 / Published online: 22 September 2009. © Springer-Verlag 2009. Eur Food Res Technol (2009) 230:95-100. DOI 10.1007/s00217-009-1145-6

H. Lin · J. Zhao · Q. Chen · J. Cai · P. Zhou, School of Food and Biological Engineering, Jiangsu University, 212013 Zhenjiang, People's Republic of China. e-mail: zjw-205@; zhao_jiewen@. H. Lin, e-mail: linhaolt794@

Abstract: A system based on acoustic resonance and combined with pattern recognition was attempted to discriminate cracks in eggshell. Support vector data description (SVDD) was employed to solve the classification problem due to the imbalanced number of training samples. The frequency band was between 1,000 and 8,000 Hz. A recursive least squares adaptive filter was used to process the response signal. The signal-to-noise ratio of the acoustic impulse response was remarkably enhanced. Five characteristic descriptors were extracted from the response frequency signals, and some parameters were optimized in building the model. Experiment results showed that under the same conditions SVDD achieved better performance than conventional classification methods. The SVDD model achieved a crack detection level of 90% and a false rejection level of 10% in the prediction set. Based on the results, it can be concluded that the acoustic resonance system combined with SVDD has significant potential in the detection of cracked eggs.

Keywords: Eggshell · Crack · Detection · Acoustic resonance · Support vector data description

Introduction

In the egg industry, the presence of cracks in eggshells is one of the main defects of physical quality. Cracked eggs are very vulnerable to bacterial infections leading to health hazards [1]. It mostly results in significant economic loss in the egg industry. Recent research shows that it is possible to detect cracks in eggshells using acoustic response analysis [2-5]. Supervised pattern recognition models were also employed to discriminate intact and cracked eggs [6]. In these previous researches, training of discrimination models needs a considerable amount of intact egg samples and also corresponding defective ones. However, it is more difficult to acquire sufficient naturally cracked egg samples than intact ones. Artificial infliction of cracking in eggs is time-consuming and a waste. Moreover, artificially cracked eggs may not provide completely authentic information on naturally cracked ones. So, the traditional discrimination model shows poor performance when the numbers of samples from the two classes are seriously unbalanced, because the samples of the minority group cannot provide sufficient information to support the ultimate decision function.

Support vector data description (SVDD), which is inspired by the theory of the two-class support vector machine (SVM), is custom-tailored for one-class classification [7]. One-class classification is often used to deal with a two-class classification problem where each of the two classes has a special meaning [8]. The two classes in SVDD are the target class and the outlier class, respectively. The target class is assumed to be sampled well, and many (training) example objects are available. The outlier class can be sampled very sparsely, or can be totally absent. The basic idea of SVDD is to define a boundary around samples of the target class with a volume as small as possible [9]. SVDD has been used to solve the problem of unbalanced samples in the fields of machine fault diagnosis, intrusion detection in networks, recognition of handwritten digits, face recognition, etc. [10-13].

In this work, the algorithm of SVDD was employed to solve the classification problem of eggs due to the imbalanced number of samples. In addition, a recursive least squares (RLS) adaptive filter was used to enhance the signal-to-noise ratio. Some excitation resonant frequency characteristics of the signals were used as input vectors of the SVDD model to discriminate intact and cracked eggs.

Materials and methods

Samples preparation

All barn egg samples were collected naturally from a poultry farm, and they were intensively reared. These eggs were at most 3 days old when they were measured. As many as 130 eggs with intact shells and 30 eggs with cracks were measured. The sizes of eggs ranged from peewee to jumbo. Irregular eggs were not incorporated into the data analysis. The cracks, which were 10-40 mm long and less than 15 μm wide, were measured by a micrometer. Both intact and cracked samples were divided into two subsets. One of them, called the calibration set, was used to build a model, and the other one, called the prediction set, was used to test the robustness of the model. The calibration set contained 120 samples; the numbers of intact and cracked samples were 110 and 10, respectively. The remaining 40 samples constituted the prediction set, with 20 intact eggs and 20 cracked ones.

Experimental system

A system based on acoustic resonance was developed for the detection of cracks in eggshells. The system consists of a product support, a light exciting mechanism, a microphone, signal amplifiers, a personal computer (PC) and software to acquire and analyze the results. A schematic diagram of the system is presented in Fig. 1. [Fig. 1: Eggshell crack measurement system based on acoustic resonance analysis.] A pair of rolls made of hard rubber was used to support the eggs, and the shape of the support was fitted to normal eggshell surfaces. The excitation set included an electromagnetic driver, an adjustable-voltage DC power supply and a light metallic stick. The total mass of the stick was 6 g, and its length 6 cm. The excitation force is an important factor that affects the magnitude and width of the pulse. The adjustable-voltage DC power supply was used to control the excitation force. Based on previous tests, the excitation voltage was set at 30 V. In this case, optimal signals were achieved without instrumentation overload. The impacting position was close to the crack in the cracked eggshells, which were placed randomly among the intact eggshells.

Data acquisition and analysis

Response signals obtained from the microphone were amplified, filtered and captured by a 16-bit data acquisition card. The program of data acquisition was compiled based on LabVIEW 8.2 software (National Instruments, USA), which allows fast acquisition and processing of the response signal. The sampling rate was 22.05 kHz. The time signal was transformed to a frequency signal by using a 512-point fast Fourier transformation (FFT). The linear frequency spectrum was transformed to a power spectrum. A band-pass filter was used to preserve the information of the frequency band between 1,000 and 8,000 Hz, because the features of the response signals were legible in this frequency band and the signal-to-noise ratio here was also favorable.
Brief introduction of support vector data description (SVDD)

SVDD is inspired by the idea of SVM [14, 15]. It is a method of data domain description, also called one-class classification. The basic idea of SVDD is to envelop samples or objects within a high-dimensional space with a volume as small as possible, by fitting a hypersphere around the samples. A sketch map of SVDD in two dimensions is shown in Fig. 2. By introducing kernels, this inflexible model becomes much more powerful and can give reliable results when a suitable kernel is used [16]. The problem of SVDD is to find the center a and radius R giving the minimum volume of the hypersphere containing all samples x_i. For a data set containing N normal data objects, when one or a few very remote objects are included, a very large sphere is obtained, which will not represent the data very well. Therefore, we allow for some data points outside the sphere and introduce slack variables ξ_i. As a result, the minimization problem can be denoted in the following form:

$$\min L(R) = R^2 + C\sum_{i=1}^{N}\xi_i, \quad \text{s.t.}\ \|x_i - a\|^2 \le R^2 + \xi_i,\ \ \xi_i \ge 0\ (i = 1, 2, \ldots, N), \tag{1}$$

where the variable C gives the trade-off between simplicity (volume of the sphere) and the number of errors (number of target objects rejected). The above problem is usually solved by introducing Lagrange multipliers α_i and can be transformed into maximizing a function L with respect to the Lagrange multipliers. For an object x, we define

$$f^2(x) = \|x - a\|^2 = (x \cdot x) - 2\sum_{i=1}^{N}\alpha_i\,(x \cdot x_i) + \sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j\,(x_i \cdot x_j). \tag{2}$$

A test object x is accepted when this distance is smaller than the radius. Objects with nonzero α_i are called the support objects of the description, or the SVs. Objects lying outside the sphere are also called bounded support vectors (BSVs). When a sphere is not a good fit for the boundary of the data distribution, the inner product (x · y) is generalized by a kernel function $k(x, y) = \langle \varphi(x), \varphi(y)\rangle$, where a mapping φ of the data to a new feature space is applied. With such a mapping, Eq. (2) becomes

$$L = \sum_{i=1}^{N}\alpha_i\,k(x_i, x_i) - \sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j\,k(x_i, x_j), \quad \text{s.t.}\ 0 \le \alpha_i \le C,\ \sum_{i=1}^{N}\alpha_i = 1,\ a = \sum_i \alpha_i\varphi(x_i). \tag{3}$$

In brief, SVDD first maps the data, which are not linearly separable, into a high-dimensional feature space and then describes the data by the maximal-margin hypersphere.
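For readers who want to experiment with this idea, the following Python sketch uses scikit-learn's OneClassSVM, whose Gaussian-kernel ν-formulation is closely related to SVDD (for the RBF kernel the two descriptions are essentially equivalent). This is an added illustration, not the authors' dd_tools/Matlab code, and the data and parameter values are invented placeholders:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Placeholder descriptors: 110 intact ("target") eggs for calibration,
# 40 mixed eggs for prediction, 5 features each (cf. Table 1 below).
X_target = rng.normal(0.0, 1.0, size=(110, 5))
X_predict = rng.normal(0.4, 1.3, size=(40, 5))

# RBF (Gaussian) kernel; nu upper-bounds the fraction of target samples
# treated as errors (a role analogous to C in Eq. 1), and gamma sets the
# kernel width (analogous to 1/(2*sigma^2)).
model = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.05).fit(X_target)
labels = model.predict(X_predict)   # +1 = accepted as target (intact), -1 = outlier (cracked)
print((labels == -1).sum(), "samples flagged as outliers")
```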
Software

All data-processing algorithms were implemented with the statistical software Matlab 7.1 (Mathworks, USA) under Windows XP. SVDD Matlab codes were downloaded from http://www-ict.ewi.tudelft.nl/~davidt/dd_tools.html free of charge.

Result and discussion

Response signals

Since the acoustic response was an instantaneous impulse, it was difficult to discriminate between the different response signals of cracked and intact eggs in the time domain. The time-domain signals were transformed by FFT to frequency-domain signals for the next analysis. Typical power spectra of an intact egg and a cracked egg are shown in Fig. 3. [Fig. 3: Typical response frequency signals of eggs.] The areas under the spectral envelope for the intact eggs were smaller than those of the cracked eggs. For the intact eggs, the peak frequencies were prominent, generally found in the middle range (3,500-5,000 Hz). In contrast, the peak frequencies of cracked eggs were disperse and not prominent.

Adaptive RLS filtering

Since the detection of cracked eggshells is based on acoustic response measurement, it is vulnerable to interference from the surrounding noise. This fact is reinforced by the much damped behavior of agro-products [17]. Therefore, the response signal should be processed to remove noise before further analysis. Adaptive interference canceling is a standard approach to remove environmental noise [18, 19]. The RLS is a popular algorithm in the field of adaptive signal processing. In adaptive RLS filtering, the coefficients are adjusted from sample to sample to minimize the mean square error (MSE) between a measured noisy scalar signal and its modeled value from the filter [20, 21]. A scalar, real output signal, y_k, is measured at the discrete time k, in response to a set of scalar input signals x_k(i), i = 1, 2, ..., n, where n is an arbitrary number of filter taps. For this research, n is set to the number of degrees of freedom to ensure conformity of the resulting filter matrices. The input and output signals are related by the simple regression model:

$$y_k = \sum_{i=0}^{n-1} w(i)\,x_k(i) + e_k, \tag{4}$$

where e_k represents measurement error and w(i) represents the proportion that is contained in the primary scalar signal y_k. The implementation of the RLS algorithm is optimized by exploiting the matrix inversion lemma and provides fast convergence and small error rates [22]. System identification of a 32-coefficient FIR filter combined with adaptive RLS filtering was used to process the signals. The forgetting factor was 1, and the vector of initial filter coefficients was 0. Figure 4 shows the frequency signals before and after adaptive RLS filtering. [Fig. 4: Frequency signals before and after adaptive RLS filtering.]
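The RLS recursion described above can be written compactly in Python. The sketch below is an added illustration of the standard algorithm (32 taps, forgetting factor 1, zero initial coefficients, matching the settings reported in the text), not the authors' LabVIEW/Matlab implementation, and the test signals are invented:

```python
import numpy as np

def rls_filter(x, d, n_taps=32, lam=1.0, delta=100.0):
    """Standard RLS: adapt weights w so that w @ x_k tracks the desired
    signal d, minimizing the exponentially weighted squared error."""
    w = np.zeros(n_taps)              # initial filter coefficients = 0
    P = np.eye(n_taps) * delta        # estimate of the inverse correlation matrix
    y = np.zeros(len(d))
    for k in range(n_taps, len(d)):
        x_k = x[k - n_taps:k][::-1]   # most recent n_taps input samples
        y[k] = w @ x_k
        e = d[k] - y[k]               # a priori estimation error
        g = P @ x_k / (lam + x_k @ P @ x_k)   # gain vector
        w = w + g * e                 # coefficient update
        P = (P - np.outer(g, x_k @ P)) / lam  # inverse-correlation-matrix update
    return y, w

# Toy usage with invented data: a 4 kHz tone in noise, sampled at 22.05 kHz.
t = np.arange(4096) / 22050.0
clean = np.sin(2 * np.pi * 4000.0 * t)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(t.size)
y, w = rls_filter(noisy, clean)       # identify a filter mapping noisy -> clean
```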
Variable selection

Based on the differences in the frequency-domain response signals of intact and cracked eggs, five characteristic descriptors were extracted from the response frequency signals as the inputs of the discrimination model. These are shown in Table 1.

Table 1: Frequency characteristics selection and expression (low frequency band: 1,000-3,720 Hz; middle frequency band: 3,720-7,440 Hz)

X1: Value of the area of amplitude. $X_1 = \sum_{i=0}^{512} P_i$
X2: Value of the standard deviation of amplitude. $X_2 = \sqrt{\sum_i (P_i - \bar{P})^2}\,/\,n$
X3: Value of the frequency band of maximum amplitude. $X_3 = \mathrm{Index}_{\max}(P_i)$
X4: Mean of the top three frequency amplitude values. $X_4 = \mathrm{Max}_{1:3}(P_i)/3$
X5: Ratio of amplitude values of the middle frequency band to the low frequency band. $X_5 = \sum_{i=201}^{400} P_i \,/\, \sum_{i=1}^{200} P_i$
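A possible NumPy implementation of the five descriptors (an added sketch following Table 1; the bin ranges assume a 512-point spectrum with the low band at indices 1-200 and the middle band at 201-400, and the exact normalizations of the paper may differ):

```python
import numpy as np

def descriptors(signal, n_fft=512):
    """Compute X1..X5 of Table 1 from a 512-point power spectrum."""
    P = np.abs(np.fft.fft(signal, n_fft)) ** 2          # power spectrum P_i
    x1 = P.sum()                                        # X1: area of amplitude
    x2 = np.sqrt(np.sum((P - P.mean()) ** 2)) / P.size  # X2: spread, as printed in Table 1
    x3 = int(np.argmax(P[: n_fft // 2]))                # X3: band (index) of maximum amplitude
    x4 = np.sort(P)[-3:].mean()                         # X4: mean of the top three amplitudes
    low = P[1:201].sum()                                # low band (~1,000-3,720 Hz)
    mid = P[201:401].sum()                              # middle band (~3,720-7,440 Hz)
    x5 = mid / low                                      # X5: middle-to-low band ratio
    return np.array([x1, x2, x3, x4, x5])

features = descriptors(np.random.default_rng(2).standard_normal(2048))
```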
Parameter optimization in the SVDD model

The basic concept of SVDD is to map the original data X nonlinearly into a higher-dimensional feature space. The transformation into a higher-dimensional space is implemented by a kernel function [23]. So, the selection of the kernel function has a high influence on the performance of the SVDD model. Several kernel functions have been proposed for the SVDD classifier. Not all kernel functions are equally useful for SVDD. It has been demonstrated that the Gaussian kernel results in a tighter description and gives a good performance under general smoothness assumptions [24]. Thus, the Gaussian kernel was adopted in this study.

To obtain a good performance, the regularization parameter C and the kernel width σ have to be optimized. Parameter C determines the trade-off between minimizing the training error and minimizing model complexity. By using the Gaussian kernel, the data description transforms from a solid hypersphere to a Parzen density estimator. An appropriate selection of the width parameter σ of the Gaussian kernel is important for the density estimation of the target objects.

There is no systematic methodology for the optimization of these parameters. In this study, the procedure of optimization was carried out in two search steps. First, a comparatively large step length was used to search for optimal parameter values. Favorable results of the model were found with values of C between 0.005 and 0.1, and values of σ between 10 and 500. Therefore, a much smaller step length was employed for further searching of these parameters. In the second search step, 50 values of parameter σ with a step of 10 (σ = 10, 20, ..., 500) and 20 values of parameter C with a step of 0.005 (C = 0.005, 0.01, ..., 0.1) were tested simultaneously in building the model. Identification results of the SVDD model as influenced by the values of σ and C are shown in Fig. 5. [Fig. 5: Identification rates of SVDD models with different values of parameters σ and C.] The optimal model was achieved when σ was equal to 420 and C was equal to 0.085 or 0.09. Here, the identification rates of intact and cracked eggs were both 90% in the prediction set. Furthermore, it was found that the performance of the SVDD model could not be improved by a smaller search step.
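The two-step search can be reproduced with a plain nested grid search. In this added sketch, fit_and_score is a hypothetical callback that trains an SVDD on the calibration set with the given (σ, C) and returns the identification rate on the prediction set:

```python
import numpy as np

def second_step_search(fit_and_score):
    """Fine grid from the paper: sigma = 10, 20, ..., 500; C = 0.005, 0.010, ..., 0.1."""
    best_score, best_params = -np.inf, None
    for sigma in np.arange(10, 501, 10):
        for C in np.arange(0.005, 0.1005, 0.005):
            score = fit_and_score(sigma, C)   # hypothetical: train SVDD, score prediction set
            if score > best_score:
                best_score, best_params = score, (sigma, C)
    return best_score, best_params
```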
Comparison of discrimination models

A conventional two-class linear discriminant analysis (LDA) model and an SVM model were used for comparison to classify intact and cracked eggs. The Gaussian kernel was used as the kernel function of the SVM model. Parameters of the SVM model were also optimized as in SVDD. Table 2 shows the optimal results from the three discrimination models in the prediction set. The identification rates of intact eggs were both 100% in the LDA and SVM models, but 50% and 35% for cracked eggs, respectively. In other words, at least 50% of cracked eggs could not be identified by the conventional discrimination models. However, detection of cracked eggs is the task we focus on. The identification rates of intact and cracked eggs were both 90% in the SVDD model. Compared with conventional two-class discrimination models, the SVDD model showed superior performance in the discrimination of cracked eggs.

Table 2: Comparison of results from three discrimination models (identification rates in the prediction set, %)

Model: Intact eggs / Cracked eggs
LDA: 100 / 50
SVM: 100 / 35
SVDD: 90 / 90

LDA is a linear and parametric method with discriminating character. In terms of a set of discriminant functions, the classifier is said to assign an unknown example X to the corresponding class [25]. In the case of conventional LDA classification, the ultimate decision function is based on sufficient information support from two-class training samples. In general, such classification does not pay enough attention to the samples of the minority class when building the model. It is possible to obtain an inaccurate estimation of the centroid between the two classes. Conventional LDA classification always poorly describes the specific class with scarce training samples. Therefore, it is often impractical to solve the classification problem using a traditional LDA classifier in the case of an imbalanced number of training samples.

The basic concept of SVM is to map the original data X into a higher-dimensional feature space and find the 'optimal' hyperplane boundary to separate the two classes [26]. In SVM classification, the 'optimal' boundary is defined as the hyperplane most distant from both sets, which is also called the 'middle point' between the classification sets. This boundary is expected to be the optimal classification of the sets, since it is the best isolated from the two sets [27]. The margin is the minimal distance from the separating hyperplane to the closest data points [28]. In general, when the information support from both the positive and negative training sets is sufficient and equal, an appropriate separating hyperplane can be obtained. However, when the samples from one class are insufficient to support the separating hyperplane, the hyperplane ends up excessively close to this class. As a result, most of the unknown sets may be recognized as the other class. Therefore, compared with the other discrimination models, SVM showed the poorest performance in discriminating cracked eggs.

Differing from the conventional classification-based approach, SVDD is an approach for one-class classification. It focuses mainly on normal or target objects. SVDD can handle cases with only a few outlier objects. The advantage of SVDD is that the target class can be either one of the two training classes. The selection of the target class depends on the reliability of the information provided by the training samples. In general, the class containing more samples may provide sufficient information, and it can be selected as the target class [29]. Furthermore, SVDD can adapt to the real shape of the samples and find a flexible boundary with a minimum volume by introducing a kernel function. The boundary is described by a few training objects, the support vectors. It is possible to replace normal inner products with kernel functions and obtain more flexible data descriptions [30]. The width parameter σ can be set to give the desired number of support vectors. In addition, extra data in the form of outlier objects can be helpful to improve the performance of the SVDD model.

Conclusions

Detection of cracks in eggshells based on acoustic impulse resonance was attempted in this work. The SVDD method was employed for solving the classification problem where the samples of cracked eggs were not sufficient. The results indicated that detection of cracks in eggshells based on acoustic impulse resonance is feasible, and the SVDD model showed superior performance in contrast to conventional two-class discrimination models. It can be concluded that SVDD is an excellent method for classification problems with imbalanced sample numbers. The acoustic resonance technique combined with SVDD is a promising method for detecting cracked eggs. Some related ideas will be attempted for further improvement of the performance of the SVDD model in our future work, such as the following: (1) introduce new kernel functions, which can help to obtain a more flexible boundary; (2) try more methods for the selection of parameters to obtain optimal ones, since the parameters of kernel functions are closely related to the tightness of the constructed boundary and the target rejection rate, and appropriate parameters are important to improve the performance of SVDD models; (3) investigate the contribution of abnormal targets to the calibration model and develop a robust model with an excellent ability to deal with abnormal targets.

Acknowledgments: This work is a part of the National Key Technology R&D Program of China (Grant No. 2006BAD11A12). We are grateful to the Web site http://www-ict.ewi.tudelft.nl/~davidt/dd_tools.html, where we downloaded the SVDD Matlab codes free of charge.

References

1. Lin J, Puri VM, Anantheswaran RC (1995) Trans ASAE 38(6):1769-1776
2. Cho HK, Choi WK, Paek JK (2000) Trans ASAE 43(6):1921-1926
3. De Ketelaere B, Coucke P, De Baerdemaeker J (2000) J Agr Eng Res 76:157-163
4. Coucke P, De Ketelaere B, De Baerdemaeker J (2003) J Sound Vib 266:711-721
5. Wang J, Jiang RS (2005) Eur Food Res Technol 221:214-220
6. Jindal VK, Sritham E (2003) ASAE Annual International Meeting, USA
7. Tax DMJ, Duin RPW (1999) Pattern Recognit Lett 20:1191-1199
8. Pan Y, Chen J, Guo L (2009) Mech Syst Signal Process 23:669-681
9. Lee SW, Park JY, Lee SW (2006) Pattern Recognit 39:1809-1812
10. Podsiadlo P, Stachowiak GW (2006) Tribol Int 39:1624-1633
11. Sanchez-Hernandez C, Boyd DS, Foody GM (2007) Ecol Inf 2:83-88
12. Liu YH, Lin SH, Hsueh YL, Lee MJ (2009) Expert Syst Appl 36:1978-1998
13. Cho HW (2009) Expert Syst Appl 36:434-441
14. Tax DMJ, Duin RPW (2001) J Mach Learn Res 2:155-173
15. Tax DMJ, Duin RPW (2004) Mach Learn 54:45-66
16. Guo SM, Chen LC, Tsai JHS (2009) Pattern Recognit 42:77-83
17. De Ketelaere B, Maertens K, De Baerdemaeker J (2004) Math Comput Simul 65:59-67
18. Adall T, Ardalan SH (1999) Comput Elect Eng 25:1-16
19. Madsen AH (2000) Signal Process 80:1489-1500
20. Chase JG, Begoc V, Barroso LR (2005) Comput Struct 83:639-647
21. Wang X, Feng GZ (2009) Signal Process 89:181-186
22. Djigan VI (2006) Signal Process 86:776-791
23. Bu HG, Wang J, Huang XB (2009) Eng Appl Artif Intell 22:224-235
24. Tao Q, Wu GW, Wang J (2005) Pattern Recognit 38:1071-1077
25. Xie JS, Qiu ZD (2007) Pattern Recognit 40:557-562
26. Devos O, Ruckebusch C, Durand A, Duponchel L, Huvenne JP (2009) Chemom Intell Lab Syst 96:27-33
27. Liu X, Lu WC, Jin SL, Li YW, Chen NY (2006) Chemom Intell Lab Syst 82:8-14
28. Chen QS, Zhao JW, Fang CH, Wang DM (2007) Spectrochim Acta Pt A Mol Biomol Spectrosc 66:568-574
29. Huang WL, Jiao LC (2008) Prog Nat Sci 18:455-461
30. Foody GM, Mathur A, Sanchez-Hernandez C, Boyd DS (2006) Remote Sens Environ 104:1-14

Graduation Design Chinese-English Translation [Template]


English: The Road (Highway)

The road is a kind of linear construction used for travel.

It is made up of the roadbed, the road surface, bridges, culverts and tunnels. In addition, it also includes grade crossings, protective works, traffic engineering and route facilities.

The roadbed is the foundation of the road surface, road shoulders, side slopes and side ditches. It is a stone-material structure designed according to the route's plane position. As the base for travel, the roadbed must have enough strength and stability to resist erosion by water and other natural hazards. The road surface is the surface layer of the road, a single-layer or multi-layer structure built with mixed materials.

The road surface is required to be smooth and to have enough strength, good stability and anti-slip properties. The quality of the road surface directly affects the safety, comfort and flow of traffic.

Graduation Design English Translation (English)


Industrial Power Plants and Steam Systems

Steam power plants comprise the major generating and process steam sources throughout the world today. Internal-combustion-engine and hydro plants generate less electricity and steam than steam power plants. For this reason we will give our initial attention in this book to steam power plants and their design application.

In the steam power field, two major types of plants serve the energy needs of customers: industrial plants, for factories and other production facilities, and central-station utility plants, for residential, commercial and industrial demands. Of these two types of plants, the industrial power plant probably has more design variations than the utility plant. The reason for this is that the demands of industrial customers tend to be more varied than the demands of the typical utility customer.

To assist the power-plant designer in better understanding the variations in plant design, industrial power plants are considered first in this book. And to provide the widest design variables, a power plant serving several process operations and all utilities is considered.

In the usual industrial power plant, a steam generation and distribution system must be capable of responding to a wide range of operating conditions, and often must be more reliable than the plant's electrical system. The system design is often the last to be settled but the first needed for equipment procurement and plant startup. Because of these complications the power plant design evolves slowly, changing over the life of a project.

Process steam loads

Steam is a source of power and heating, and may be involved in process reactions. Its applications include serving as stripping, fluidizing, agitating, atomizing, ejector-motive and direct-heating steam. Its quantities, pressure levels and degrees of superheat are set by such process needs.

As reaction steam, it becomes a part of the process kinetics, as in H2, ammonia and coal-gasification plants. Although such plants may generate all the steam needed, steam from another source must be provided for startup and backup.

The second major process consumption of steam is for indirect heating, such as in distillation-tower reboilers, amine-system reboilers, process heaters, pipe tracing and building heating. Because the fluids in these applications generally do not need to be above 350°F, steam is a convenient heat source.

Again, the quantities of steam required for these services are set by the process design of the facility. There are many options available to the process designer in supplying some of these low-level heat requirements, including heat-exchange systems and circulating heat-transfer-fluid systems, as well as steam and electricity. The selection of an option is made early in the design stage and is based predominantly on economic trade-off studies.

Generating steam from process heat affords a means of increasing the overall thermal efficiency of a plant. After providing for the recovery of all the heat possible via exchangers, the process designer may be able to reduce cooling requirements by making provisions for the generation of low-pressure (50-150 psig) steam. Although generation at this level may be feasible from a process-design standpoint, the impact of this on the overall steam balance must be considered, because low-pressure steam is excessive in most steam balances, and the generation of additional quantities may worsen the design.
Decisions of this type call for close coordination between the process and utility engineers.

Steam is often generated in the convection section of fired process heaters in order to improve a plant's thermal efficiency. High-pressure steam can be generated in the furnace convection section of process heaters which have radiant heat duty only.

Adding a selective-catalytic-reduction unit for the purpose of lowering NOx emissions may require the generation of waste-heat steam to maintain the correct operating temperature in the catalytic-reduction unit.

Heat from the incineration of waste gases represents still another source of process steam. Waste-heat flues from the CO boilers of fluid catalytic crackers and from fluid-coking units, for example, are hot enough to provide the highest pressure level in a steam system.

Selecting pressure and temperature levels

The selection of pressure and temperature levels for a process steam system is based on: (1) moisture content in condensing-steam turbines, (2) metallurgy of the system, (3) turbine water rates, (4) process requirements, (5) water treatment costs, and (6) type of distribution system.

Moisture content in condensing-steam turbines - The selection of pressure and temperature levels normally starts with the premise that somewhere in the system there will be a condensing turbine. Consequently, the pressure and temperature of the steam must be selected so that its moisture content in the last row of turbine blades will be less than 10-13%. In high-speed turbines, a moisture content of 10% or less is desirable. This restriction is imposed in order to minimize erosion of the blades by water particles. This, in turn, means that there will be a minimum superheat for a given pressure level, turbine efficiency and condenser pressure for which the system can be designed.

System metallurgy - A second pressure-temperature concern in selecting the appropriate steam levels is the limitation imposed by metallurgy. Carbon-steel flanges, for example, are limited to a maximum temperature of 750°F because of the threat of graphite (carbides) precipitating at grain boundaries. Hence, at 600 psig and less, carbon-steel piping is acceptable in steam distribution systems. Above 600 psig, alloy piping is required. In a 900- to 1,500-psig steam system, the piping must be either a 1/2 carbon-1/2 molybdenum or a 1/2 chromium-1/2 molybdenum alloy.

Turbine water rates - Steam requirements for a turbine are expressed as water rate, i.e., lb of steam/bhp-h, or lb of steam/kWh. Actual water rate is a function of two factors: theoretical water rate and turbine efficiency. The first is directly related to the energy difference between the inlet and outlet of a turbine, based on the isentropic expansion of the steam. It is, therefore, a function of the turbine inlet and outlet pressures and temperatures. The second is a function of the size of the turbine and the steam pressure at the inlet, and of turbine operation (i.e., whether the turbine condenses steam, or exhausts some of it to an intermediate pressure level). From an energy standpoint, the higher the pressure and temperature, the higher the overall cycle efficiency.

Process requirements - When steam levels are being established, consideration must be given to process requirements other than those for turbine drivers. For example, steam for process heating will have to be at a high enough pressure to prevent process fluids from leaking into the steam.
Steam for pipe tracing must be at a certain minimum pressure so that low-pressure condensate can be recovered.

Water treatment costs - The higher the steam pressure, the costlier the boiler feedwater treatment. Above 600 psig, the feedwater almost always must be demineralized; below 600 psig, softening may be adequate. The water may have to be of high quality if the steam is used in the process, such as in reactions over a catalyst bed (e.g., in hydrogen production).

Type of distribution system - There are two types of systems: local, as exemplified by powerhouse distribution; and complex, by which steam is distributed to many units in a process plant. For a small local system, it is not impractical from a cost standpoint for steam pressures to be in the 600-1,500-psig range. For a large system, maintaining pressures within the 150-600-psig range is desirable because of the cost of meeting the alloy requirements for a higher-pressure steam distribution system.

Because of all these foregoing factors, the steam system in a chemical process complex or oil refinery frequently ends up as a three-level arrangement. The highest level, 600 psig, serves primarily as a source of power. The intermediate level, 150 psig, is ideally suitable for small emergency turbines, tracing off the plot, and process heating. The low level, normally 50 psig, can be used for heating services, tracing within the plot, and process requirements. A higher fourth level is normally not justified, except in special cases, as when a large amount of electric power must be generated.

Whether or not an extraction turbine will be included in the process will have a bearing on the intermediate pressure level selected, because the extraction pressure should be less than 50% of the high-pressure level, to take into account the pressure drop through the throttle valve and the nozzles of the high-pressure section of the turbine.

Drivers for pumps and compressors

The choice between a steam and an electric driver for a particular pump or compressor depends on a number of things, including the operational philosophy. In the event of a power failure, it must be possible to shut down a plant orderly and safely if normal operation cannot be continued. For an orderly and safe shutdown, certain services must be available during a power failure: (1) instrument air, (2) cooling water, (3) relief and blowdown pump-out systems, (4) boiler feedwater pumps, (5) boiler fans, (6) emergency power generators, and (7) fire water pumps. These services are normally supplied by steam or diesel drivers because a plant's steam or diesel emergency system is considered more reliable than an electrical tie-line.

The procedure for shutting down process units must be analyzed for each type of process plant and specific design. In general, the following represent the minimum services for which spare pumps driven by steam must be provided: column reflux, bottoms and purge-oil circulation, and heater charging. Most important is to maintain cooling; next, to be able to safely pump the plant's inventory into tanks. Driver selection cannot be generalized; a plan and procedure must be developed for each process unit.

The control required for a process is at times another consideration in the selection of a driver. For example, a compressor may be controlled via flow or suction pressure. The ability to vary driver speed, easily obtained with a steam turbine, may be the basis for selecting a steam driver instead of a constant-speed induction electric motor.
This is especially important when the molecular weight of the gas being compressed may vary, as in catalytic-cracking and catalytic-reforming processes.

In certain types of plants, gas flow must be maintained to prevent uncontrollable high-temperature excursions during shutdown. For example, hydrocrackers are purged of heavy hydrocarbons with recycle gas to prevent the exothermic reactions from producing high bed temperatures. Steam-driven compressors can do this during a power failure. Each process operation must be analyzed from such a safety viewpoint when selecting drivers for critical equipment.

The size of a relief and blowdown system can be reduced by installing steam drivers. In most cases, the size of such a system is based on a total power failure. If heat-removal services are powered by steam drivers, the relief system can be smaller. For example, a steam driver will maintain flow in the pump-around circuit for removing heat from a column during a power failure, reducing the relief load imposed on the flare system. Equipment support services (such as lubrication and seal-oil systems for compressors) that could be damaged during a loss of power should also be powered by steam drivers.

Driver size can also be a factor. An induction electric motor requires large starting currents - typically six times the normal load. The drop in voltage caused by the startup of such a motor imposes a heavy transient demand on the electrical distribution system. For this reason, drivers larger than 10,000 hp are normally steam turbines, although synchronous motors as large as 25,000 hp are used.

The reliability of life-support facilities - e.g., building heat, potable water, pipe tracing, emergency lighting - during power failures is of particular concern in cold climates. In such cases, at least one boiler should be equipped with steam-driven auxiliaries to provide these services.

Lastly, steam drivers are also selected for the purpose of balancing steam systems and avoiding large amounts of letdown between steam levels. Such decisions regarding drivers are made after the steam balances have been refined and the distribution system has been fully defined. There must be sufficient flexibility to allow balancing the steam system under all operating conditions.

Selecting steam drivers

After the number of steam drivers and their services have been established, the utility or process engineer will estimate the steam consumption for making the steam balance. The standard method of doing this is to use the isentropic expansion of steam corrected for turbine efficiency. Actual steam consumption by a turbine is determined via:

SR = (TSR)(bhp)/E

Here, SR = actual steam rate, lb/h; TSR = theoretical steam rate, lb/bhp-h; bhp = turbine brake horsepower; and E = turbine efficiency.

When exhaust steam can be used for process heating, the highest thermodynamic efficiency can be achieved by means of backpressure turbines. Large drivers, which are of high efficiency and require low theoretical steam rates, are normally supplied from the high-pressure header, thus minimizing steam consumption. Small turbines that operate only in emergencies can be allowed to exhaust to atmosphere. Although their water rates are poor, the water lost in short-duration operations may not represent a significant cost. Such turbines obviously play a small role in steam balance planning.
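A minimal sketch of the actual-steam-rate formula above, with illustrative assumed numbers for the theoretical steam rate, brake horsepower and efficiency:

```python
def actual_steam_rate(tsr, bhp, efficiency):
    """SR = (TSR)(bhp)/E: actual steam rate in lb/h.

    tsr        -- theoretical steam rate, lb/bhp-h
    bhp        -- turbine brake horsepower
    efficiency -- overall turbine efficiency, 0..1
    """
    return tsr * bhp / efficiency

# Illustrative values only (not from the article's tables):
tsr = 12.0    # lb/bhp-h, from the isentropic calculation above
bhp = 1_000   # driver size
eff = 0.65    # assumed turbine efficiency

print(f"Actual steam rate: {actual_steam_rate(tsr, bhp, eff):,.0f} lb/h")
# -> roughly 18,500 lb/h charged against the supplying header
```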
Constructing steam balances

After the process and steam-turbine demands have been established, the next step is to construct a steam balance for the chemical complex or oil refinery. A sample balance is shown in Fig. 1-4. It shows steam production and consumption, the header systems, letdown stations, and the boiler plant. It illustrates a normal (winter) case. It should be emphasized that there is not one balance but a series, representing a variety of operating modes. The object of the balances is to determine the design basis for establishing boiler size, letdown-station and deaerator capacities, boiler feedwater requirements, and steam flows in various parts of the system.

The steam balance should cover the following operating modes: normal, all units operating; winter and summer conditions; shutdown of major units; startup of major units; loss of the largest condensate source; power failure with flare in service; loss of large process steam generators; and variations in consumption by large steam users. From 50 to 100 steam balances could be required to adequately cover all the major impacts on the steam system of a large complex.

At this point, the general basis of the steam system design should have been developed by the completion of the following work:
1. All significant loads have been examined, with particular attention focused on those for which there is relatively little design freedom - i.e., reboilers, sparing steam for process units, large turbines required because of electric power limitations and for shutdown safety.
2. Loads have been listed for which the designer has some liberty in selecting drivers. These selections are based on analyses of cost competitiveness.
3. Steam pressure and temperature levels have been established.
4. The site plan has been reviewed to ascertain where it is not feasible to deliver steam or recover condensate, because piping costs would be excessive.
5. Data on the process units are collected according to the pressure level and use of steam - i.e., for the process, condensing drivers and backpressure drivers.
6. After Step 5, the system is balanced by trial-and-error calculations or computerized techniques to determine boiler, letdown, deaerator and boiler feedwater requirements.
7. Because the possibility of an electric power failure normally imposes one of the major steam requirements, normal operation and the eventuality of such a failure must both be investigated, as a minimum.

Checking the design basis

After the foregoing steps have been completed, the following should be checked:

Boiler capacity - Installed boiler capacity would be the maximum calculated (with an allowance of 10-20% for uncertainties in the balance), corrected for the number of boilers operating (and on standby). The balance plays a major role in establishing normal-case boiler specifications, both number and size. Maximum firing typically is based on the emergency case. Normal firing typically establishes the number of boilers required, because each boiler will have to be shut down once a year for the code-required drum inspection. Full-firing levels of the remaining boilers will be set by the normal steam demand. The number of units required (e.g., three 50% units, four 33% units, etc.) in establishing installed boiler capacity is determined from cost studies. It is generally considered double-jeopardy design to assume that a boiler will be out of service during a power failure.
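As a rough illustration of the boiler-capacity check, the following sketch sizes installed capacity from a set of balance cases; the demand figures and the 15% uncertainty allowance are assumed for illustration, and a real study would draw them from the full set of 50-100 balances.

```python
# Hypothetical steam-balance cases, lb/h of total boiler steam demand.
cases = {
    "normal winter": 1_500_000,
    "normal summer": 1_250_000,
    "power failure, flare in service": 2_200_000,  # emergency case
    "major unit startup": 1_700_000,
}

allowance = 1.15  # 10-20% allowance for uncertainties in the balance
design_demand = max(cases.values()) * allowance

# Try unit counts: split the design demand n ways, then require that the
# normal load still be met with one boiler down for drum inspection.
normal = cases["normal winter"]
for n in (3, 4, 5):
    unit_size = design_demand / n
    ok = (n - 1) * unit_size >= normal
    print(f"{n} x {unit_size:,.0f} lb/h units; "
          f"normal load covered with one down: {ok}")
```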
Minimum boiler turndown - Most fuel-fired boilers can be operated down to approximately 20% of the maximum continuous rate; the minimum load should not be expected to be below this level.

Differences between normal and maximum loads - If the maximum load results from an emergency (such as a power failure), consideration should be given to shedding process steam loads under this condition in order to minimize installed boiler capacity. However, the consequences of shedding should be investigated by the process designer and the operating engineers to ensure the safe operation of the entire process.

Low-level steam consumption - The key to any steam balance is the disposition of low-level steam. Surplus low-level steam can be reduced only by including more condensing steam turbines in the system, or by devising more process applications for it, such as absorption refrigeration for cooling process streams and Rankine-cycle systems for generating power. In general, balancing the supply and consumption of low-level steam is a critical factor in the design of the steam system.

Quantity of steam at pressure-reducing stations - Because useful work is not recovered from the steam passing through a pressure-reducing station, such flow should be kept at a minimum. In the Fig. 1-5 150/50-psig station, a flow of only 35,000 lb/h was established as normal for this steam-balance case (normal, winter). The loss of steam users on the 50-psig system should be considered, particularly of the large users, because a shutdown of one may demand that the 150/50-psig station close off beyond its controllable limit. If this happened, the 50-psig header would be out of control, and an immediate pressure buildup in the header would begin, setting off the safety relief valves. The station's full-open capacity should also be checked to ensure that it can make up any 50-psig steam that may be lost through the shutdown of a single large 50-psig source (a turbine sparing a large electric motor, for example). It would be undesirable for the station to be sized so that it opens more than 80%. In some cases, rangeability requirements may dictate two valves (one small and one large).

Intermediate pressure level - If large steam users or suppliers may come on stream or go off, the normal (day-to-day) operation should be checked. No such change in normal operation should result in a significant upset (e.g., relief valves set off, or the system pressure control lost). If a large load is lost, the steam supply should be reduced by the letdown station. If the load suddenly increases, the 600/150-psig station must be capable of supplying the additional steam. If steam generated via the process disappears, the station must be capable of making up the load. If 150-psig steam is generated unexpectedly, the 600/150-psig station must be able to handle the cutback. The important point here is that where the steam flow could rise to 700,000 lb/h, this flow should be reduced by a cutback at the 600/150-psig station, not by an increase in the flow to the lower-pressure level, because this steam would have nowhere to go. The normal (600/150-psig) letdown station must be capable of handling some of the negative load swings, even though, overall, this letdown needs to be kept to a minimum.

On the other hand, shortages of steam at the 150-psig level can be made up relatively easily via the 600/150-psig station. Such shortages are routinely small in quantity or duration, or both (startup, purging, electric-drive maintenance, process unit shutdown, etc.).
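The following sketch, with assumed capacities, illustrates the two checks described above for a 150/50-psig letdown station: the valve should ride well below about 80% open at normal flow, and its full-open capacity should cover the loss of a single large 50-psig source.

```python
# Letdown-station check for a hypothetical 150/50-psig station.
normal_flow = 35_000            # lb/h, normal-case flow from the balance
largest_50psig_source = 60_000  # lb/h, assumed large turbine exhaust
valve_capacity = 120_000        # lb/h at full open (assumed sizing)

# Assume flow roughly proportional to opening for this linear sketch.
opening_normal = normal_flow / valve_capacity
opening_upset = (normal_flow + largest_50psig_source) / valve_capacity

print(f"Opening at normal flow: {opening_normal:.0%}")
print(f"Opening if largest 50-psig source trips: {opening_upset:.0%}")
if opening_upset > 0.80:
    print("Valve would open beyond 80%: increase station capacity")
if opening_normal < 0.10:
    print("Normal flow is a small fraction of capacity: rangeability may "
          "dictate two valves (one small, one large)")
```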
High-pressure level - Checking the high-pressure level is generally more straightforward, because rate control takes place directly at the boilers. Firing can be increased or lowered to accommodate a shortage or surplus.

Typical steam-balance cases

The Fig. 1-4 steam balance represents a steady-state condition: winter operation, all process units operating, and no significant unusual demands for steam. An analysis similar to the foregoing might also be required for the normal summertime case, in which a single upset must not jeopardize control but the load may be less (no tank heating, pipe tracing, etc.). The balance representing an emergency (e.g., loss of electric power) is significant. In this case, the pertinent test is the system's ability simply to weather the upset, not to maintain normal, stable operation. The maximum relief pressure that would develop in any of the headers represents the basis for sizing relief valves. The loss of boiler feedwater or condensate return, or both, could result in a major upset, or even a shutdown.

Header pressure control during upsets

At the steady-state conditions associated with the multiplicity of balances, boiler capacity can be adjusted to meet user demands. However, boiler load cannot be changed quickly to accommodate a sharp upset; the response rate is typically limited to 20% of capacity per minute. Therefore, other elements must be relied on to control header pressures during transient conditions. The roles of several such elements in controlling pressures in the three main headers during transient conditions are listed in Table 1-3. A control system having these elements will result in a steam system capable of dealing with the transient conditions experienced in moving from one balance point to another.
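To make the transient limitation concrete, here is a toy simulation (all numbers assumed) of a high-pressure header after the sudden loss of a 150,000-lb/h steam producer, with boiler firing ramp-limited to 20% of capacity per minute; the pressure-capacitance constant is illustrative, not from the article.

```python
# Toy header-pressure transient: a supply deficit integrates into a
# pressure drop until the boilers ramp up to close the gap.
boiler_cap = 1_500_000  # lb/h, installed capacity (assumed)
ramp_limit = 0.20 * boiler_cap / 60.0  # extra lb/h of firing per second
k_header = 4.0e-6       # psi per (lb/h)-second of imbalance (assumed)

supply = 1_200_000      # lb/h before the upset
demand = 1_350_000      # lb/h after losing a 150,000-lb/h producer
pressure = 600.0        # psig, initial header pressure

for t in range(0, 181):  # three minutes, 1-s steps
    deficit = demand - supply
    pressure -= k_header * deficit             # header depressures on deficit
    supply = min(demand, supply + ramp_limit)  # boilers ramp at 20%/min max
    if t % 30 == 0:
        print(f"t={t:3d} s  supply={supply:,.0f} lb/h  P={pressure:.1f} psig")
```

Other elements (letdown stations, load shedding, relief valves) act far faster than the boilers, which is why they carry the header through the first seconds of such an upset.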
Tracking steam balances

Because of schedule constraints, steam balances and boiler size are normally established early in the design stage. These determinations are based on assumptions regarding turbine efficiencies, process steam generated in waste-heat furnaces, and other quantities of steam that depend on purchased equipment. Therefore, a sufficient number of steam balances should be tracked through the design period to ensure that the equipment purchased will satisfy the original design concept of the steam system. This tracking represents an excellent application for a utility database system and a linear-programming model of the steam system. During the course of the mechanical design of one large grass-roots complex, 40 steam balances were continuously updated for changes in steam loads via such an application.

Cost tradeoffs

To design an efficient but least-expensive system, the designer ideally develops a total minimum-cost curve, incorporating all the pertinent costs related to capital expenditures, installation, fuel, utilities, operations and maintenance, and performs a cost study of the final system. However, because the designer is under the constraint of keeping to a project schedule, major, highly expensive equipment must be ordered early in the project, when many key parts of the design puzzle are not available (e.g., a complete load summary, turbine water rates, equipment efficiencies and utility costs). A practical alternative is to rely on comparative-cost estimates, as are conventionally used in assisting with engineering decision points. This approach is particularly useful in making early equipment selections when fine-tuning is not likely to alter decisions, such as the number of boilers required, whether boilers should be shop-fabricated or field-erected, and the practicality of generating steam from waste heat or via cogeneration.

The significant elements of a steam-system comparative-cost study are costs for: equipment and installation; ancillaries (i.e., miscellaneous items required to support the equipment, such as additional stacks, upgraded combustion control, more extensive blowdown facilities, etc.); operation (annual); maintenance (annual); and utilities. The first two costs may be obtained from in-house data or from vendors. Operational and maintenance costs can be factored from the capital cost for equipment, based on an assessment of the reliability of the purchased equipment. Utility costs are generally the most difficult to establish at an early stage because sources frequently depend on the site of the plant. Some examples of such costs are: purchased fuel gas - $5.35/million Btu; raw water - $0.60/1,000 gal; electricity - $0.07/kWh; and demineralized boiler feedwater - $1.50/1,000 gal. The value of steam at the various pressure levels can be developed [5].

Let it be further assumed that the emergency balance requires 2,200,000 lb/h of steam (all boilers available). Listed in Table 1-4 are some combinations of boiler installations that meet the design conditions previously stipulated. Table 1-4 indicates that any of several combinations of power-boiler number and size could meet both normal and emergency demand. Therefore, a comparative-cost analysis would be made to assist in making an early decision regarding the number and size of the power boilers. (Table 1-4 is based on field-erected, industrial-type boilers. Conventional sizing of this type of boiler might range from 100,000 lb/h through 2,000,000 lb/h each.)

An alternative would be the packaged-boiler option (although it does not seem practical at this load level). Because it is shop-fabricated, this type of boiler affords a significant saving in terms of field installation cost. Such boilers are available up to a nominal capacity of 100,000 lb/h, with some versions up to 250,000 lb/h.

Selecting turbine water rate (i.e., efficiency) represents another major cost concern. Beyond the recognized payout period (e.g., 3 years), the cost of drive steam can be significant in comparison with the equipment capital cost. The typical 30% efficiency of the medium-pressure backpressure turbine can be boosted significantly.

Driver selections are frequently made with the help of cost-tradeoff studies, unless overriding considerations preclude a drive medium. Electric pump drives are typically recommended on the basis of such studies. Steam tracing has long been the standard way of winterizing piping, not only because of its history of successful performance but also because it is an efficient way to use low-pressure steam.
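As an illustration of such a comparative estimate, the sketch below compares annual energy cost for a steam-turbine versus an electric-motor pump drive, using the utility prices quoted above; the steam value, driver duty, efficiencies and operating hours are all assumed for illustration.

```python
# Comparative annual energy cost: electric motor vs. backpressure turbine.
bhp = 500      # driver duty (assumed)
hours = 8_000  # operating hours per year (assumed)

# Electric drive: 1 bhp = 0.7457 kW; motor efficiency assumed 95%.
kwh = bhp * 0.7457 / 0.95 * hours
electric_cost = kwh * 0.07          # $0.07/kWh from the text

# Steam drive: SR = TSR * bhp / E, charged at an assumed steam value.
tsr, eff = 12.0, 0.65               # lb/bhp-h and turbine efficiency (assumed)
steam_lb = tsr * bhp / eff * hours
steam_value = 6.00                  # $/1,000 lb of steam (assumed; see [5])
steam_cost = steam_lb / 1_000 * steam_value

print(f"Electric: ${electric_cost:,.0f}/yr   Steam: ${steam_cost:,.0f}/yr")
# A full study would add capital, installation, ancillary and O&M costs,
# and would credit a backpressure turbine's exhaust steam used for heating.
```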
Design considerations

As the steam system evolves, the designer identifies steam loads and pressure levels, locates steam loads, checks safety aspects, and prepares cost-tradeoff studies, in order to provide low-cost energy safely, always remaining aware of the physical entity that will arise from the design. How are design concepts translated into a design document? And what basic guidelines will ensure that the physical plant will represent what was intended conceptually?

Basic to achieving these ends is the piping and instrument diagram (familiar as the P&ID). Although it is drawn up primarily for the piping designer's benefit, it also plays a major role in communicating the process-control strategy to the instrumentation designer, as well as in conveying specialty information to electrical, civil, structural, mechanical and architectural engineers. It is the most important document for representing the specification of the steam system.

Chongqing Jiaotong University Engineering Cost Program: Foreign Literature Translation


Undergraduate Graduation Design (Thesis) Foreign Literature Translation
Title of translated text: Construction Bidding and the Winner's Curse: A Game Theory Approach
School: School of Economics and Management    Major: Engineering Cost
Student name: **    Student ID: ************    Supervisor: ***    Date of completion: April 5, 2017
Translated from: Muaz O. Ahmed; Islam H. El-adaway, M.ASCE; Kalyn T. Coatney; and Mohamed S. Eid. Construction Bidding and the Winner's Curse: Game Theory Approach [J]. Construction Engineering and Management, 2016, 142(2).

Construction Bidding and the Winner's Curse: A Game Theory Approach
Muaz O. Ahmed; Islam H. El-adaway, M.ASCE; Kalyn T. Coatney; and Mohamed S. Eid
Department of Civil and Environmental Engineering, Mississippi State University, USA

Abstract: In the construction industry, competitive bidding has long been the method by which contractors are selected. Because the true cost of construction is not known until the project is complete, adverse selection is a significant problem. Adverse selection occurs when the winner of a contract has underestimated the true cost of the project, so that the winning contractor is likely to earn negative, or at least below-normal, profit. The winner's curse arises when the winning bidder submits an underestimated bid and is therefore "cursed" by being selected to undertake the project. In multistage bidding environments, where subcontractors are hired by a general contractor, the winner's curse can compound. In general, contractors suffer the winner's curse for a variety of reasons, including inaccurate estimates of project cost; new contractors entering the construction market; loss-cutting during recessions in the construction industry; intense competition in the construction market; poor opportunity costs that influence contractor behavior; and the intent to win the project and then recoup losses through change orders, claims, and other mechanisms. Using a game theory approach, this paper aims to analyze and mitigate the potential effects of the winner's curse in construction bidding. To this end, the authors determine the extent of the winner's curse in two common construction bidding environments, namely single-stage bidding and multistage bidding. The objective is to compare these two bidding environments and to determine how learning from past bidding decisions and experience can mitigate the winner's curse.

Graduation Design English Translation


INTRODUCTION
The Yong Jong Grand Bridge is located at the west coast of Korea, and was constructed to connect the new international airport at Yong Jong Island, Inchon, and the mainland. The bridge is 4.4 km long and is composed of three different bridge types—a suspension bridge (550 m), a truss bridge (2,250 m), and steel box bridges. The Grand Bridge (Gil and Cho 1998) has double decks; it will carry six highway lanes on the upper deck, and four highway lanes and dual tracks of a railway on the lower deck. The approach truss bridges are double-deck, Warren truss type bridges. The truss bridges are three-span continuous with a length of 125 m for each span. The width of the truss bridge is 36.1 m.
shoe and dead load of the neighboring truss bridges. The self-anchored suspension bridge typically has limited space for the main cable anchorage, which is located at the stiffening truss, so the air-spinning method, which can contain more wires per strand than the parallel-wire-strand method, is employed to erect the main cables.

Graduation Design Chinese-English Translation


Integrated Circuit

An integrated circuit or monolithic integrated circuit (also referred to as IC, chip, or microchip) is an electronic circuit manufactured by the patterned diffusion of trace elements into the surface of a thin substrate of semiconductor material. Additional materials are deposited and patterned to form interconnections between semiconductor devices. Integrated circuits are used in virtually all electronic equipment today and have revolutionized the world of electronics. Computers, mobile phones, and other digital appliances are now inextricable parts of the structure of modern societies, made possible by the low cost of production of integrated circuits.

Introduction

ICs were made possible by experimental discoveries showing that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass-production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.

There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, much less material is used to construct a packaged IC than to construct a discrete circuit. Performance is high because the components switch quickly and consume little power (compared to their discrete counterparts) as a result of the small size and close proximity of the components. As of 2006, typical chip areas range from a few square millimeters to around 350 mm2, with up to 1 million transistors per mm2.

Terminology

Integrated circuit originally referred to a miniaturized electronic circuit consisting of semiconductor devices, as well as passive components, bonded to a substrate or circuit board.[1] This configuration is now commonly referred to as a hybrid integrated circuit. Integrated circuit has since come to refer to the single-piece circuit construction originally known as a monolithic integrated circuit.[2]

Invention

Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate arranged in a 2-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. A commercial use of his patent has not been reported.

The idea of the integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence, Geoffrey W. A. Dummer (1909-2002). Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952.[4] He gave many symposia publicly to propagate his ideas, and unsuccessfully attempted to build such a circuit in 1956.

A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a two- or three-dimensional compact grid.
This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program. However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.

Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated circuit on September 12, 1958. In his patent application of February 6, 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated." Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit, and his work was named an IEEE Milestone in 2009.

Noyce also came up with his own idea of an integrated circuit, half a year later than Kilby. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.

Generations

In the early days of integrated circuits, only a few transistors could be placed on a chip, as the scale used was large because of the contemporary technology, and manufacturing yields were low by today's standards. As the degree of integration was small, the design was done easily. Over time, millions, and today billions, of transistors could be placed on one chip, and making a good design became a task to be planned thoroughly. This gave rise to new design methods.

SSI, MSI and LSI

The first integrated circuits contained only a few transistors. Called "small-scale integration" (SSI), digital circuits of this generation contained transistors numbering in the tens, while early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The term large-scale integration was first used by IBM scientist Rolf Landauer when describing the theoretical concept; from there came the terms SSI, MSI, VLSI, and ULSI.

SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and the Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated the integrated-circuit technology, while the Minuteman missile forced it into mass production. The Minuteman missile program and various other Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government space and defense spending still accounted for 37% of the $312 million total production. The demand by the U.S. Government supported the nascent integrated circuit market until costs fell enough to allow firms to penetrate the industrial and eventually the consumer markets.
The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968.[13] Integrated circuits began to appear in consumer products by the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI). They were attractive economically because, while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "large-scale integration" (LSI) in the mid-1970s, with tens of thousands of transistors per chip. Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.

VLSI

The final step in the development process, starting in the 1980s and continuing through the present, was "very-large-scale integration" (VLSI). The development started with hundreds of thousands of transistors in the early 1980s, and continues beyond several billion transistors as of 2009. Multiple developments were required to achieve this increased density. Manufacturers moved to smaller design rules and cleaner fabrication facilities, so that they could make chips with more transistors and maintain adequate yield. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. Better texts, such as the landmark textbook by Mead and Conway, helped schools educate more designers, among other factors.

In 1986 the first one-megabit RAM chips were introduced, which contained more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989 and the billion-transistor mark in 2005.[14] The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.[15]

ULSI, WSI, SOC and 3D-IC

To reflect further growth of the complexity, the term ULSI, which stands for "ultra-large-scale integration", was proposed for chips of complexity of more than 1 million transistors.

Wafer-scale integration (WSI) is a system of building very large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term very-large-scale integration, the current state of the art when WSI was being developed.

A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and building disparate components on a single piece of silicon may compromise the efficiency of some elements.
However, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging).

A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.

Advances in integrated circuits

Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from computers and cellular phones to digital microwave ovens. Digital memory chips and ASICs are examples of other families of integrated circuits that are important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized. The performance of ICs is high because the small size allows short traces, which in turn allows low-power logic (such as CMOS) to be used at fast switching speeds.

ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality; see Moore's law which, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years. In general, as the feature size shrinks, almost everything improves: the cost per unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current (see subthreshold leakage for a discussion of this), although these problems are not insurmountable and will likely be solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and power-consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors (ITRS).
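Moore's-law growth is easy to state as a formula: N(year) ≈ N0 · 2^((year − year0)/2). A minimal sketch of this doubling rule (the 1971 starting point, the Intel 4004's roughly 2,300 transistors, is used purely as a reference datum):

```python
def moores_law(year, base_year=1971, base_count=2_300):
    """Transistor count predicted by a two-year doubling period."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1989, 2005):
    print(f"{year}: ~{moores_law(year):,.0f} transistors")
# The rule lands within an order of magnitude of the million-transistor
# mark in 1989 and the billion-transistor mark in 2005 mentioned above.
```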
In current research projects, integrated circuits are also developed for sensor applications in medical implants or other bioelectronic devices. Particular sealing strategies have to be adopted in such biogenic environments to avoid corrosion or biodegradation of the exposed semiconductor materials.[16] As one of the few materials well established in CMOS technology, titanium nitride (TiN) has turned out to be exceptionally stable and well suited for electrode applications in medical implants.[17][18]

Classification

Integrated circuits can be classified into analog, digital and mixed-signal (both analog and digital on the same chip).

Digital integrated circuits can contain anything from one to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using binary mathematics to process "one" and "zero" signals.

Analog ICs, such as sensors, power-management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, and mixing. Analog ICs ease the burden on circuit designers by making expertly designed analog circuits available, instead of requiring a difficult analog circuit to be designed from scratch.

ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.

Manufacturing

Fabrication

[Figure: rendering of a small standard cell with three metal layers, dielectric removed. The sand-colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of tungsten; the reddish structures are poly-silicon gates; and the solid at the bottom is the crystalline silicon bulk.]

[Figure: schematic structure of a CMOS chip as built in the early 2000s, showing LDD MISFETs on an SOI substrate with five metallization layers and solder bumps for flip-chip bonding, along with the FEOL (front end of line) and BEOL (back end of line) sections and the first parts of the back-end process.]

The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon monocrystals are the main substrate used for ICs, although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.

Semiconductor ICs are fabricated in a layer process which includes these key process steps:
∙ Imaging
∙ Deposition
∙ Etching
The main process steps are supplemented by doping and cleaning.

Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (poly-silicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers. In a self-aligned CMOS process, a transistor is formed wherever the gate layer (poly-silicon or metal) crosses a diffusion layer. Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates.
Capacitors of a wide range of sizes are common on ICs.

∙ Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance.
∙ More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.

Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar devices.

A random-access memory is the most regular type of integrated circuit; the highest-density devices are thus memories, but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate, with widths which have been shrinking for decades, the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.

Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminum (or gold) bond wires which are welded and/or thermosonically bonded to pads, usually found around the edge of the die. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, and/or higher-cost devices.

As of 2005, a fabrication facility (commonly known as a semiconductor fab) costs over $1 billion to construct,[19] because much of the operation is automated. Today, the most advanced processes employ the following techniques:
∙ The wafers are up to 300 mm in diameter (wider than a common dinner plate).
∙ Use of 32-nanometer or smaller chip manufacturing processes. Intel, IBM, NEC, and AMD are using ~32 nanometers for their CPU chips. IBM and AMD introduced immersion lithography for their 45 nm processes.[20]
∙ Copper interconnects, where copper wiring replaces aluminium for interconnects.
∙ Low-K dielectric insulators.
∙ Silicon on insulator (SOI).
∙ Strained silicon in a process used by IBM known as strained silicon directly on insulator (SSDOI).
∙ Multigate devices such as tri-gate transistors, manufactured by Intel from 2011 in their 22 nm process.

Packaging

In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high-pin-count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages on high-end microprocessors to land grid array (LGA) packages.

Ball grid array (BGA) packages have existed since the 1970s. Flip-chip ball grid array packages, which allow for a much higher pin count than other package types, were developed in the 1990s.
In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board, rather than by wires. FCBGA packages allow an array of input-output signals (called area-I/O) to be distributed over the entire die rather than being confined to the die periphery.

Traces out of the die, through the package, and into the printed circuit board have very different electrical properties compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself.

When multiple dies are put in one package, it is called an SiP, for system in package. When multiple dies are combined on a small substrate, often ceramic, it is called an MCM, or multi-chip module. The boundary between a big MCM and a small printed circuit board is sometimes fuzzy.

Chip labeling and manufacture date

Most integrated circuits large enough to include identifying information include four common sections: the manufacturer's name or logo, the part number, a part production batch number and/or serial number, and a four-digit code that identifies when the chip was manufactured. Extremely small surface-mount technology parts often bear only a number used in a manufacturer's lookup table to find the chip characteristics. The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983.

Legal protection of semiconductor chip layouts

Like most other forms of intellectual property, IC layout designs are creations of the human mind. They are usually the result of an enormous investment, both in terms of the time of highly qualified experts and financially. There is a continuing need for the creation of new layout-designs which reduce the dimensions of existing integrated circuits and simultaneously increase their functions. The smaller an integrated circuit, the less material is needed for its manufacture, and the smaller the space needed to accommodate it. Integrated circuits are utilized in a large range of products, including articles of everyday use, such as watches, television sets, washing machines, automobiles, etc., as well as sophisticated data processing equipment.

The possibility of copying by photographing each layer of an integrated circuit and preparing photomasks for its production on the basis of the photographs obtained is the main reason for the introduction of legislation for the protection of layout-designs. A diplomatic conference was held at Washington, D.C., in 1989, which adopted a Treaty on Intellectual Property in Respect of Integrated Circuits (IPIC Treaty). The treaty, also called the Washington Treaty or IPIC Treaty (signed at Washington on May 26, 1989), is currently not in force, but was partially integrated into the TRIPs agreement. National laws protecting IC layout designs have been adopted in a number of countries.

Other developments

In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders and registers.
Current devices called field-programmable gate arrays can now implement tens of thousands of LSI circuits in parallel and operate up to 1.5 GHz (Achronix holding the speed record).

The techniques perfected by the integrated circuits industry over the last three decades have been used to create very small mechanical devices driven by electricity, using a technology known as microelectromechanical systems. These devices are used in a variety of commercial and military applications. Example commercial applications include DLP projectors, inkjet printers, and accelerometers used to deploy automobile airbags.

In the past, radios could not be fabricated in the same low-cost processes as microprocessors. But since 1998, a large number of radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless phone, or Atheros's 802.11 card.

Future developments seem to follow the multi-core microprocessor paradigm, already used by the Intel and AMD dual-core processors. Intel recently unveiled a prototype, "not for commercial sale" chip that bears 80 microprocessors. Each core is capable of handling its own task independently of the others. This is in response to the heat-versus-speed limit that is about to be reached using existing transistor technology. This design provides a new challenge to chip programming. Parallel programming languages such as the open-source X10 programming language are designed to assist with this task.

Requirements for the Chinese-English Translation of Graduation Design Theses


Graduation Thesis Translation Requirements

English translation of the Graduation Thesis:
1. Accuracy: The English translation of the Graduation Thesis should accurately reflect the content and meaning of the original Chinese text. It should convey the same ideas and arguments as presented in the original text.
2. Clarity: The translation should be clear and easy to understand. The language used should be appropriate and the sentences should be well-structured.
3. Grammar and Syntax: The translation should follow the rules of English grammar and syntax. There should be no grammatical errors or awkward sentence constructions.
4. Vocabulary: The translation should make use of appropriate vocabulary that is relevant to the topic of the Graduation Thesis. Technical terms and concepts should be accurately translated.
5. Style: The translation should maintain the academic style and tone of the original Chinese text. It should use formal language and avoid colloquial or informal expressions.
6. References: If the Graduation Thesis includes citations or references, the English translation should accurately reflect these citations and references. The formatting of citations and references should follow the appropriate style guide.
7. Proofreading: The English translation should be thoroughly proofread to ensure there are no spelling or punctuation errors. It should also be reviewed for any inconsistencies or inaccuracies.

Minimum word count: The English translation of the Graduation Thesis should be at least 1,200 words. This requirement ensures that the translation adequately captures the main points and arguments of the original text.

Note that there may be specific guidelines or requirements provided by your academic institution or supervisor for the translation of your Graduation Thesis. Please consult these guidelines and follow them accordingly.

Graduation Design Chinese-English Translation


Bridge Waterway Openings

In a majority of cases the height and length of a bridge depend solely upon the amount of clear waterway opening that must be provided to accommodate the floodwaters of the stream. Actually, the problem goes beyond that of merely accommodating the floodwaters and requires prediction of the various magnitudes of floods for given time intervals. It would be impossible to state that some given magnitude is the maximum that will ever occur, and it is therefore impossible to design for the maximum, since it cannot be ascertained. It seems more logical to design for a predicted flood of some selected interval: a flood magnitude that could reasonably be expected to occur once within a given number of years. For example, a bridge may be designed for a 50-year flood interval; that is, for a flood which is expected (according to the laws of probability) to occur on the average of one time in 50 years. Once this design flood frequency, or interval of expected occurrence, has been decided, the analysis to determine a magnitude is made. Whenever possible, this analysis is based upon gauged stream records. In areas and for streams where flood frequency and magnitude records are not available, an analysis can still be made. With data from gauged streams in the vicinity, regional flood frequencies can be worked out; with a correlation between the computed discharge for the ungauged stream and the regional flood frequency, a flood frequency curve can be computed for the stream in question.
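The "50-year flood" wording is probabilistic, and the risk over a bridge's life follows directly from it: the chance of seeing at least one T-year flood in n years is 1 - (1 - 1/T)^n. A short sketch of that arithmetic:

```python
def risk_of_exceedance(return_period_years, design_life_years):
    """Probability of at least one T-year flood during the design life."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** design_life_years

# A bridge designed for the 50-year flood still runs a substantial risk
# of seeing that flood at least once over a 50-year service life:
print(f"{risk_of_exceedance(50, 50):.1%}")   # about 63.6%
print(f"{risk_of_exceedance(100, 50):.1%}")  # about 39.5%
```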
Highway Culverts

Any closed conduit used to conduct surface runoff from one side of a roadway to the other is referred to as a culvert. Culverts vary in size from large multiple installations used in lieu of a bridge to small circular or elliptical pipe, and their design varies in significance. Accepted practice treats conduits under the roadway as culverts. Although the unit cost of culverts is much less than that of bridges, they are far more numerous, normally averaging about eight to the mile, and they represent a greater share of highway cost. Statistics show that about 15 cents of the highway construction dollar goes to culverts, as compared with 10 cents for bridges. Culvert design, then, is equally as important as that of bridges or other phases of highway design and should be treated accordingly.

Municipal Storm Drainage

In urban and suburban areas, runoff waters are handled through a system of drainage structures referred to as storm sewers and their appurtenances. The drainage problem is increased in these areas primarily for two reasons: the impervious nature of the area creates a very high runoff, and there is little room for natural water courses. It is often necessary to collect the entire storm water into a system of pipes and transmit it over considerable distances before it can be loosed again as surface runoff. This collection and transmission further increase the problem, since all of the water must be collected with virtually no ponding, thus eliminating any natural storage; and through increased velocity the peak runoffs are reached more quickly. Also, the shorter times of peaks cause the system to be more sensitive to short-duration, high-intensity rainfall. Storm sewers, like culverts and bridges, are designed for storms of various intensity-return-period relationships, depending upon the economy and the amount of ponding that can be tolerated.

Airport Drainage

The problem of providing proper drainage facilities for airports is similar in many ways to that of highways and streets. However, because of the large and relatively flat surface involved, the varying soil conditions, the absence of natural water courses and possible side ditches, and the greater concentration of discharge at the terminus of the construction area, some phases of the problem are more complex. For the average airport the overall area to be drained is relatively large, and an extensive drainage system is required. The magnitude of such a system makes it even more imperative that sound engineering principles, based on all of the best available data, be used to ensure the most economical design. Overdesign of facilities results in excessive money investment with no return, and underdesign can result in conditions hazardous to the air traffic using the airport.

In order to ensure surfaces that are smooth, firm, stable, and reasonably free from flooding, it is necessary to provide a system which will do several things. It must collect and remove the surface water from the airport surface; intercept and remove surface water flowing toward the airport from adjacent areas; collect and remove any excessive subsurface water beneath the surface of the airport facilities, and in many cases lower the ground-water table; and provide protection against erosion of the sloping areas.

Ditches and Cut-slope Drainage

A highway cross section normally includes one and often two ditches paralleling the roadway. Generally referred to as side ditches, these serve to intercept the drainage from slopes and to conduct it to where it can be carried under the roadway or away from the highway section, depending upon the natural drainage. To a limited extent they also serve to conduct subsurface drainage from beneath the roadway to points where it can be carried away from the highway section. A second type of ditch, generally referred to as a crown ditch, is often used for the erosion protection of cut slopes. This ditch along the top of the cut slope serves to intercept surface runoff from the slopes above and conduct it to natural water courses on milder slopes, thus preventing the erosion that would be caused by permitting the runoff to spill down the cut faces.

12 Construction techniques

The decision of how a bridge should be built depends mainly on local conditions. These include the cost of materials, available equipment, allowable construction time and environmental restrictions. Since all these vary with location and time, the best construction technique for a given structure may also vary.

Incremental Launching or Push-out Method

In this form of construction the deck is pushed across the span with hydraulic rams or winches. Decks of prestressed, post-tensioned precast segments and of steel girders have been erected in this way. Usually spans are limited to 50~60 m to avoid excessive deflection and cantilever stresses, although greater distances have been bridged by installing temporary support towers. Typically the method is most appropriate for long, multi-span bridges in the range 300~600 m, but much shorter and longer bridges have been constructed. Unfortunately, this very economical mode of construction can only be applied when both the horizontal and vertical alignments of the deck are perfectly straight, or alternatively of constant radius.
Where pushing involves a small downward grade (4%~5%), a braking system should be installed to prevent the deck from slipping away uncontrolled, and heavy bracing is then needed at the restraining piers. Bridge launching demands very careful surveying and setting out, with continuous and precise checks made of deck deflections. A light aluminum or steel launching nose forms the head of the deck to provide guidance over the pier. Special teflon or chrome-nickel steel plate bearings are used to reduce sliding friction to about 5% of the weight; thus slender piers would normally be supplemented with braced columns to avoid cracking and other damage. These columns would generally also support the temporary friction bearings and help steer the nose.

In the case of precast construction, ideally segments should be cast on beds near the abutments and transferred by rail to the post-tensioning bed, the actual transport distance obviously being kept to the minimum. Usually a segment is cast against the face of the previously concreted unit to ensure a good fit when finally glued in place with an epoxy resin. If this procedure is not adopted, gaps of approximately 500 mm should be left between segments, with the reinforcement running through and stressed together to form a complete unit; but when access or space on the embankment is at a premium, it may be necessary to launch the deck intermittently to allow sections to be added progressively. The corresponding prestressing arrangements, both for the temporary and permanent conditions, would be more complicated, and careful calculations are needed at all positions.

The principal advantage of the bridge-launching technique is the saving in falsework, especially for high decks. Segments can also be fabricated or precast in a protected environment using highly productive equipment. For concrete segments, typically two segments are laid each week (usually 10~30 m in length and perhaps 300 to 400 tonnes in weight) and, after post-tensioning, incrementally launched at about 20 m per day depending upon the winching/jacking equipment.
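A back-of-the-envelope check of the launching force follows from the numbers above: with sliding friction of about 5% of the deck weight, and a 4~5% downward grade adding (or, for braking, opposing) a gravity component, the horizontal force scales with the weight launched so far. A minimal sketch with assumed figures:

```python
# Launching-force estimate for an incrementally launched deck.
g = 9.81             # m/s^2
deck_mass_t = 3_500  # tonnes already launched (assumed)
mu = 0.05            # sliding friction on teflon bearings (~5% of weight)
grade = 0.04         # 4% downward grade in the launch direction

weight_kn = deck_mass_t * g          # kN, since 1 tonne weighs ~9.81 kN
friction_kn = mu * weight_kn
gravity_kn = grade * weight_kn       # component along the launch direction

push_kn = friction_kn - gravity_kn   # force the winches must supply
brake_kn = gravity_kn - friction_kn  # >0 would mean braking is needed

print(f"Friction: {friction_kn:,.0f} kN, grade component: {gravity_kn:,.0f} kN")
print(f"Required push: {max(push_kn, 0):,.0f} kN; "
      f"required braking: {max(brake_kn, 0):,.0f} kN")
# In practice a braking system is installed regardless, since friction
# can drop below the grade component once the deck is moving.
```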
Balanced Cantilever Construction

Developments in box sections and prestressed concrete led to short segments being assembled or cast in place on falsework to form a beam of full roadway width. Subsequently the method was refined virtually to eliminate the falsework by using a previously constructed section of the beam to provide the fixing for a subsequently cantilevered section. The principle is demonstrated step-by-step in the example shown in Fig. 1. In the simple case illustrated, the bridge consists of three spans in the ratio 1:1:2. First the abutments and piers are constructed independently from the bridge superstructure. The segment immediately above each pier is then either cast in situ or placed as a precast unit. The deck is subsequently formed by adding sections symmetrically either side. Ideally, sections either side should be placed simultaneously, but this is usually impracticable, and some imbalance will result from the extra segment weight, wind forces, construction plant and material. When the cantilever has reached both the abutment and centre span, work can begin from the other pier, and the remainder of the deck is completed in a similar manner. Finally the two individual cantilevers are linked at the centre by a key segment, normally cast in situ, to form a single span.

The procedure initially requires the first sections above the column, and perhaps one or two each side, to be erected conventionally, either in in-situ concrete or precast, and temporarily supported while steel tendons are threaded and post-tensioned. Subsequent pairs of sections are added and held in place by post-tensioning, followed by grouting of the ducts. During this phase only the cantilever tendons in the upper flange and webs are tensioned. Continuity tendons are stressed after the key section has been cast in place. The final gap left between the two half spans should be wide enough to enable the jacking equipment to be inserted. When the individual cantilevers are completed and the key section inserted, the continuity tendons are anchored symmetrically about the centre of the span and serve to resist superimposed loads, live loads, redistribution of dead loads and cantilever prestressing forces.

The earlier bridges were designed on the free-cantilever principle, with an expansion joint incorporated at the center. Unfortunately, settlements, deformations, concrete creep and prestress relaxation tended to produce deflection in each half span, disfiguring the general appearance of the bridge and causing discomfort to drivers. These effects, coupled with the difficulties in designing a suitable joint, led designers to choose a continuous connection, resulting in a more uniform distribution of the loads and reduced deflection. The natural movements were provided for at the bridge abutments using sliding bearings or, in the case of long multi-span bridges, joints at about 500 m centres.

Special Requirements in Advanced Construction Techniques

There are three important areas that the engineering and construction team has to consider:

(1) Stress analysis during construction: Because the loadings and support conditions of the bridge are different from those of the finished bridge, stresses in each construction stage must be calculated to ensure the safety of the structure. For this purpose, realistic construction loads must be used, and site personnel must be informed of all the loading limitations. Wind and temperature are usually significant during the construction stage.

(2) Camber: In order to obtain a bridge with the right elevation, the required camber of the bridge at each construction stage must be calculated, with due consideration given to creep and shrinkage of the concrete. This kind of calculation, although cumbersome, has been simplified by the use of computers.

(3) Quality control: This is important for any method of construction, but it is more so for the complicated construction techniques. Curing of concrete, post-tensioning, joint preparation, etc. are critical to a successful structure. The site personnel must be made aware of the minimum concrete strengths required for post-tensioning, form removal, falsework removal, launching and other steps of the operations.

Generally speaking, these advanced construction techniques require more engineering work than conventional falsework-type construction, but the saving could be significant.
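The camber computation in item (2) is essentially bookkeeping: the deck is cast high by the predicted deflection at each stage, with creep applied as a multiplier on the elastic values. A toy illustration follows; all deflections and the creep factor are assumed numbers, not a design method.

```python
# Toy precamber bookkeeping for segmental construction.
# Predicted elastic deflections (mm, downward positive) of a reference
# point as each pair of segments is added -- assumed illustrative values.
elastic_deflections = [2.0, 3.5, 5.0, 7.0, 9.5]
creep_factor = 2.2  # long-term multiplier on elastic deflection (assumed)

total_longterm = sum(d * creep_factor for d in elastic_deflections)
print(f"Predicted long-term deflection: {total_longterm:.1f} mm")
print(f"Required precamber at this point: {total_longterm:.1f} mm upward")

# Stage-by-stage running target used to set each segment's casting level:
running = 0.0
for i, d in enumerate(elastic_deflections, start=1):
    running += d * creep_factor
    print(f"after stage {i}: cast {total_longterm - running:.1f} mm "
          f"above the final profile")
```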

Graduation Project Foreign-Language Translation: Source Text

CLUTCH

The engine produces the power to drive the vehicle. The drive line or drive train transfers the power of the engine to the wheels. The drive train consists of the parts from the back of the flywheel to the wheels. These parts include the clutch, the transmission, the drive shaft, and the final drive assembly (Figure 8-1).

The clutch, which includes the flywheel, clutch disc, pressure plate, springs, pressure plate cover and the linkage necessary to operate the clutch, is a rotating mechanism between the engine and the transmission (Figure 8-2). It operates through friction, which comes from contact between the parts; that is why the clutch is called a friction mechanism. After engagement, the clutch must continue to transmit all the engine torque to the transmission through friction alone, without slippage. The clutch is also used to disengage the engine from the drive train whenever the gears in the transmission are being shifted from one gear ratio to another.

To start the engine or shift the gears, the driver has to depress the clutch pedal in order to disengage the transmission from the engine. At that time, the driven members connected to the transmission input shaft are either stationary or rotating at a speed slower or faster than the driving members connected to the engine crankshaft. There is no spring pressure on the clutch assembly parts, so there is no friction between the driving members and the driven members. As the driver releases the clutch pedal, spring pressure on the clutch parts increases, and the friction between the parts increases with it. The pressure exerted by the springs on the driven members is controlled by the driver through the clutch pedal and linkage. The positive engagement of the driving and driven members is made possible by the friction between their surfaces. When full spring pressure is applied, the speed of the driving and driven members should be the same. At that moment, the clutch must act as a solid coupling and transmit all engine power to the transmission without slipping.

However, the transmission should be engaged to the engine gradually, in order to operate the car smoothly and to minimize torsional shock on the drive train, because an engine at idle develops only a little power. Otherwise, the driving members would be connected with the driven members too quickly and the engine would stall.

The flywheel is a major part of the clutch. The flywheel mounts on the engine's crankshaft and transmits engine torque to the clutch assembly. The flywheel, when coupled with the clutch disc and pressure plate, makes and breaks the flow of power from the engine to the transmission.

The flywheel provides a mounting location for the clutch assembly as well. When the clutch is applied, the flywheel transfers engine torque to the clutch disc. Because of its weight, the flywheel helps to smooth engine operation. The flywheel also has a large ring gear at its outer edge, which engages with a pinion gear on the starter motor during engine cranking.

The clutch disc fits between the flywheel and the pressure plate. The clutch disc has a splined hub that fits over splines on the transmission input shaft. A splined hub has grooves that match splines on the shaft. These splines fit in the grooves, so the two parts are held together; however, back-and-forth movement of the disc on the shaft is still possible. Attached to the input shaft, the disc turns at the speed of the shaft.

The clutch pressure plate is generally made of cast iron.
It is round and about the same diameter as the clutch disc. One side of the pressure plate is machined smooth; this side presses the clutch disc facing against the flywheel. The outer side has various shapes to facilitate attachment of the springs and the release mechanism. The two primary types of pressure plate assemblies are the coil spring assembly and the diaphragm spring assembly (Figure 8-3).

In a coil spring clutch the pressure plate is backed by a number of coil springs and housed with them in a pressed-steel cover bolted to the flywheel. The springs push against the cover. Neither the driven plate nor the pressure plate is connected rigidly to the flywheel, and both can move either towards it or away from it. When the clutch pedal is depressed, a thrust pad riding on a carbon or ball thrust bearing is forced towards the flywheel. Levers, pivoted so that they engage the thrust pad at one end and the pressure plate at the other, pull the pressure plate back against its springs. This releases the pressure on the driven plate, disconnecting the gearbox from the engine (Figure 8-4).

Diaphragm spring pressure plate assemblies are widely used in most modern cars. The diaphragm spring is a single thin sheet of metal which yields when pressure is applied to it; when the pressure is removed, the metal springs back to its original shape. The centre portion of the diaphragm spring is slit into numerous fingers that act as release levers. When the clutch assembly rotates with the engine, these weights are flung outwards by centrifugal force and cause the levers to press against the pressure plate. During disengagement of the clutch, the fingers are moved forward by the release bearing. The spring pivots over the fulcrum ring and its outer rim moves away from the flywheel. The retracting spring pulls the pressure plate away from the clutch plate, thus disengaging the clutch (Figure 8-5).

When the clutch is engaged, the release bearing and the fingers of the diaphragm spring move towards the transmission. As the diaphragm pivots over the pivot ring, its outer rim forces the pressure plate against the clutch disc so that the clutch plate is engaged to the flywheel.

The advantages of a diaphragm-type pressure plate assembly are its compactness, lower weight, fewer moving parts, lower engagement effort, reduced rotational imbalance (it provides a balanced force around the pressure plate) and less chance of clutch slippage.

The clutch pedal is connected to the disengagement mechanism either by a cable or, more commonly, by a hydraulic system. Either way, pushing the pedal down operates the disengagement mechanism, which puts pressure on the fingers of the clutch diaphragm via a release bearing and causes the diaphragm to release the clutch plate. With a hydraulic mechanism, the clutch pedal arm operates a piston in the clutch master cylinder. This forces hydraulic fluid through a pipe to the clutch release cylinder, where another piston operates the clutch disengagement mechanism. The alternative is to link the clutch pedal to the disengagement mechanism by a cable.

The other parts needed to couple and uncouple the transmission include the clutch fork, release bearing, bell housing, bell housing cover, and pilot bushing. The clutch fork, which connects to the linkage, actually operates the clutch. The release bearing fits between the clutch fork and the pressure plate assembly. The bell housing covers the clutch assembly. The bell housing cover fastens to the bottom of the bell housing.
This removable cover allows a mechanic to inspect the clutch without removing the transmission and bell housing. A pilot bushing fits into the back of the crankshaft and supports the transmission input shaft.

A Torque Converter

There are four components inside the very strong housing of the torque converter:

1. Pump;
2. Turbine;
3. Stator;
4. Transmission fluid.

The housing of the torque converter is bolted to the flywheel of the engine, so it turns at whatever speed the engine is running at. The fins that make up the pump of the torque converter are attached to the housing, so they also turn at the same speed as the engine. The cutaway below shows how everything is connected inside the torque converter (Figure 8-6).

The pump inside a torque converter is a type of centrifugal pump. As it spins, fluid is flung to the outside, much as the spin cycle of a washing machine flings water and clothes to the outside of the wash tub. As fluid is flung to the outside, a vacuum is created that draws more fluid in at the center.

The fluid then enters the blades of the turbine, which is connected to the transmission. The turbine causes the transmission to spin, which basically moves the car. The blades of the turbine are curved. This means that the fluid, which enters the turbine from the outside, has to change direction before it exits the center of the turbine. It is this directional change that causes the turbine to spin.

The fluid exits the turbine at the center, moving in a different direction than when it entered. The fluid exits the turbine moving opposite to the direction that the pump (and engine) is turning. If the fluid were allowed to hit the pump, it would slow the engine down, wasting power. This is why a torque converter has a stator.

The stator resides in the very center of the torque converter. Its job is to redirect the fluid returning from the turbine before it hits the pump again. This dramatically increases the efficiency of the torque converter.

The stator has a very aggressive blade design that almost completely reverses the direction of the fluid. A one-way clutch (inside the stator) connects the stator to a fixed shaft in the transmission. Because of this arrangement, the stator cannot spin with the fluid; it can spin only in the opposite direction, forcing the fluid to change direction as it hits the stator blades.

Something a little bit tricky happens when the car gets moving. There is a point, around 40 mph (64 km/h), at which both the pump and the turbine are spinning at almost the same speed (the pump always spins slightly faster). At this point the fluid returning from the turbine enters the pump already moving in the same direction as the pump, so the stator is not needed.

Even though the turbine changes the direction of the fluid and flings it out the back, the fluid still ends up moving in the direction that the turbine is spinning, because the turbine is spinning faster in one direction than the fluid is being pumped in the other direction. If you were standing in the back of a pickup moving at 60 mph, and you threw a ball out the back of that pickup at 40 mph, the ball would still be going forward at 20 mph.
This is similar to what happens in the turbine: the fluid is flung out the back in one direction, but not as fast as it was originally travelling in the other direction.

At these speeds, the fluid actually strikes the back sides of the stator blades, causing the stator to freewheel on its one-way clutch so that it does not hinder the fluid moving through it.

Benefits and Weak Points

In addition to the very important job of allowing a car to come to a complete stop without stalling the engine, the torque converter actually gives the car more torque when you accelerate out of a stop. Modern torque converters can multiply the torque of the engine by two to three times. This effect only happens when the engine is turning much faster than the transmission.

At higher speeds, the transmission catches up to the engine, eventually moving at almost the same speed. Ideally, though, the transmission would move at exactly the same speed as the engine, because this difference in speed wastes power. This is part of the reason why cars with automatic transmissions get worse gas mileage than cars with manual transmissions.

To counter this effect, some cars have a torque converter with a lockup clutch. When the two halves of the torque converter get up to speed, this clutch locks them together, eliminating the slippage and improving efficiency.
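Returning to the friction clutch described earlier: the torque such a clutch can carry without slipping is commonly estimated with the uniform-wear relation T = n · mu · F · r_mean. The sketch below uses that standard relation; the clamp load, facing dimensions and friction coefficient are illustrative assumptions, not values taken from this text.

```python
def clutch_torque_capacity(clamp_force_n, mu, r_outer_m, r_inner_m, n_faces=2):
    """Uniform-wear estimate of the torque a friction clutch can carry.

    T = n * mu * F * r_mean, with r_mean = (r_o + r_i) / 2.
    A single-disc clutch has two friction faces (disc/flywheel and
    disc/pressure-plate), hence n_faces=2.
    """
    r_mean = (r_outer_m + r_inner_m) / 2.0
    return n_faces * mu * clamp_force_n * r_mean

# Example numbers (illustrative only): 4.5 kN spring clamp load,
# organic facing with mu ~ 0.3, 240 mm / 160 mm facing diameters.
t_max = clutch_torque_capacity(4500.0, 0.3, 0.120, 0.080)
print(f"Approximate torque capacity: {t_max:.0f} N*m")
# The clutch transmits engine torque without slip only while the
# engine torque stays below this figure.
```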

Graduation Project Foreign-Language Translation

ADDRESSING PROCESS PLANNING AND VERIFICATION ISSUES WITH MTCONNECT

Athulan Vijayaraghavan, Lucie Huet, and David Dornfeld
Department of Mechanical Engineering, University of California, Berkeley, CA 94720-1740

William Sobel
Artisanal Software, Oakland, CA 94611

Bill Blomquist and Mark Conley
Remmele Engineering Inc., Big Lake, MN

Publication date: 06-01-2009 (Precision Manufacturing Group series)

KEYWORDS
Process planning verification, machine tool interoperability, MTConnect.

ABSTRACT
Robust interoperability methods are needed in manufacturing systems to implement computer-aided process planning algorithms and to verify their effectiveness. In this paper we discuss applying MTConnect, an open-source standard for data exchange in manufacturing systems, to two specific issues in process planning and verification. We use data from an MTConnect-compliant machine tool to estimate the cycle time required for machining complex parts on that machine. MTConnect data is also used in verifying the conformance of toolpaths to the required part features, by comparing the features created by the actual tool positions to the required part features using CAD tools. We demonstrate the capabilities of MTConnect in easily enabling process planning and verification in an industrial environment.

INTRODUCTION
Automated process planning methods are a critical component in the design and planning of manufacturing processes for complex parts. This is especially the case with high-speed machining, as the complex interactions between the tool and the workpiece necessitate careful selection of the process parameters and the toolpath design. However, to improve the effectiveness of these methods, they need to be integrated tightly with machines and systems in industrial environments.
To enable this, we need robust interoperability standards for data exchange between the different entities in manufacturing systems. In this paper, we discuss using MTConnect, an open-source standard for data exchange in manufacturing systems, to address issues in process planning and verification in machining. We discuss two examples of using MTConnect for better process planning: in estimating the cycle time for high-speed machining, and in verifying the effectiveness of toolpath planning for machining complex features. As MTConnect standardizes the exchange of manufacturing process data, process planning applications can be developed independent of the specific equipment used (Vijayaraghavan, 2008). This allowed us to develop the process planning applications and implement them in an industrial setting with minimal overhead. The experiments discussed in this paper were developed at UC Berkeley and implemented at Remmele Engineering Inc.

The next section presents a brief introduction to MTConnect, highlighting its applicability in manufacturing process monitoring. We then discuss two applications of MTConnect: in computing cycle time estimates and in verifying toolpath planning effectiveness.

MTCONNECT
MTConnect is an open software standard for data exchange and communication between manufacturing equipment (MTConnect, 2008a). The MTConnect protocol defines a common language and structure for communication in manufacturing equipment, and enables interoperability by allowing access to manufacturing data using standardized interfaces. MTConnect does not define methods for data transmission or use, and is not intended to replace the functionality of existing products and/or data standards. It enhances the data acquisition capabilities of devices and applications, moving towards a plug-and-play environment that can reduce the cost of integration. MTConnect is built upon prevalent standards in the manufacturing and software industries, which maximizes the number of tools available for its implementation and provides a high level of interoperability with other standards and tools in these industries.

MTConnect is an XML-based standard, and messages are encoded using XML (eXtensible Markup Language), which has been used extensively as a portable way of specifying data interchange formats (W3C, 2008). A machine-readable XML schema defines the format of MTConnect messages and how the data items within those messages are represented. The version of the MTConnect standard defining the schema that was current at the time of publication is given in (MTConnect, 2008b).

The MTConnect protocol includes the following information about a device:
- Identity of the device
- Identity of all the independent components of the device
- Design characteristics of the device
- Data occurring in real or near real time in the device that can be utilized by other devices or applications. The types of data that can be addressed include:
  - Physical and actual device design data
  - Measurement or calibration data
  - Near-real-time data from the device

Figure 1 shows an example of a data-gathering setup using MTConnect. Data is gathered in near-real time from a machine tool and from thermal sensors attached to it. The data stored by the MTConnect protocol for this setup is shown in Table 1. Specialized adapters are used to parse the data from the machine tool and from the sensor devices into a format that can be understood by the MTConnect agent, which in turn organizes the data into the MTConnect XML schema. Software tools can be developed which operate on the XML data from the agent.
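As a sketch of such a tool, the snippet below reads an agent's "current" response and pulls out a few sample values. The element names follow the general shape of the MTConnect Streams schema (DeviceStream / ComponentStream / Samples), but exact tags, attributes and namespaces vary by version, and the agent URL is an assumption; treat this as an assumption-laden illustration rather than a reference client.

```python
import urllib.request
import xml.etree.ElementTree as ET

def local_name(tag):
    """Strip the XML namespace, e.g. '{urn:mtconnect...}Position' -> 'Position'."""
    return tag.rsplit('}', 1)[-1]

def read_samples(agent_url="http://localhost:5000/current"):
    # Fetch the agent's snapshot of the most recent value of every data item.
    with urllib.request.urlopen(agent_url) as resp:
        root = ET.fromstring(resp.read())
    samples = {}
    for elem in root.iter():
        # Keep the latest reading for a few named sample types (assumed names).
        if local_name(elem.tag) in ("Position", "SpindleSpeed", "PathFeedrate"):
            name = elem.get("name") or elem.get("dataItemId")
            samples[name] = elem.text
    return samples

if __name__ == "__main__":
    print(read_samples())
```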
Since the XML schema is standardized, the software tools can be blind to the specific configuration of the equipment from which the data is gathered.

FIGURE 1: MTCONNECT SETUP.

TABLE 1: MTCONNECT PROTOCOL INFORMATION FOR THE MACHINE TOOL IN FIGURE 1.

Device identity: "3-Axis Milling Machine"
Device components: 1 X axis; 1 Y axis; 1 Z axis; 2 thermal sensors
Device design characteristics: X-axis travel: 6"; Y-axis travel: 6"; Z-axis travel: 12"; max spindle speed: 24000 rpm
Data occurring in device: tool position: (0, 0, 0); spindle speed: 1000 rpm; alarm status: OFF; temp sensor 1: 90 °F; temp sensor 2: 120 °F

An added benefit of XML is that it is a hierarchical representation, and this is exploited by designing the hierarchy of the MTConnect schema to resemble that of a conventional machine tool. The schema itself functions as a metaphor for the machine tool and makes the parsing and encoding of messages intuitive. Data items are grouped based on their logical organization, not on their physical organization. For example, Figure 2 shows the XML schema associated with the setup shown in Figure 1. Although the temperature sensors operate independently of the machine tool (with their own adapter), the data from the sensors is associated with specific components of the machine tool, and hence the temperature data is a member of the hierarchy of the machine tool. The next section discusses applying MTConnect to estimating cycle time in high-speed machining.

FIGURE 2: MTCONNECT HIERARCHY.

ACCURATE CYCLE TIME ESTIMATES
In high-speed machining processes there can be discrepancies between the actual feedrates during cutting and the required (or commanded) feedrates. These discrepancies depend on the design of the controller used in the machine tool and on the toolpath geometry. While there have been innovative controller designs that minimize the feedrate discrepancy (Sencer, 2008), most machine tools used in conventional industrial facilities have commercial off-the-shelf controllers that show some discrepancy in the feedrates, especially when machining complex geometries at high speeds. There is a need for simple tools to estimate the discrepancy under these machining conditions. Apart from influencing the surface quality of the machined parts, feedrate variation can lead to inaccurate estimates of the cycle time during machining. An accurate estimate of the cycle time is a critical requirement in planning complex machining operations in manufacturing facilities. The cycle time is needed both for scheduling the part in a job shop and for costing the part. Inaccurate cycle time estimates (especially when the feed is overestimated) can lead to uncompetitive estimates of the cost of the part and unrealistic estimates of the cycle time.

Related Work
de Souza and Coelho (2007) presented a comprehensive set of experiments to demonstrate feedrate limitations during the machining of freeform surfaces. They identified the causes of feedrate variation as dynamic limitations of the machine, the block processing time of the CNC, and the feature size in the toolpaths. Significant discrepancies were observed between the actual and commanded feeds when machining with linear interpolation (G01). The authors used a custom monitoring and data-logging system to capture the feedrate variation in the CNC controller during machining. Sencer et al. (2008) presented feed-scheduling algorithms to minimize the machining time for 5-axis contour machining of sculptured surfaces.
The algorithm optimized the feedrate profile for minimum machining time while observing constraints on the smoothness of the feedrate and on the acceleration and jerk of the machine tool drives. This follows earlier work on minimizing the machining time in 3-axis milling using similar feed-scheduling techniques (Altintas, 2003). While these methods are very effective in improving the cycle time of complex machining operations, they can be difficult to apply in conventional factory environments, as they require specialized control systems. The methods we discuss in this paper do not address the optimization of cycle time during machining. Instead, we provide simple tools to estimate the discrepancy in feedrates during machining and use this to estimate the cycle time for arbitrary parts.

Methodology
During G01 linear interpolation, the chief determinant of the maximum achievable feedrate is the spacing between adjacent points (the G01 step size). We focus on G01 interpolation as it is used extensively when machining simultaneously in three or more axes. The cycle time for a machine tool to machine an arbitrary part (using linear interpolation) is estimated from the maximum feed achievable by the machine tool at a given path spacing. MTConnect is a key enabler in this process, as it standardizes both the data collection and the analysis.

The maximum achievable feedrate is estimated using a standardized test G-code program. This program machines a simple shape with progressively varying G01 path spacings. The program is executed on an MTConnect-compliant machine tool, and the position and feed data from the machine tool are logged in near-real time. The feedrate during cutting at the different spacings is then analyzed, and a machine tool "calibration" curve is developed, which identifies the maximum feedrate possible at a given path spacing.

FIGURE 3: METHODOLOGY FOR ESTIMATING CYCLE TIME.

Conventionally, the cycle time for a given toolpath is estimated by summing the time taken for the machine tool to process each block of G-code, calculated as the distance travelled in that block divided by the feedrate of the block. For a given arbitrary part G-code to be executed on a machine tool, the cycle time is estimated using the calibration curve as follows. For each G01 block executed in the program, the size of the step is calculated (this is the distance between the points the machine tool is interpolating) and the maximum feedrate possible at this step size is looked up from the calibration curve. If the maximum feedrate is smaller than the commanded feedrate, this line of the G-code is modified to machine at the (lower) actual feedrate; if the maximum feedrate is greater, the line is left unmodified. This is performed for all G01 lines in the program, and finally the cycle time of the modified G-code program is estimated in the conventional way. This methodology is shown in Figure 3. The next section discusses an example applying this methodology to a machine tool.

Results
We implemented the cycle time estimation method on a 3-axis machine tool with a conventional controller. The calibration curve of this machine tool was computed by machining a simple circular feature at nine progressively increasing linear spacings. We confirmed that the radius of the circle (that is, the curvature of the toolpath) had no effect on the achieved feedrate by testing with circular features of several radii and observing the same maximum feedrate in all cases.
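Once the calibration curve is known, the estimator itself reduces to a short script. Below is a minimal sketch, assuming the G-code has already been parsed into (step size, commanded feed) pairs for each G01 block, and using a linear calibration of the form max_feed = k * spacing with the fitted constant reported for this machine further below.

```python
def estimate_cycle_time(blocks, k=14847.0):
    """blocks: iterable of (step_size_in, commanded_feed_in_per_min).

    For every G01 block the feed is clamped to the maximum the controller
    can actually achieve at that path spacing, then the block times are
    summed. Returns the estimated cycle time in seconds.
    """
    total_min = 0.0
    for step, feed_cmd in blocks:
        feed_max = k * step             # calibration-curve lookup
        feed = min(feed_cmd, feed_max)  # controller slows down if necessary
        total_min += step / feed
    return total_min * 60.0

# Example: 2000 G01 steps of 0.004 in commanded at 100 in/min. The maximum
# feed at 0.004 in spacing is ~59 in/min, so the commanded feed is unreachable
# and the estimate comes out well above the theoretical 4.8 s.
blocks = [(0.004, 100.0)] * 2000
print(f"Estimated cycle time: {estimate_cycle_time(blocks):.1f} s")
```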
Table 2 lists the maximum achievable feedrate at each path spacing when using a circle of radius 1". The table shows that the maximum achievable feedrate is a linear function of the path spacing. Using a linear fit, the calibration curve for this machine tool can be estimated; Figure 4 plots this curve. The relationship between the feedrate and the path spacing is linear because the block processing time of the machine tool controller is constant at all feedrates. The block processing time determines the maximum feedrate achievable for a given spacing, as it is the time the machine tool takes to interpolate one block of G-code. As the path spacing (or interpolatory distance) increases linearly, the speed at which it can be interpolated also increases linearly. The relationship for the data in Figure 4 is:

MAX FEED (in/min) = 14847 * SPACING (in)

TABLE 2: MAXIMUM ACHIEVABLE FEEDRATE AT VARYING PATH SPACINGS.

We also noticed that the maximum feedrate for a given spacing was unaffected by the commanded feedrate, as long as it was less than the commanded feedrate. This means that it was adequate to compute the calibration curve by commanding the maximum possible feedrate in the machine tool.

FIGURE 4: CALIBRATION CURVE FOR MACHINE TOOL.

Using this calibration curve, we estimated the cycle time for machining an arbitrary feature on this machine tool. The feature we used was a 3D spiral with a smoothly varying path spacing, shown in Figure 5. The spiral path is described exclusively using G01 steps and involves simultaneous 3-axis interpolation. The path spacing of the G-code blocks for the feature is shown in Figure 6.

FIGURE 5: 3D SPIRAL FEATURE.
FIGURE 6: PATH SPACING VARIATION WITH G-CODE LINE FOR SPIRAL FEATURE.

Figure 7 shows the predicted feedrate, based on the calibration curve, for machining the spiral shape at 100 inches/min, compared to the actual feedrate during machining. The feedrate predicted by the calibration curve matches the actual feedrate very closely. The linear relationship between path spacing and maximum feedrate can also be observed by comparing Figures 6 and 7.

FIGURE 7: PREDICTED FEEDRATE COMPARED TO MEASURED FEEDRATE FOR SPIRAL FEATURE AT 100 IN/MIN.
FIGURE 8: ACTUAL CYCLE TIME TO MACHINE SPIRAL FEATURE AT DIFFERENT FEEDRATES.

The cycle time for machining the spiral at different commanded feedrates was also estimated using the calibration curve. Figure 8 shows the actual cycle time taken to machine the spiral feature at different feedrates. Notice that the trend is non-linear: an increase in feed does not yield a proportional decrease in cycle time, implying that there is some feedrate discrepancy at high feeds. Figure 9 compares the theoretical cycle time at different feedrates to the actual cycle time and the model-predicted cycle time. The model predictions match the measured cycle times very closely (within 1%). Significant discrepancies are seen between the theoretical cycle time and the actual cycle time when machining at high feedrates. These discrepancies can be explained by the difference between the block processing time of the controller and the time spent on each block of G-code during machining.
At high feedrates, the time spent on each block is shorter than the block processing time, so the controller slows down the interpolation, resulting in a discrepancy in the cycle time.

These results demonstrated the effectiveness of using the calibration curve to estimate the feed, and ultimately to estimate the cycle time. This method can be extrapolated to multi-axis machining by measuring the feedrate variation for linear interpolation in specific axes. We can also specifically correlate the feed in one axis to the path spacing instead of to the overall feedrate.

FIGURE 9: ACTUAL OBSERVED CYCLE TIMES AND PREDICTED CYCLE TIMES COMPARED TO THE NORMALIZED THEORETICAL CYCLE TIMES FOR MACHINING THE SPIRAL FEATURE AT DIFFERENT FEEDRATES.

TOOL POSITION VERIFICATION
MTConnect data can also be used in verifying toolpath planning for the machining of complex parts. Toolpaths for machining complex features are usually designed using specialized CAM algorithms, and traditionally the effectiveness of the toolpaths in creating the required part features is verified either by computer simulation of the toolpath or by surface metrology of the machined part. The former approach is not very accurate, as the toolpath commanded to the machine tool may not match the actual toolpath travelled during machining. The latter approach, while accurate, tends to be time-consuming and expensive, and requires the analysis and processing of 3D metrology data (which can be complex). Moreover, errors in the features of a machined part are not solely due to toolpath errors, and using metrology data for toolpath verification may obfuscate toolpath errors with process dynamics errors. In a previous work we discussed a simple way to verify toolpath planning by overlaying the actual tool positions against the CAM-generated tool positions (Vijayaraghavan, 2008). We now discuss a more intuitive method to verify the effectiveness of machining toolpaths, where data from MTConnect-compliant machine tools is used to create a solid model of the machined features to compare with the desired features.

Related Work
The manufacturing community has focused extensively on developing process planning algorithms for the machining of complex parts. Elber (1995), in one of the earliest works in the field, discussed algorithms for toolpath generation for 3- and 5-axis machining. Wright et al. (2004) discussed toolpath generation algorithms for the finish machining of freeform surfaces; the algorithms were based on the geometric properties of the surface features. Vijayaraghavan et al. (2009) discussed methods to vary the spacing of raster toolpaths and to optimize the orientation of workpieces in freeform surface machining. The efficiency of these methods was validated primarily by metrology and testing of the machined part.

Methodology
To verify toolpath planning effectiveness, we log the actual cutting tool positions during machining from an MTConnect-compliant machine tool, and use the positions to generate a solid model of the machined part. The discrepancy in the features traced by the actual toolpath relative to the required part features can be computed by comparing these two solid models.
The solid model of the machined part can be obtained from the tool positions as follows (a voxel-based sketch illustrating these steps appears at the end of this section):
- Create a 3D model of the tool
- Create a 3D model of the stock material
- Compute the swept volume of the tool as it traces the tool positions (using the logged data)
- Subtract the swept volume of the tool from the stock material

The remaining volume of material is a solid model of the actual machined part. The two models can then be compared using 3D Boolean difference (subtraction) operations.

Results
We implemented this verification scheme by logging the cutter positions from an MTConnect-compliant 5-axis machine tool. The procedure for obtaining the solid model from the tool positions was implemented in Vericut. The two models were compared using a Boolean diff operation in Vericut, which identified the regions of the actual machined part that differed from the required solid model. An example applying this method to a feature is shown in Figure 10.

FIGURE 10: A – SOLID MODEL OF REQUIRED PART; B – SOLID MODEL OF PART FROM TOOL POSITIONS, SHOWING DISCREPANCIES BETWEEN ACTUAL AND REQUIRED PART FEATURES. SHADED REGIONS DENOTE THE DIFFERENCE IN MATERIAL REMOVAL.

DISCUSSION AND CONCLUSIONS
MTConnect makes it very easy to standardize data capture from disparate sources and to develop common planning and verification applications. The importance of standardization cannot be overstated here: while it has always been possible to get process data from machine tools, doing so is generally cumbersome and time-consuming because different machine tools require different methods of accessing data. Data analysis was also challenging to standardize, as the data came in different formats and custom subroutines were needed to process and analyze data from different machine tools. With MTConnect the data gathering and analysis process is standardized, resulting in significant cost and time savings. This allowed us to develop the verification tools independently of the machine tools they were applied to. It also allowed us to rapidly deploy these tools in an industrial environment without any overheads (especially from the machine tool sitting idle). The toolpath verification was performed with minimal user intervention on a machine which was being actively used in a factory. The only setup needed was to initially configure the machine tool to output MTConnect-compliant data; since this is a one-time activity, it has an almost negligible impact on the long-term utilization of the machine tool.

Successful implementation of data capture and analysis applications over MTConnect requires a robust characterization of the data capture rates and the latency of the streaming information. Current implementations of MTConnect run over Ethernet, and a data rate of about 10–100 Hz was observed under normal conditions (with no network congestion). While this is adequate for geometric analysis (such as the examples in this paper), it is not adequate for real-time process monitoring applications such as sensor data logging. More work is needed in developing the MTConnect software libraries so that acceptable data rates and latencies can be achieved.

One of the benefits of MTConnect is that it can act as a bridge between academic research and industrial practice. Researchers can develop tools that operate on standardized data and are no longer encumbered by specific data formats and requirements. The tools can then be easily applied in industrial settings, as the framework required to implement the tools in a specific machine or system is already in place.
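The pipeline listed under Methodology above can be illustrated crudely on a voxel grid: voxelize the stock, subtract the volume swept by the tool tip along the logged positions, and diff the result against the required part. The authors used a solid modeller (Vericut) for this; the numpy sketch below only conveys the idea, with an assumed ball-nose tool, invented dimensions and an invented 0.3 mm path drift.

```python
import numpy as np

RES = 0.5                     # mm per voxel
NX, NY, NZ = 100, 100, 40     # 50 x 50 x 20 mm stock block

def carve(stock, positions, tool_radius=3.0):
    """Clear every voxel whose centre lies within tool_radius of any logged
    tool-tip position -- a coarse approximation of the swept volume."""
    xs = (np.arange(NX) + 0.5) * RES
    ys = (np.arange(NY) + 0.5) * RES
    zs = (np.arange(NZ) + 0.5) * RES
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    for px, py, pz in positions:
        stock[(X - px)**2 + (Y - py)**2 + (Z - pz)**2 <= tool_radius**2] = False
    return stock

# Nominal (CAM) path vs. logged path with a hypothetical 0.3 mm drift in Y.
nominal = [(x, 25.0, 18.0) for x in np.linspace(5.0, 45.0, 200)]
logged  = [(x, 25.3, 18.0) for x in np.linspace(5.0, 45.0, 200)]

required = carve(np.ones((NX, NY, NZ), dtype=bool), nominal)  # from CAD/CAM
actual   = carve(np.ones((NX, NY, NZ), dtype=bool), logged)   # from MTConnect log

# Boolean diff: voxels where the machined part differs from the required part.
diff = actual ^ required
print(f"Discrepant material: {diff.sum() * RES**3:.1f} mm^3")
```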
Greater use of interoperability standards by the academic community in manufacturing research will lead to faster dissemination of research results and closer collaboration with industry.

ACKNOWLEDGEMENTS
We thank the reviewers for their valuable comments. MTConnect is supported by AMT, The Association For Manufacturing Technology. We thank Armando Fox from the RAD Lab at UC Berkeley and Paul Warndorf from AMT for their input. Research at UC Berkeley is supported by the Machine Tool Technology Research Foundation and the industrial affiliates of the Laboratory for Manufacturing and Sustainability.

REFERENCES
Altintas, Y., and Erkorkmaz, K., 2003, "Feedrate Optimization for Spline Interpolation in High Speed Machine Tools", CIRP Annals – Manufacturing Technology, 52(1), pp. 297–302.
de Souza, A. F., and Coelho, R. T., 2007, "Experimental Investigation of Feedrate Limitations on High Speed Milling Aimed at Industrial Applications", Int. J. of Adv. Manuf. Tech., 32(11), pp. 1104–1114.
Elber, G., 1995, "Freeform Surface Region Optimization for 3-Axis and 5-Axis Milling", Computer-Aided Design, 27(6), pp. 465–470.
MTConnect, 2008b, MTConnect Standard.
Sencer, B., Altintas, Y., and Croft, E., 2008, "Feed Optimization for Five-axis CNC Machine Tools with Drive Constraints", Int. J. of Mach. Tools and Manuf., 48(7), pp. 733–745.
Vijayaraghavan, A., Sobel, W., Fox, A., Warndorf, P., and Dornfeld, D. A., 2008, "Improving Machine Tool Interoperability with Standardized Interface Protocols", Proceedings of ISFA.
Vijayaraghavan, A., Hoover, A., Hartnett, J., and Dornfeld, D. A., 2009, "Improving Endmilling Surface Finish by Workpiece Rotation and Adaptive Toolpath Spacing", Int. J. of Mach. Tools and Manuf., 49(1), pp. 89–98.
World Wide Web Consortium (W3C), 2008, "Extensible Markup Language (XML)".
Wright, P. K., Dornfeld, D. A., Sundararajan, V., and Misra, D., 2004, "Tool Path Generation for Finish Machining of Freeform Surfaces in the Cybercut Process Planning Pipeline", Trans. of NAMRI/SME, 32, pp. 159–166.

Chongqing Jiaotong University Graduation Project (Translation) Work Handbook

School of Foreign Languages Graduation Project (Translation) Work Handbook

Title:
Major:
Year (Class):
Student No.:
Name:
Supervisor:
Date:

Contents
Pledge
School of Foreign Languages graduation project (translation) task statement
School of Foreign Languages graduation project (translation) proposal report
School of Foreign Languages graduation project (translation) task overview
School of Foreign Languages graduation project (translation) deliverable format requirements
School of Foreign Languages graduation project (translation) supervisor assessment criteria
School of Foreign Languages graduation project (translation) cross-review criteria
School of Foreign Languages graduation project (translation) defence scoring criteria
School of Foreign Languages graduation project (translation) defence record and overall grade
School of Foreign Languages graduation project (translation) supervisor work standards

Pledge
To complete this graduation translation successfully, I solemnly pledge:
1. To study hard, research diligently, practise actively and strive to innovate, and to complete the graduation translation task and the translation report to a high standard;
2. Under the supervisor's guidance, and according to the requirements of the selected topic, to draw up a work schedule, accept the supervisor's guidance with an open mind, carefully consult the literature, collect materials, and write a sound proposal report.

To cooperate actively with the guidance and checks of the supervisor and the School, and to carry out the graduation translation well; 3. To strictly observe the University's and the School of Foreign Languages' regulations and work requirements on graduation projects (translations), and to complete the corresponding work within the prescribed time; 4. To translate a source text that has not been translated by others, being its first translator, and never to practise fraud in the course of the translation, accepting full responsibility otherwise.

Signature of pledger:            Date:

Graduation Project (Translation) Task Statement
Translation title:
(Task period: from ____ (date), 20__ to ____ (date), 20__)
School of Foreign Languages    Major: ________    Class:
Student name:    Student No.:    Supervisor:    Head of Teaching and Research Section:    School leadership:
Notes: 1. This task statement is to be completed by the supervisor.

2. This task statement must be issued to the student no later than one week before the graduation project begins.

Schedule for the Student's Completion of the Graduation Project (Translation)
Notes: 1. This form is to be completed by the supervisor.

2. One copy of this form per student; it serves as the basis for checking the progress of the graduation project (translation); 3. Please mark the schedule with "—" in the corresponding positions.

Progress Check Form for the Graduation Project (Translation)
Notes: 1. This form is to be completed conscientiously by the supervisor; 2. The "discipline" column is to be filled in according to the student's actual compliance; 3. The "task completion" column records whether the student completed the tasks on schedule and to the required quality; 4. For students who violate discipline or fail to complete their tasks on time, the supervisor may, depending on the severity, issue a warning or recommend that the student not be allowed to take part in the defence.

Graduation Project English Translation

INTRODUCTION
The lathe is the oldest machine tool invented, starting with the Egyptian tree lathe. In the Egyptian tree lathe, one end of a rope wound round the workpiece was attached to a flexible branch of a tree, while the other end was pulled by the operator, thus giving a rotary motion to the workpiece. This primitive device has evolved over the last two centuries into one of the most fundamental and versatile machine tools, with a large number of uses in all manufacturing shops. The principal form of surface produced on a lathe is the cylindrical surface. This is achieved by rotating the workpiece while a single-point cutting tool removes material by traversing in a direction parallel to the axis of rotation, an operation termed turning, as shown in Fig. 4.1. The popularity of the lathe is due to the fact that a large variety of surfaces can be produced on it.
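The turning kinematics just described (workpiece rotation plus axial tool traverse) reduce to a few standard relations between cutting speed, spindle speed and feed. A minimal sketch follows; the example values are arbitrary, not taken from this text.

```python
import math

def turning_parameters(diameter_mm, cutting_speed_m_min, feed_mm_rev, length_mm):
    """Spindle speed, axial feed rate and cutting time for one turning pass."""
    rpm = (cutting_speed_m_min * 1000.0) / (math.pi * diameter_mm)
    feed_rate = feed_mm_rev * rpm          # mm/min along the axis of rotation
    time_min = length_mm / feed_rate
    return rpm, feed_rate, time_min

rpm, f, t = turning_parameters(diameter_mm=50, cutting_speed_m_min=120,
                               feed_mm_rev=0.2, length_mm=300)
print(f"Spindle: {rpm:.0f} rev/min, feed {f:.0f} mm/min, time {t:.2f} min")
```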

交通毕设外文翻译

交通毕设外文翻译

City Viaduct Entrance Ramp Traffic Character Study
Authors: LI Yingshua, XU Hui

The rapid development of the automobile industry and the relative lag of road construction constitute a prominent contradiction all over the world, particularly in most large cities. To cope with it, elevated roads have been built in many cities both at home and abroad. However, traffic jams frequently appear on elevated roads immediately after their construction is completed. This awkward situation mainly results from planning flaws or unsuitable control, apart from the drastic increase in transportation demand. Elevated roads, ramps and ground roads are closely interconnected in three-dimensional urban traffic networks. Mathematical modeling and numerical simulation were conducted for sections of elevated roads and for the interaction of elevated roads with intersections on the ground. After analyzing the complicated dynamic behavior on the elevated roads, suggestions are put forward for transportation planning and management. The contents of the dissertation are as follows:

(1) By combining manual surveys with video recording, a series of field measurements was conducted on several sections of the elevated road system in Shanghai, and the main characteristics of the traffic flow were obtained. Equations were established that are suitable for free flow and congested flow respectively, and two important parameters, namely the free-flow speed and the jam density, were thus determined. The fundamental diagram obtained from the measured data reveals three distinct traffic phases.

(2) With the Wuning off-ramp of the Inner Ring elevated road in Shanghai as a representative case, meticulous observations were carried out on the traffic flow at the intersection near the off-ramp. It was found that the squeezing effect of right-turning vehicles from the intersecting main road on the straight movement of vehicles from the off-ramp is the main cause of the existing traffic jams. A modified one-dimensional model was therefore developed, and numerical simulation was performed with special attention to the disturbing effect of right-turning vehicles. The results agree quite well with the observed data. The analysis shows that the squeezing effect, which worsens with the increasing number of right-turning vehicles, is the principal cause of congested traffic at certain intersections; the inappropriate design and construction of ramps in front of busy crossings increases the congestion. Thus, installing right-turn traffic lights may be a promising way of solving the problem.

(3) There are severe problems in the operation of the elevated road system in Shanghai, such as frequent congestion or jams on the elevated roads and their ramps. For this reason, measures for controlling the on-ramp traffic with timed signals are suggested in this dissertation, and a reasonable timing scheme is recommended for signal control. On the basis of an anisotropic hydrodynamic traffic model developed by our research group, a ramp-effect term was introduced into the motion equation, and the traffic flows on the elevated road sections near the on-ramp were numerically simulated. The results show that signal control of the on-ramp helps improve traffic on the elevated roads.
We also found the best timing scheme after comparing six choices of signaling period.

(4) The alternating-merge regulation was first implemented at the merging locations of on-ramps on Shanghai's elevated roads, and it is studied theoretically in this dissertation. Different traffic flow models were established for the cases with and without the alternating rule, based on the FI (Fukui-Ishibashi) cellular automaton traffic model. With these models, the traffic behavior at the merging location of the on-ramp was investigated, and some conclusions were drawn. When there are many inflowing vehicles on the elevated road and the ramp, the traffic situation on the elevated road is much better with the alternating regulation than without it; when there are fewer inflowing vehicles, the situation on the elevated road remains essentially unchanged in the two cases. Vehicles on the elevated road and the on-ramp tend to move forward in a 1:1 proportion in congested or free-flow states, and often in a 2:1 proportion in medium-speed flow.

(5) Weaving areas often turn into bottlenecks on elevated roads. On the basis of the NS (Nagel-Schreckenberg) cellular automaton traffic model, a weaving section with a one-lane main road was simulated and analyzed. For free traffic flow, weaving operations have almost no influence on the system, even when the weaving length is increased. On the other hand, when the traffic flow is congested, weaving conflicts have negative effects on the system, and the traffic situation improves as the weaving length increases. Our simulation results suggest that the length of weaving sections need not be increased indiscriminately; a suitable medium value can be chosen to obtain an optimal traffic situation.

Finally, the prospects for future advances in the research of urban traffic flows in China are briefly reviewed.

Current control of city expressway on-ramps consists mainly of local, independent control of single ramps and, in a few cases, coordinated control of several ramps, and most existing research on ramp control addresses only the independent control of single ramps. Yet many city expressway ramps are very close together; in many cases the spacing is only 200–300 m, and the traffic flows of such closely spaced ramps interfere with each other severely. Under these conditions it is unreasonable to control the entrance ramps independently. Based on actual test data from the Shanghai city expressway, this paper studies the traffic flow characteristics of two closely spaced (less than 450 m apart) entrance ramps.

Existing traffic control measures focus on the elevated road itself from a macroscopic viewpoint and give insufficient microscopic consideration to the connecting sections. Because real-time traffic information collection technology in our country still needs to be improved, the effectiveness of ramp control as a means of avoiding regional congestion and queuing is limited.
For exit ramps, given the characteristics of the connecting road and its traffic flow, traffic management has employed various means of traffic organization, including dividing lane functions and prohibiting certain movements, to hold back queues, and the improvement has been relatively immediate. But for the different road sections connecting to exit ramps, there has so far been no systematic theoretical analysis of which form of traffic organization to adopt. How to join the continuous flow on the expressway and the interrupted flow on the ground organically, make full use of resources, and optimize the system as a whole is of special significance for the construction and operation of the elevated road and ground road system.

The presence of a ramp becomes a limiting factor for the operation of the connecting section and its approach. According to the principle of balancing traffic demand and supply, the capacity of the road connecting to the exit ramp should match the flows of the ramp and the ground streets, so that traffic can discharge in time and the system is maintained in a relatively stable state. For connecting sections with movement prohibitions, traffic operation resembles that of an ordinary intersection, and no special analysis is made here.

Traffic control measures distribute the conflicting traffic streams over different time intervals by installing signals, thereby eliminating conflict points on the same-elevation roadway in the time dimension. Grade-separation measures arrange the conflicting streams on roadways at different elevations by building engineering structures, eliminating conflict points over all time intervals. Roundabout measures convert intersecting streams into weaving streams by installing a large central island around which vehicles circulate, eliminating conflict points on the same-elevation roadway at all times; detour measures likewise eliminate conflict points by converting conflicts into weaving movements.

For the traffic streams at city road intersections, measures to eliminate or reduce conflict points have generally been adopted, whereas studies of, and countermeasures against, the influence of weaving, merging and diverging traffic are relatively few. Where traffic volumes are modest, the effects of weaving, diverging and merging remain acceptable, and the urgency of studying their influence and countermeasures is not obvious. But as urban traffic grows, and especially with the construction of city expressways, it becomes necessary to study further the influence of weaving, diverging and merging, and to investigate measures that improve the flow of traffic.

The city road network considered here is generally dense, with relatively small spacings; in many old urban networks the distance between intersections is about 200 to 600 m, and the influence of mixed traffic flow on network capacity is increasingly serious. With the construction of city expressways, elevated expressways built over existing road alignments make use of the existing road width and avoid relocating large numbers of residents; this mode of construction is typical of large cities, a representative example being the Shanghai Inner Ring viaduct. With the construction of an elevated expressway, ramps are provided according to traffic demand to connect the rapid traffic above with the slow traffic on the ground; but as existing examples show, many traffic problems arise precisely in the conversion between ramp traffic and ground traffic.
In view of the above problems, this paper makes a preliminary study of the capacity of mixed traffic flow under conditions of small road-network spacing and of measures for its improvement; networks with larger spacing are treated separately. With the development of city roads, road builders have gradually realized that, under existing network conditions, and for various historical reasons, there is a gap between the planned network and the demands of expressway construction. Yet building city expressways is one of the important means of supporting urban development and solving transportation problems, so it is necessary to study how to improve road capacity under these various adverse conditions, that is, how to provide greater traffic capacity within limited construction conditions. Much research and many engineering examples already exist on signal control and grade-separated construction, and improvements to controlled intersections have been explored in depth. This paper is concerned with the matching between improved city expressway ramps and the surface intersections, and makes a preliminary study of it. More generally, the improvement measures referred to in this paper can be applied not only to expressway-ground crossing works, but also, by reference, to various heavily affected road sections and intersections.
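The kind of cellular-automaton simulation cited in items (4) and (5) above is compact enough to sketch. Below is a minimal Nagel-Schreckenberg single-lane ring with a signal-metered on-ramp cell and an off-ramp cell; every parameter (road length, densities, signal period, exit probability) is invented for illustration, and the dissertation's actual models add considerably more detail.

```python
import random

L, VMAX, P_SLOW = 200, 5, 0.3                  # cells, max speed, slowdown prob.
RAMP_CELL, EXIT_CELL = 100, 180                # assumed ramp/exit locations
SIGNAL_PERIOD, GREEN = 10, 4                   # assumed metering signal timing

def step(road, t, meter=True):
    """One parallel NS update: accelerate, avoid collision, randomize, move."""
    new = [-1] * L                             # -1 marks an empty cell
    for i, v in enumerate(road):
        if v < 0:
            continue
        gap = next(d - 1 for d in range(1, L + 1) if road[(i + d) % L] >= 0)
        v = min(v + 1, VMAX, gap)              # accelerate, but never crash
        if v > 0 and random.random() < P_SLOW:
            v -= 1                             # random slowdown
        new[(i + v) % L] = v
    green = (t % SIGNAL_PERIOD) < GREEN
    if (green or not meter) and new[RAMP_CELL] < 0:
        new[RAMP_CELL] = 0                     # metered on-ramp injection
    if new[EXIT_CELL] >= 0 and random.random() < 0.5:
        new[EXIT_CELL] = -1                    # off-ramp departure
    return new

road = [-1] * L
for i in random.sample(range(L), 30):          # initial density 0.15
    road[i] = random.randint(0, VMAX)
for t in range(500):
    road = step(road, t)
print(f"Mean-speed flow proxy: {sum(v for v in road if v > 0) / L:.3f}")
```

Comparing runs with meter=True and meter=False gives the qualitative effect of on-ramp signal control that item (3) reports.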

毕业设计外文文献翻译【范本模板】

毕业设计外文文献翻译【范本模板】

Graduation Project (Thesis) Foreign Material Translation
Department:    Major:    Class:    Name:    Student No.:
Source of foreign text:
Attachments: 1. Source text; 2. Translation

March 2013

Attachment 1:

A Rapidly Deployable Manipulator System
Christiaan J. J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator which is custom-tailored for a given task. This article describes two main aspects of such a system, namely the Reconfigurable Modular Manipulator System (RMMS) hardware and the corresponding control software.

1 Introduction

Robot manipulators can be easily reprogrammed to perform different tasks, yet the range of tasks that can be performed by a manipulator is limited by its mechanical structure.

For example, a manipulator well suited for precise movement across the top of a table would probably not be capable of lifting heavy objects in the vertical direction. Therefore, to perform a given task, one needs to choose a manipulator with an appropriate mechanical structure. We propose the concept of a rapidly deployable manipulator system to address the above-mentioned shortcomings of fixed-configuration manipulators.
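The core idea, assembling a task-tailored manipulator from a catalogue of modules, can be caricatured in a few lines. The module names and properties below are invented for illustration; the actual RMMS hardware and its software tools are described in the paper itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JointModule:
    name: str
    kind: str            # "revolute" or "prismatic"
    link_length_m: float
    payload_kg: float    # payload the module can support downstream

def assemble(modules):
    """Crude composition rule: reach is the sum of link lengths; payload
    is set by the weakest module in the chain."""
    return {
        "dof": len(modules),
        "reach_m": sum(m.link_length_m for m in modules),
        "payload_kg": min(m.payload_kg for m in modules),
    }

catalogue = [
    JointModule("R-heavy", "revolute", 0.40, 25.0),
    JointModule("R-light", "revolute", 0.30, 8.0),
    JointModule("P-slide", "prismatic", 0.50, 12.0),
]

# A task needing vertical lifting strength: favour the heavy modules.
arm = [catalogue[0], catalogue[2], catalogue[0]]
print(assemble(arm))   # {'dof': 3, 'reach_m': 1.3, 'payload_kg': 12.0}
```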

毕业设计外文翻译原文

毕业设计外文翻译原文

Int J Adv Manuf Technol (2014) 72:277–288
DOI 10.1007/s00170-014-5664-3

Workpiece roundness profile in the frequency domain: an application in cylindrical plunge grinding
Andre D. L. Batako & Siew Y. Goh
Received: 21 August 2013 / Accepted: 21 January 2014 / Published online: 14 February 2014
© Springer-Verlag London 2014

Abstract  In grinding, most control strategies are based on the spindle power measurement, but recently acoustic emission has been widely used for wheel wear and gap elimination. This paper explores a potential use of acoustic emission (AE) to detect workpiece lobes. This was achieved by sectioning and analysing the AE signal in the frequency domain. For the first time, the profile of the ground workpiece was predicted mathematically using key frequencies extracted from the AE signals. The results were validated against actual workpiece profile measurements. The relative shift of the wave formed on the surface of the part was expressed using the wheel-workpiece frequency ratio. A comparative study showed that the workpiece roundness profile can be monitored in the frequency domain using the AE signal during grinding.

Keywords  Plunge grinding . Roundness . Waviness . Frequency . Acoustic emission

1 Introduction

Grinding is mostly used as the last stage of a manufacturing process for fine finishing. However, recently, high efficiency deep grinding (HEDG) was introduced as a process that achieves high material removal rates exceeding 1,100 mm³/mm/s [1–5]. Grinding is mainly used to achieve high dimensional and geometrical accuracy. However, in cylindrical plunge grinding, vibration is a key problem in keeping tight tolerances and form accuracy (roundness) of ground parts.

Machine tools are designed and installed to have minimum vibration (with anti-vibration pads when required). Nevertheless, in grinding, the interaction between the wheel and the workpiece generates persistent vibration. This leads to variation of the forces acting in the contact zone, which in turn causes a variation in the depth of cut on the ground workpiece. Consequently, this creates waviness on the circumference of the workpiece. The resulting uneven profile on the workpiece surface modulates the grinding conditions of the following successive rotations; this is called the workpiece regenerative effect. The build-up of this effect can take place in grinding cycles of longer duration. Similar effects occur on the grinding wheel surface; however, that build-up is slow [6–9].

It is generally difficult to get a grinding wheel perfectly balanced manually, which is acceptable for general-purpose grinding. For precision grinding, automatic dynamic wheel-balancing devices are used. Though current grinding machines have automatic balancing systems to reduce the out-of-balance of grinding wheels, in actual grinding, forced vibration is still caused by dynamically unbalanced grinding wheels [10]. This is because any eccentricity in the rotating grinding wheel generates a vibratory motion.

The stiffness of the wheel spindle and the tailstock also affects the wheel-workpiece-tailstock subsystem, which oscillates due to the interaction of the wheel with the workpiece. In practice, the generated forced vibration is hard to eliminate completely. This type of vibration has a great influence on the formation of the workpiece profile. During the grinding process, the out-of-balance of the wheel behaves as a sinusoidal waveform that is imprinted on the workpiece surface.
This, as in the previous case, leads to variation of the depth of cut and creates low-frequency lobes around the workpiece; these lobes are the key target of the study presented here.

Other factors, such as the grinding parameters, have to be taken into consideration in the study of grinding vibration, because these aspects affect the stability of the process: the resulting workpiece profile is the combined effect of different types of vibration in grinding [7, 11]. The studies carried out by Inasaki, Tonou and Yonetsu showed that the grinding parameters have a strong influence on the amplitude and growth rate of the workpiece and wheel regenerative vibration [12].

The actual measurement of the workpiece profile is an integral part of the manufacturing process, due to the uncertainty in wheel wear and the complexity of the grinding process. Contactless measurement and contact stylus systems were developed to record the variations of workpiece size and roundness. However, these techniques can only be used for post-process checking, as they are limited to a particular set-up and must be used without the disturbance of cutting fluid, in a clean, air-conditioned environment with a stable temperature [13–16].

In industry, random samples from batches are usually inspected after the grinding process. Any rejection of parts, or sometimes of batches, increases the manufacturing time and cost. Therefore, it becomes important to develop online monitoring systems to cut down inspection time and to minimise rejected parts in grinding. Some of the existing monitoring systems in grinding are based on the wheel spindle power; however, sensors such as acoustic emission sensors and accelerometers are also used to gather information about the grinding process for different applications. Dornfeld has given a comprehensive view of the application of acoustic emission (AE) sensors in manufacturing [17]. Most reported applications of AE in grinding are for gap elimination, touch dressing and thermal burn detection [18–21].

In cylindrical grinding processes, the generated chatter vibration causes loss of form and dimensional accuracy of ground workpieces. The vibration induces the formation of lobes on the workpiece surface, which are usually detected using roundness measurement equipment. High-precision parts with tight tolerances are increasingly in demand, and short cycle times put pressure on manufacturing processes. This leads to the need for developing in-process roundness monitoring systems for cylindrical grinding processes.

The potential of using acoustic emission to detect the formation of lobes on a workpiece during a cylindrical plunge grinding process is investigated in this work. The aim is to extract the workpiece roundness profile from the acoustic emission signal in the frequency domain. The extracted frequencies are compared with actual measurements in the frequency domain, i.e. harmonic components. The key frequencies of the harmonic content are used to predict the expected profile on the ground part.

2 The study of acoustic emission in plunge grinding

AE is an elastic wave that is generated when the workpiece is under the loading action of the cutting grits, due to interfacial and internal friction and structural modification.
The wave generated is transmitted from the contact zone through the components of the machine structure [22, 23]. In grinding processes, the main source of the AE signal is the mechanical stress applied by the wheel on the workpiece in the grinding zone [24]. The chipping action of the abrasive grits on the workpiece surface generates a multitude of acoustic waves, which are transmitted to the sensor through the centres and the tailstock of the machine. The machining condition is reflected in the signal through the magnitude of the acoustic emission, which varies with the intensity of the cutting, e.g. rough, medium or fine grinding. The key information about the machining process and its condition is buried in the AE signal. To extract any information of interest from the AE signals, it is important to identify the frequency bandwidth and study the signal in detail.

Susic and Grabec showed that intensive changes of the AE signal relate to the grinding condition; thus, the ground surface roughness could be estimated from the measured signal with a profile correlation function [25]. A strong chatter vibration in grinding is also reflected in the recorded RMS AE signal. As vibration can generate waviness on the workpiece, the AE signal was also used to study the roundness profile [26]. A comprehensive study of the chatter vibration, wheel surface and workpiece quality in cylindrical plunge grinding based on the AE signal was carried out recently [27].

In roundness measurement systems, the roundness of the part is also given as harmonic components. Generally, the frequency span given by the measurement machine is of low frequency, 500 Hz and below. This is because the roundness profile deals with the waviness but not with the surface roughness, which is always of higher frequency. Fricker [8] and Li and Shin [28] also indicated part profiles with frequencies below 300 Hz. The part roundness profile is expressed in undulations per revolution. Therefore, lower frequency components are mainly targeted by the measurement equipment, but higher frequency components tend to ride on top of the lower carriers. In most cases, the provided frequency profile is in the range of 300 Hz [8, 28]. Therefore, this work studies the AE signal along the grinding process using the fast Fourier transform (FFT) with a particular focus on frequencies below 300 Hz. This allowed for a direct comparison between the results from this investigation and the actual roundness measurements.
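As an illustration of this band-limited analysis (a sketch added in this translation, not code from the paper), the following Python/NumPy fragment computes the magnitude spectrum of one portion of a sampled AE record and keeps only the components below 300 Hz. The sampling rate and the synthetic test signal are assumptions chosen for the example.

import numpy as np

def low_band_spectrum(ae_portion, fs_hz, f_max=300.0):
    """Magnitude spectrum of one AE signal portion, truncated to f_max Hz.

    ae_portion : 1-D array of AE samples
    fs_hz      : sampling rate of the recording in Hz (assumed)
    f_max      : upper frequency bound of interest (the paper focuses below 300 Hz)
    """
    n = len(ae_portion)
    window = np.hanning(n)                       # reduce spectral leakage
    spec = np.abs(np.fft.rfft(ae_portion * window)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs_hz)
    keep = freqs <= f_max                        # roundness lobes live at low frequency
    return freqs[keep], spec[keep]

# Illustrative use with a synthetic "AE" signal carrying 54 Hz and 243 Hz components
fs = 10_000.0                                    # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
ae = 0.8 * np.sin(2 * np.pi * 54 * t) + 0.3 * np.sin(2 * np.pi * 243 * t)
ae += 0.05 * np.random.randn(t.size)             # stand-in for grit-cutting randomness
f, a = low_band_spectrum(ae, fs)
print(f[np.argsort(a)[-2:]])                     # the two dominant low-band frequencies

With a real AE portion in place of the synthetic signal, the returned low-band spectrum is the quantity that is compared against the measured roundness harmonics.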
Figure 1 illustrates the equipment used in this study, where (a) is the configuration of the grinding machine with the location of the sensors and (b) is the roundness measurement machine. To improve signal transmission, the coating of the tailstock was removed at the location of the sensors, as shown in this figure.

Fig. 1 Experimental equipment: a grinding machine and sensor configuration, b Talyrond 210 roundness measurement system

During this study, observations of the shape of the recorded AE and the signal of the spindle power indicated that there are three main phases in a typical cylindrical plunge grinding cycle, i.e. before grinding, actual grinding and dwell. In this work, the words "dwell" and "dwelling" are used to describe the "spark out" phase, where the infeed stops and the grinding wheel enters a dwelling stage. For short notation, "dwell" is used in most figures.

First phase (before grinding): at the beginning of the process, the grinding wheel approaches the workpiece in a fast infeed without any physical contact between the wheel and the workpiece.

Second phase (actual grinding): when the grinding wheel gets very close to the workpiece, the rapid feed changes to the programmed infeed value, and the grinding wheel gradually comes into contact with the workpiece. The phase starts with the first contact of the wheel with the part and runs until the targeted diameter is reached.

Third phase (dwell or spark out): when the target diameter is reached, the infeed stops and the wheel stays in contact with the part. The duration of the dwelling process varies depending on the grinding conditions; it is intended to remove the leftover material on the part due to mechanical and thermal deflection and to reduce the out-of-roundness. The grinding wheel retracts from the workpiece at the end of the programmed spark out (dwell).

In this study, the power and AE signals were recorded simultaneously; the acceleration of the tailstock was also recorded for further investigation. The recorded signals are illustrated in Fig. 2 with a delimitation of the three phases.

Fig. 2 Recorded power and acoustic emission signals with process phases

In addition, the actual grinding phase was subdivided to introduce the notions of "grinding-in", "steady grinding" and "pre-dwell", as depicted in Fig. 3. The steady grinding ends with a pre-dwell period. There is a transition state between the grinding-in and the steady grinding states; this is where the cutting process starts entering the steady state. This is illustrated by an ellipse in Fig. 2. During the grinding-in, the depth of cut increases from zero to a constant value per revolution; then the steady-state grinding runs under a constant depth of cut. The pre-dwell section is not an obvious technological phase; rather, it is a tool used in this study.

To aid the signal processing techniques, especially the fast Fourier transform and Yule-Walker methods, a referenced sampling was introduced using an RPM pickup (see workpiece rotation in Fig. 3). Recording the workpiece rotation simultaneously with the AE signal helped partition the signal to reduce processing time and to study the time-varying process in grinding.

Fig. 3 Typical AE signal for one full grinding cycle with RPM output

3 Simulation and modelling

3.1 Workpiece response

In this investigation, it was necessary to filter out from the recorded signals the frequencies of other parts of the grinding machine, especially the natural frequency of the workpiece. Therefore, the workpiece response was studied using finite element analysis (FEA) and an experimental impact test to identify its natural frequency. The result of this study is depicted in Fig. 4, where it is seen that the natural frequency of the workpiece is 1,252 Hz. The outputs of the impact test and the FEA are in good agreement and show that the natural frequency of the workpiece is over 1 kHz; consequently, it will not appear in the range of low frequencies of interest.

Fig. 4 Workpiece response: a experiment and b FEA

3.2 Process modelling

Designating the wheel rotational frequency by fs and the workpiece rotational frequency by fw, the ratio of these two entities was expressed as follows:

β = fs / fw   (1)

The notion of the frequency ratio (β) helps in understanding the generation of the workpiece profile, as it relates key process parameters and defines the fundamental harmonic, which affects the part profile. In this study, it was found that the wheel-workpiece frequency ratio has a direct effect on the workpiece roundness, as it constitutes the fundamental harmonic for this specific machining configuration.

During the grinding process, there is a relative lag between the grinding wheel and the workpiece due to the difference in their rotational frequencies. This difference (δ) is numerically equal to the decimal part of the frequency ratio. It causes the currently forming wave to creep with reference to the wave formed in the previous revolution of the part. By expressing the wheel angular speed as ω, and the decimal part of the frequency ratio in Eq. (1) as δ, the relative shift φ of the wave on the workpiece surface was defined as follows:

φ = 2πδ / β   (2)

Consequently, the dynamics of the waviness Ω formed with time t at the surface of the part can be expressed as follows:

Ω = sin(ωt + φ)   (3)

Therefore, the equation of the wave generated by the wheel at the surface of the workpiece was derived as follows:

Ω = sin(ωt + 2πδ/β)   (4)

3.3 Simulation of the workpiece profile

During the roundness measurement process, the machine uses a single trace of the stylus on the workpiece circumference to generate the profile of the workpiece. Here, the stylus is in direct contact with the measured part. However, in this study, an attempt is made for the first time to predict the final workpiece profile using process signatures extracted from the recorded signals. The link between the prediction model and the grinding process is the sensor, which collects the signal from the entire process. Therefore, the model predicts an average workpiece profile, in contrast to the measuring machine, which gives only a single trace on the part. The procedure for capturing and extracting the process signature is schematically illustrated in Fig. 5.

Fig. 5 Modelling pseudo-algorithm

The procedure works as follows: throughout the grinding process, the acoustic emission, vibration and RPM sensors record the signals. The signals are processed using various techniques (e.g. FFT) to obtain the system response in the frequency domain. The model extracts process-inherent key dominant frequencies and uses these frequencies and their respective amplitudes to generate the expected profile of the workpiece.

The following expression in Eq. (5) is used to predict the final profile of the ground part:

Π(t) = Σ(i=1..n) [αi cos(2π fi t)] + rand(t)   (5)

where fi is the ith dominant frequency with an amplitude of αi, and t is the time; rand(t) is added random noise to incorporate the randomness of the grits' cutting actions.
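The chain from Eq. (1) to Eq. (5) can be sketched numerically as follows (Python/NumPy, added here for illustration). The workpiece speed of 100 rpm is taken from the experiments described below, and the dominant frequencies of 54 and 243 Hz from the fine-infeed results; the wheel rotational frequency, the amplitudes and the noise level are assumed placeholder values, not figures from the paper.

import numpy as np

# --- Eq. (1): wheel-workpiece frequency ratio ---
f_s = 23.3            # wheel rotational frequency in Hz (assumed value)
f_w = 100.0 / 60.0    # workpiece rotational frequency: 100 rpm, as in the experiments
beta = f_s / f_w      # Eq. (1)

# --- Eq. (2): relative shift of the wave formed on the part ---
delta = beta - np.floor(beta)    # decimal part of the frequency ratio
phi = 2 * np.pi * delta / beta   # Eq. (2)

# --- Eqs. (3)-(4): wave imprinted by the wheel on the workpiece surface ---
omega = 2 * np.pi * f_s              # wheel angular speed, rad/s
t = np.linspace(0.0, 0.6, 6000)      # one workpiece revolution at 100 rpm lasts 0.6 s
wave = np.sin(omega * t + phi)       # Eq. (3), equivalent to Eq. (4)

# --- Eq. (5): predicted profile as a sum of dominant components plus noise ---
freqs = np.array([54.0, 243.0])      # dominant frequencies from the fine-infeed test
amps = np.array([1.0, 0.35])         # corresponding amplitudes (assumed placeholders)
rng = np.random.default_rng(0)
profile = sum(a * np.cos(2 * np.pi * f * t) for f, a in zip(freqs, amps))
profile += 0.05 * rng.standard_normal(t.size)   # rand(t): randomness of grit cutting

In a real application, freqs and amps would come from the spectra of the recorded AE signal rather than being fixed in advance.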
4 Experimental work

In this investigation, the response of the machine tool was studied at different stages, namely idle, running (switching its components on one by one and recording the signal from one single location), and finally in operating conditions while grinding. This allowed identifying and discriminating frequency components belonging to the machine tool structure from those frequencies induced by noise and interference from nearby operating machinery.

An analogue-to-digital (A/D) converter (NI 6110) was used to record the analogue signals from the power of the motor, the acceleration and the acoustic emission sensors through the tailstock. This A/D device had four channels with a sampling rate of up to 5 MS/s per channel, providing a total sampling rate of 20 MS/s. It allowed for simultaneous four-channel sampling of analogue input voltages in the range of ±5 mV to 42 V. The LabVIEW software was used to control the data acquisition process during the experiments. To identify the most suitable sampling, the signals were recorded at various sampling rates. The recorded signals were processed using MATLAB.

Sets of workpiece batches were ground using rough, medium and fine infeed. The grinding wheel speed was 35 m/s, and the workpiece was rotating at 100 rpm. In this experiment, a dwell or spark out of 10 s was applied to all grinding cycles. In total, 220 μm of material was removed from each part.

The ground parts were allowed to settle for 24 h at 19±1 °C in a temperature-controlled room before the measurements were taken. The roundness profiles of the ground parts were measured using the Talyrond 210 illustrated in Fig. 1. A typical measured workpiece roundness is illustrated in Fig. 6, where (a) is the roundness profile and (b) is the corresponding linear profile obtained by dissecting the round profile and expanding it in a line.

Fig. 6 Typical workpiece roundness measurement using Talyrond 210: a roundness profile, b corresponding linear profile

5 Results

In this work, various signal processing techniques were used to study the recorded signals, as described above. In order to extract the information on the workpiece roundness profile, the acoustic emission signal was scrutinised using the above-mentioned partitioning technique. Each portion of the signal was analysed using the FFT and Yule-Walker (YW) methods. However, the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT) were able to handle full grinding cycle signals.

Figures 7 and 8 illustrate a typical power spectrum using the STFT and CWT of a full grinding cycle. Similar outputs were obtained with the FFT and YW methods; however, these last two methods required signal partitioning due to the computational window size.

Fig. 7 Full grinding cycle AE signal frequency spectrum using STFT

Fig. 8 Full grinding cycle AE signal frequency spectrum using CWT

Figure 8 illustrates the three phases of a grinding cycle, where the frequency spectrum is given with the time span along the grinding cycle. It is seen in this picture, as in Fig. 7, that process-inherent frequencies appeared only during the "actual grinding" phase and partially in the "dwell or spark out" period. Comparing the STFT and CWT, it is observed that the STFT (Fig. 7) provided an aggregate frequency spectrum, whereas the CWT resolved each individual frequency (Fig. 8). This improved resolution allows identifying the birth of lobes in time within the grinding cycle. It opens an opportunity for studying the workpiece profile in the frequency domain. It is seen in both figures that the parasitic 50 Hz can be well discriminated from the process frequencies. Various frequencies up to 500 Hz that characterise the workpiece profile are picked up in the actual grinding phase. Frequencies that dominate the spectrum towards the end of the actual grinding will potentially form and reside on the final part profile.

Figure 9 presents the results of AE, where the sectioned signal was analysed in the frequency domain using FFT to extract the frequencies of interest. This picture displays a waterfall plot of the frequency spectrum in each phase of the grinding cycle.

Fig. 9 Waterfall plot of the frequency spectrum of a full grinding cycle
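The RPM-referenced partitioning described earlier can be expressed compactly (an illustrative Python/NumPy sketch, not the authors' MATLAB code): the pickup yields one pulse index per workpiece revolution, the AE record is split at those indices, and each portion is analysed on its own, which is essentially how a waterfall or per-revolution view such as Fig. 9 or Fig. 14 is assembled. The sampling rate and the stand-in signal are assumptions.

import numpy as np

def split_by_revolution(ae, pulse_idx):
    """Split the AE record into per-revolution portions.

    ae        : 1-D array of AE samples
    pulse_idx : sample indices of the RPM pickup pulses (one per revolution)
    """
    return [ae[i:j] for i, j in zip(pulse_idx[:-1], pulse_idx[1:])]

def portion_spectrum(portion, fs_hz):
    """Magnitude spectrum of one portion (FFT; Yule-Walker could be used instead)."""
    n = len(portion)
    spec = np.abs(np.fft.rfft(portion * np.hanning(n))) * 2.0 / n
    return np.fft.rfftfreq(n, d=1.0 / fs_hz), spec

# Illustrative waterfall: dominant low-band frequency of each revolution
fs = 10_000.0                                  # assumed sampling rate, Hz
samples_per_rev = int(fs * 60.0 / 100.0)       # workpiece at 100 rpm
ae = np.random.randn(10 * samples_per_rev)     # stand-in for a recorded signal
pulses = np.arange(0, ae.size + 1, samples_per_rev)
for k, portion in enumerate(split_by_revolution(ae, pulses), start=1):
    f, a = portion_spectrum(portion, fs)
    band = f <= 300.0
    print(f"rev {k}: dominant {f[band][np.argmax(a[band])]:.1f} Hz")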
This study focused on the detection of process-inherent frequencies, with less attention to the actual value of the magnitude, as that is the subject of another investigation. It is observed in the "before grinding" section of the signal that nothing happens in the frequency domain; hence, no frequency peaks were detected. In the "grinding-in" section, once the wheel hits the workpiece, several frequency peaks appear in the signal, characterising special events in the grinding process. The amplitudes of these frequencies increase as the process evolves into "actual grinding" due to the cutting intensity and diminish towards the end of the grinding cycle (spark out). In this figure, the transitions between the grinding phases can be observed from the variations in the frequencies and their amplitudes along the progression of the process. During the actual grinding, due to the generated vibration and the increase of the material removal, high peaks were detected. Fewer frequency components and smaller amplitudes in the dwell period are due to reduced grain activity, because there is no actual infeed of the wheel and the workpiece enters a relaxation stage while the wheel removes only leftover material caused by wheel/workpiece deflection.

Comparing the detected frequencies in the AE signal with the off-line measurement, it was identified that there is a factor of 0.6 between the two sets of results in this particular test. This factor varies depending on the machining configuration. Using this factor, all the detected harmonics from the frequency analysis were correlated to those from the roundness measurement. Figure 12 gives a sample of comparative results for fine infeed grinding, showing the detected frequencies and their corresponding measured harmonics.

For example, multiplying the frequency 54 Hz in Fig. 10 (frequency analysis) by this factor (0.6) gives the value 32.4 (33), which is the harmonic detected by the measurement machine in Fig. 11. It is seen that the major lobes (33) formed on the ground workpiece in Fig. 12, as well as the other components, i.e. 48, 82, 115 and 148, were clearly detected in the AE signal as 78, 136, 190 and 248 Hz.

Fig. 10 Frequency content of the signal in the dwell phase (fine infeed)

Fig. 11 Harmonic profile from the actual measurement (fine infeed)

Fig. 12 Extracted harmonics and measured values (fine infeed)

It is worth mentioning that the actual magnitude of the power spectrum of the detected frequencies is not within the scope of the work presented here. This is because this work focused on the detection of process-inherent frequencies in order to develop a control strategy to improve the roundness of the part. The control strategy and the actual magnitude are considered in the next phase, where the system will be calibrated.

The study of the frequencies in fine infeed showed that during the dwelling period (spark out), the amplitude of the detected harmonics decreases drastically. It is observed that in spark out (dwell), the amplitude of 243 Hz, which had dominated throughout the cycle, dropped, leaving 54 Hz as the dominant frequency in the last phase in Fig. 10. This section carries important information on the final workpiece profile. The number of lobes formed on the workpiece is now defined as the product of the extracted frequencies with the defined factor.
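This frequency-to-harmonic conversion amounts to a one-line scaling; a short illustrative Python snippet follows, using the fine-infeed values quoted above. As the text stresses, the 0.6 factor holds only for this particular set-up.

# Detected AE frequencies (Hz) for the fine-infeed test, as quoted in the text
detected_hz = [54, 78, 136, 190, 248]
FACTOR = 0.6  # set-up specific; varies with the machining configuration

# Expected undulations per revolution (harmonic numbers) on the part
harmonics = [round(f * FACTOR) for f in detected_hz]
print(harmonics)  # -> [32, 47, 82, 114, 149]; within one harmonic of the
                  #    measured values 33, 48, 82, 115, 148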
This holds true for any frequency detected and for a given machining parameter configuration. The factor of 0.6 given here is adequate for the specific experimental set-up used in these particular tests. However, it was identified that this factor varies as a function of the process settings. The origin of this factor was identified but is not stated here, as it is of commercial value to the companies pursuing further development of this work.

This study confirmed that the final profile of the workpiece is the result of overlapping waves of different frequencies as an additive process. This is schematically illustrated in Fig. 13. In addition, these waves have a relative translation with reference to each other due to a shifting effect caused by the relative creep of the grinding wheel with reference to the rotating workpiece, as described in Eqs. (2)-(4). It is worthwhile stating that this work does not study the roughness, which is characterised by high frequency; it focuses instead on the formation of lobes, which are of lower frequency.

Fig. 13 Additive effect of key harmonics forming a profile on a workpiece

This is evidenced in Fig. 14 for rough infeed grinding, where the AE signal was analysed per workpiece revolution in the actual steady-state grinding. This figure shows how different frequencies appear or disappear from revolution to revolution due to the shifting and overlapping effect. Also, the amplitude of these frequencies varies along the process. However, in fine infeed, it was observed that the process is dominated by two high peaks at 54 and 243 Hz (the second and ninth harmonics in Fig. 12), which appeared throughout the full grinding cycle.

Fig. 14 Process frequency content in steady-state grinding with rough infeed

Using the wave additive property and applying Eq. (5) allowed predicting the expected workpiece profile using the information extracted from the signal in the frequency domain. One example is shown in Fig. 15 in the form of a linear profile, which is obtained by dissecting the circular profile and extending it along a line of 360°. The measurement machine chooses an arbitrary point and dissects the profile. Here, Fig. 15a is the linear roundness profile of the workpiece obtained from the actual roundness measurement system, and Fig. 15b is the predicted (simulated) workpiece profile using the frequency components extracted from the AE signal. The point where the measuring machine cuts the profile and sets the origin of the axis (0.0°) is unknown to the machine operator; therefore, the results in Fig. 15a are shifted relative to Fig. 15b by an unknown value. However, there is good agreement between these two profiles in terms of the surface undulation per revolution. This prediction will be used in the control strategy to improve the part profile well before the process enters the spark out phase.

Fig. 15 Linear workpiece profile: (a) actual measurement; (b) predicted (simulated) profile
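Because the measuring machine dissects the circular profile at an arbitrary, unknown angle, the measured and predicted traces in Fig. 15 are shifted relative to each other by an unknown amount. One way to compare them despite this (a sketch added here, not a step from the paper) is to estimate the shift by circular cross-correlation and rotate the predicted profile accordingly:

import numpy as np

def align_profiles(measured, predicted):
    """Rotate `predicted` to best match `measured` (both sampled over 0-360 deg).

    Uses FFT-based circular cross-correlation; returns the shift in samples
    and the aligned predicted profile.
    """
    m = measured - measured.mean()
    p = predicted - predicted.mean()
    corr = np.fft.irfft(np.fft.rfft(m) * np.conj(np.fft.rfft(p)), n=m.size)
    shift = int(np.argmax(corr))               # unknown dissection angle, in samples
    return shift, np.roll(predicted, shift)

# Illustrative check with a synthetic 33-lobe profile cut at a different point
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
measured = np.cos(33 * theta) + 0.3 * np.cos(7 * theta)
predicted = np.roll(measured, -40)             # same profile, unknown start point
shift, aligned = align_profiles(measured, predicted)
print(shift, np.allclose(aligned, measured))   # -> 40 True

Circular (rather than linear) correlation is used because the profile is periodic over one revolution, so the comparison must wrap around at 360°.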
6 Discussion

The results show that the detected major dominant frequency in the AE signal is of importance because it indicates the number of major lobes formed on the workpiece. The other frequency components represent the small peaks on the workpiece surface. Actual workpiece measurements supported this finding. The information on the workpiece profile extracted using the techniques presented here provides room for the development of a control strategy to improve the workpiece roundness.

This study showed that the formation of the workpiece profile is a function of the process parameters, where the wheel and the workpiece play the key roles. This is because, at high rotating speeds, a slight unbalance of the wheel leads to a high eccentric force and hence uneven stock removal. This is magnified by the effect of regular or irregular imprints on the workpiece surface. The shifting of the wheel relative to the workpiece leads to the generation of various waves on the workpiece. It was observed in fine infeed that the machining conditions are relatively stable; therefore, there were no drastic changes in the AE signal in terms of frequencies and amplitudes. However, in rough infeed, with increased depth of cut and longer contact length, there is a tendency to have vibrations of high amplitude. This leads to radical changes in the cutting intensity in a regular pattern and at the pace of the fundamental frequency. An example is observed in Fig. 14, where certain frequencies appear constantly in the last three rotations (see the sixth, seventh and eighth rotations). If there is no shift between successive rotations, the matching of dominant frequencies may cause a beating effect, as the wheel and the workpiece would make their contacts at the same points. Consequently, the formed lobes would become more apparent around the workpiece. An example of the beating effect is seen in the AE signal in Fig. 3, where the amplitude of the signal is modulated.

A low infeed rate gives a small depth of cut and a short contact length and produces an increased number of lobes. The opposite is true for a high infeed rate, where a small number of lobes is generated with higher amplitude. The higher the number of lobes, the smaller the interval between the lobes and the smoother the profile formed. Thus, the workpiece produced using the rough infeed has higher peaks with a lower number of lobes, while with the fine infeed, the number of lobes is higher and the peaks are of small height.

7 Conclusions

This paper provides some key relations between the process and the acoustic signals emitted during machining. Process-inherent frequencies were successfully extracted from the AE signal and compared with the information from the measured workpiece profile. The obtained results were verified using data from the actual roundness measurements. A range of grinding parameters was covered, and the outcomes correlated well with the measurements. A fundamental frequency ratio was established. A mathematical expression was derived to predict the expected profile of the machined part. The relationship between the frequencies buried in the AE signal and those found on the measured part was established.


Graduation Design – English Translation

Introduction

The graduation design is a crucial part of a student's academic journey. It is a project that showcases the knowledge and skills the student has acquired throughout their studies. The purpose of this translation is to provide an overview of the graduation design and explain its significance.

Significance of the Graduation Design

The graduation design gives students an opportunity to apply the theoretical knowledge they have gained in a practical manner. It allows them to put their skills into action and demonstrate their problem-solving abilities. Through the completion of the graduation design, students acquire the tools they need to enter the workforce with confidence.

Components of the Graduation Design

The graduation design typically consists of several key components. First, there is a written report that provides an in-depth analysis of the project. This report outlines the objectives, methodology, results and conclusions of the graduation design. It also includes a literature review that discusses the existing research related to the topic.

In addition to the written report, a presentation is also required as part of the graduation design. The presentation allows students to communicate their findings to a larger audience and to show that they can present complex information in a clear and concise manner.

Furthermore, the graduation design often involves a practical component. This can range from designing and building a prototype to conducting experiments or surveys. The practical component allows students to apply their engineering skills and test their theories in a real-world setting.

Evaluation of the Graduation Design

The graduation design is evaluated against several criteria. The written report is assessed for its clarity, organization and depth of analysis. The presentation is evaluated for the student's ability to communicate ideas effectively and engage the audience. The practical component is assessed on the quality and accuracy of the work completed.

Conclusion

In conclusion, the graduation design is a significant project that allows students to apply their knowledge and skills in a practical manner. It consists of a written report, a presentation and a practical component. Completing the graduation design prepares students for their future careers by equipping them with the necessary tools and abilities.


Graduation Design Chinese-English Translation (Part One: Graduation Thesis Foreign Literature Translation, Chinese and English)

Huaiyin Institute of Technology, Graduation Design (Thesis) Foreign Literature Translation
School: School of Traffic Engineering
Major: Transportation
Name: Yang Yu
Student No.: 1081501135
Source of the foreign literature: IEEE
Attachments: 1. Translated text of the foreign literature; 2. Original text.

Note: please bind this cover page together with the attachments.

Attachment 1: Translated Text

Congestion Pricing and the Sustainable Development of Urban Transport Systems

Abstract: Rapid urbanization and motorization generally contribute to the development of urban transport systems and to their economic, environmental and social sustainability, but the result is a relentless growth in traffic volume, which leads to congestion. Road congestion pricing has been proposed many times as an economic measure to relieve urban traffic congestion, but it has not yet been widely used in practice, because some of the potential impacts of road pricing remain unclear. This paper first reviews the concept of a sustainable transport system, which should jointly meet the goals of economic development, environmental protection and social justice. Then, based on the characteristics of sustainable transport systems, it examines how congestion pricing can promote economic growth, environmental protection and social justice. The results indicate that congestion pricing is a practical and effective way to promote the sustainable development of urban transport systems.

1. Introduction

Congestion pricing was proposed long ago as an effective measure to relieve traffic congestion. Its principle and goal are to relieve congestion by imposing a surcharge on the use of facilities during peak congested periods. By shifting some trips to off-peak periods, away from congested facilities or to high-occupancy vehicles, or by suppressing some trips altogether, congestion pricing schemes save time and reduce operating costs, improve air quality, reduce energy consumption and improve transit productivity. Such schemes have been applied successfully in many countries and cities around the world. Following the road pricing scheme introduced in Singapore in the 1970s and the toll rings in Norway in the mid-1980s, the City of London launched area-based charging in February 2003; to date, it remains one of the best-known examples of a metropolitan area that has implemented congestion pricing. However, for theoretical and political reasons, congestion pricing has not been widely used in practice. Some potential impacts of road pricing are still unclear, and the implications of congestion pricing for sustainable urban development need further study.

Sustainable development is commonly taken as a fundamental objective in the evaluation of transport policy. The idea of sustainable transport arises from applying the concept of sustainable development to the transport sector and can be defined as follows: sustainable transport infrastructure and travel policies serve the multiple objectives of economic development, environmental stewardship and social equity, optimizing the use of the transport system to achieve economic and related social and environmental goals without sacrificing the ability of future generations to achieve those same goals.


Legal Environment for Warranty Contracting

Introduction

In the United States, state highway agencies are under increasing pressure to provide lasting and functional transportation infrastructure rapidly and at an optimum life-cycle cost. To meet the challenge, state highway agencies are expected to pursue innovative practices when programming and executing projects. One such practice is the implementation of long-term, performance-based warranties to shift maintenance liabilities to the highway industry. Use of warranties by state highway agencies began in the early 1990s after the Federal Highway Administration's (FHWA) decision to allow warranty provisions to be included in construction contracts for items over which the contractor had complete control (Bayraktar et al. 2004). Special Experiment Project Number 14 (SEP-14) was created to study the effects of this and other new techniques. Over the past decade, some states have incorporated this innovative technique into their existing programs. Projects have ranged from New Mexico's 20-year warranty for the reconstruction of US550 to smaller-scale projects, such as bridge painting and preventative maintenance jobs.

These projects have met with varying degrees of success, causing some states to broaden the use of warranties, whereas others have abandoned them completely. Several states have sacrificed time and money to fine-tune the use of warranties. However, on a national level, there is still a need for research and the exchange of ideas and best practices. One area that needs further consideration is the legal environment surrounding the use of warranties. Preliminary use in some states has required changes to state laws and agency regulations, as well as the litigation of new issues. This paper discusses the laws and regulations needed to successfully incorporate warranties into current contracting practices and avoid litigation. The state of Alabama is used as an example of a state considering the use of long-term, performance-based warranties, and proposals for laws and regulations are outlined. The paper also presents a flowchart to help an agency determine whether a favorable legal environment exists for the use of warranties.

Warranty Contracting in Highway Construction

A warranty in highway construction, like the warranty for a manufactured product, is a guarantee that holds the contractor accountable for the repair and replacement of deficiencies under his or her control for a given period of time. Warranty provisions were prohibited in federal-aid infrastructure projects until the passage of the Intermodal Surface Transportation Efficiency Act in 1991, because warranty provisions could indirectly result in federal-aid participation in maintenance costs, which at that time were a federal-aid nonparticipating item (FHWA 2004). Under the warranty interim final rule published on April 19, 1996, the FHWA allowed warranty provisions to be applied only to items considered to be within the control of contractors. Ordinary wear and tear, damage caused by others, and routine maintenance remained the responsibility of the state highway agencies (Anderson and Russel 2001). Eleven states participated in the warranty experiment under Special Experiment Project Number 14, referred to as SEP-14, which was created by the FHWA to study the effects of innovative contracting techniques.
Warranty contracting was one of the four innovative techniques that FHWA investigated under SEP-14 and the follow-on SEP-15 program.

In accordance with National Cooperative Highway Research Program Synthesis 195 (Hancher 1994), a warranty is defined as a guarantee of the integrity of a product and of the maker's responsibility for the repair or replacement of deficiencies. A warranty is used to specify the desired performance characteristics of a particular product over a specified period of time and to define who is responsible for the product (Blischke 1995). Warranties are typically assigned to the prime contractor, but may be passed down to the paving contractors as pass-through warranties.

The warranty approach in highway construction contrasts sharply with traditional highway contracting practices. Under the standard contracting option, the state highway agencies provide a detailed design and decide on the construction processes and materials to be used. Contractors perform the construction and bear no responsibility for future repairs once the project is accepted. Stringent quality control and inspection are necessary to make sure that contractors are complying with the specifications and the design. The warranty approach, usually used with performance-based specifications, changes almost every step in the standard contracting system. The changes go beyond the manner in which projects are bid, awarded, and constructed. Most importantly, contractors are bound by the warranty and are required to come back to repair and maintain the highway whenever certain threshold values are exceeded. In return for the shift in responsibility, contractors are given the freedom to select construction materials, methods, and even mix designs.

Legal Assessment Framework for Warranty Contracting

As public-sector organizations, state highway agencies must follow state laws and proper project procurement procedures. State legislation impacting state highway agencies includes statutes on public works, highways and roads, state government, and special statutes. These statutes define the general responsibilities and liabilities of the state highway agency and must be investigated before a state highway agency moves to any innovative contracting method. Additionally, the state highway agency may develop appropriate regulatory standards and procedures tailored to meet special needs. State highway agencies should also investigate and assess the regulations that govern warranty contracting and construction.

In order to develop a legal and contractual framework against which to evaluate the state of Alabama and other states not active in warranty contracting, the writers reviewed the statutes in numerous states that are active in warranty contracting. Ohio, Michigan, Minnesota, Florida, Texas, Illinois, Montana, and others have all been more or less active in warranty contracting. Their statutes were reviewed, as well as the specifications they use for measuring actual road performance against warranted performance. Numerous national studies were also reviewed. The writers determined that, regardless of whether warranties are imposed by legislative mandate or initiated by a state DOT or other body, there are three elements consistently found in successful programs, and these elements often require modification of the existing statutes. These three elements are design-build contracting, bidding laws that allow for flexibility and innovation, and realistic bonding requirements.
Given those elements as a starting point, the actual contract specifications must address when the warranty period commences, the inspection frequency, clear defect definitions, allocation of responsibility for repair, emergency maintenance, circumstances that void the warranty, and dispute resolution.

The foregoing statutes and regulations are termed the legal assessment framework for performance warranties. The three broad steps in the framework (initiation of warranty contracting, statute assessment, and regulatory assessment) are discussed in detail in the following sections.

Initiation of Warranties

Several states initiated the use of warranties as a result of a legislative mandate. For example, in 1999, the Illinois legislature passed a bill that required 20 of the projects outlined in the Illinois Department of Transportation's Five Year Plan to include 5-year performance warranties (IDOT 2004). Ten of those projects were to be designed to have 35-year life cycles (Illinois Compiled Statutes Ch. 605 §5/4-410). Also in 1999, Ohio began using warranties due to a legislative mandate that required a minimum of one-fifth of road construction projects to be bid with a warranty. According to Ohio Revised Code §5525.25, the requirements were later changed at the suggestion of the highway agency to make the minimums into maximums, so the agency could spend more time evaluating what types of projects are best suited for warranties (ODOT 1999). The warranties were to range from 2 to 7 years, depending on the type of construction. Finally, in a less demanding mandate, Michigan Compiled Laws §247.661, in a state highway funds appropriation bill, included the instruction that "the Department [of Transportation] shall, where possible, secure warranties of not less than a five-year, full replacement guarantee for contracted construction work." These types of mandates generally require the agency first to come up with an outline of how it plans to incorporate the directives into existing procedures and specifications, as well as to prepare reports on the success of these programs and their cost effectiveness.

Alternatively, some agencies begin the use of warranties on their own initiative. In Texas, the State Comptroller's Office issued a report on the Department of Transportation's (DOT) operations and strongly recommended the use of more innovative methods, including warranties, to better meet the transportation needs of the state (Strayhorn 2001). As a result, the Texas Transportation Institute commenced its own investigation of warranties and developed an implementation plan for the Texas DOT (Anderson et al. 2006). One of the reasons cited for the study was the potential for a future legislative mandate and the need to research the area before the agency was forced to make use of warranties. Montana acted without any government influence by initiating a bill (Bill Draft No. LC0443) that called for the formation of a committee to study the feasibility of design-build and warranty contracting. This committee was to include members of the House and Senate, Department of Transportation officials, representatives from contractors' associations, and a representative of the general public, and it would submit a report to the Office of Budget and Program Planning.
This bill was not enacted, but the Department continued its efforts by preparing a report containing specific suggestions as to how Montana could implement warranties on future highway construction projects (Stephens et al. 2002).

Like Texas and Montana, most states have made their own investigations into the use of performance-based warranties. Generally, state highway agencies have worked with research teams, contractors and industry associations to evaluate extensively the feasibility of warranted projects. Although a political push may sometimes be needed to encourage the use of innovative methods, states that begin researching new ideas on their own may have more time to carefully select the best use for these innovations. As exemplified by Ohio, which found it infeasible to meet its existing legislative mandates, states may have to amend the legislation later, indicating that the legislature may not be best suited to make the first move.

Statutory Assessment

As pointed out earlier, statutes regarding public works, public transportation, state government, and other related matters should be evaluated in terms of the legal environment for warranty contracting. The three major areas of related legislation are project delivery, public bidding procedures, and bonding requirements.

Legislation regarding Design-Build Project Delivery

Historically, contractors are told what materials to use and how to use them in the construction project. State personnel oversee the construction and perform continuous quality assurance testing to ensure the contractor is following the specifications. Legislation may restrict a state to this process, which does not allow for the increased contractor control that the use of a warranty may dictate. Several transportation agencies have explicit authorization for design-build contracting methods. For instance, Ohio Revised Code §5517.011 allows for a value-based selection process where technical proposals can be weighted and the bid awarded to the contractor with the lowest adjusted price. These projects may be limited to a specific type of construction, such as tollway or bridge projects, or by the dollar amount of design-build contracts that may be awarded annually. Oregon Revised Statute §383.005 allows tollway contracts to be awarded considering cost, design, quality, structural integrity, and experience. Wisconsin Statute §84.11(5n) allows certain bridge projects to be bid under design-build after a prequalification process, assessment of a variety of factors, and approval by the secretary of Transportation and the governor. In Ohio, however, Revised Code §5517.011 limits design-build contracts to $250 million biennially.

Other statutes are more general, simply stating that public agencies are permitted to use design-build contracting methods, e.g. Idaho Code §67-2309. In states where design-build contracts are specifically outlawed by statute (e.g. Tenn. Code §4-5-102), the agency has few options. In Texas, where design-build is not allowed, the agency has implemented a rigid, multistep prequalification process in an effort to factor in advantages one contractor may have over another while still complying with the traditional design-bid-build laws (Strayhorn 2001). Design-build and warranties seem to go hand in hand, allowing less agency interaction from the beginning of the project and more confidence in the contractor's ability to fulfill the warranty requirements.
However, the proper statutes need to be in place for an agency to utilize this innovative contracting method.

Legislation regarding Public Bidding Procedures

The use of warranties and other innovative contracting methods may not fit cleanly within existing bidding procedures for public contracts. If the request for proposals details the project in terms of performance-based specifications, bidding laws must account for the different methods and materials proposed by bidders. Traditionally, bidding laws require an agency to solicit bids through a competitive, sealed bidding process and award the contract to the "lowest responsible bidder." Exceptions to the lowest-bidder rule are sometimes built into statutes, but the more common exceptions only allow an agency to reject all bids if they are all unreasonable, or when it is in the interest of the awarding authority to reject all bids, e.g., Alabama Code §39-2-6(c). However, the "lowest responsible bidder" language presents a way for a state to avoid contracting with simply the lowest pecuniary bidder, which may better serve the goals of the project.

Application of the Assessment Framework to Alabama

The proposed assessment framework was used to investigate the laws and regulations necessary in Alabama to successfully incorporate warranties into current contracting practices while, at the same time, avoiding litigation. Currently, the state of Alabama has no legislative directive requiring the use of warranties. Therefore, the Alabama DOT, working with the surety industry, contractors and academics, will need to develop a plan if it intends to implement warranties. In doing so, the agency should look at statutes that may impede the use of warranties. Please refer to the Appendix for a list of Alabama statutes.
