Xidian University Graduation Project Translations (Chinese Edition)


Foreign-Language Translation for a Computer Science Graduation Thesis (Chinese-English)


Talking About Security Loopholes
Richard S. Kraus

The core objective of network security is to protect the sustainability of business systems and the safety of data. The two main threats to that objective come from worm outbreaks, hacker attacks, denial-of-service attacks, and Trojan horses. Worms and hacker attacks are closely tied to vulnerabilities: whenever a major security hole emerges, the entire Internet faces a major challenge. Traditional Trojans had little to do with security holes, but many recent Trojans cleverly exploit IE vulnerabilities, compromising your machine without your knowledge while you merely browse a website.

There are many definitions of a security vulnerability; here is a popular one: a flaw that can be exploited to do what was "thought" to be impossible, with safety implications. The flaw may lie in the design or in the implementation of the code.

Vulnerabilities from different perspectives

A specific vulnerability can be classified along several axes.

1. By affected user group:
● Vulnerabilities in widely used software, such as Windows or IE.
● Vulnerabilities in specialized software, such as Oracle or Apache.

2. From a data perspective:
● Data that should not be readable can be read, including data in memory, in files, in user input, in databases, or in network transmission.
● Designated content can be written to designated places (including local files, memory, databases, and so on).
● Input data can be executed (including native code execution, shell code execution, SQL code execution, and so on).

3. By scope of effect:
● Remote vulnerabilities: an attacker can exploit the hole directly over the network. Such vulnerabilities are very harmful, since the attacker can take control of other people's computers through them.
They also easily give rise to worm attacks, as they have on Windows.
● Local vulnerabilities: the attacker must already have some access to the machine before the hole can be exploited. Typical examples are local privilege-escalation vulnerabilities, which are widespread on Unix systems and allow an ordinary user to gain full administrator privileges.

4. By trigger condition:
● Actively triggered vulnerabilities: the attacker can initiate the exploit directly, for example by accessing the computer.
● Passively triggered vulnerabilities: the attack only succeeds if the computer's operator performs some action. For example, the attacker mails the administrator a specially crafted JPEG image; if the administrator opens the image file, a hole in the image-viewing software is triggered and the system is attacked, but if the administrator never views the picture, the attack fails.

5. From an operational perspective:
● File operations, mainly cases where the path of the target file can be controlled (via parameters, configuration files, environment variables, symbolic links, etc.). This can lead to the following two problems:
◇ File content can be written under the attacker's control, so documents can be forged, privileges escalated, or important data directly altered (for example, revising deposit and loan records). Many vulnerabilities fall into this class.
Historically, the Oracle TNS LOG file path could be freely designated, which could let anyone take control of the machine running the Oracle service.
◇ File content can be read out: content is printed to the screen, recorded in log files readable by ordinary users, or written by a privileged process into files ordinary users can read. Such holes have appeared many times in the Unix crontab subsystem, letting ordinary users read the protected shadow file.
● Memory overwrites, mainly cases where both a memory location and the content written there can be specified, allowing the attacker to execute arbitrary code (buffer overflows, format-string vulnerabilities, the PTrace vulnerability, the historical Windows 2000 hole where users could write the hardware debug registers) or to directly alter secret data held in memory.
● Logic errors. Such holes are numerous but vary little, so they are hard to discover. They break down as follows:
◇ Race-condition vulnerabilities (usually design flaws; the Ptrace vulnerability is typical, and file-timing races are widespread).
◇ Wrong policy, usually in the design, such as the historical FreeBSD Smart IO vulnerability.
◇ Algorithm problems (usually in the code or the design), such as the historical hole in Microsoft Windows 95/98 that made share passwords trivially recoverable.
◇ Design imperfections, such as the three-way handshake of TCP/IP, whose weakness enabled SYN-flood denial-of-service attacks.
◇ Implementation mistakes (the design is sound, but the coding logic is wrong, as in historical betting systems with flawed pseudo-random algorithms).
● External commands: typically, external commands can be controlled (via the PATH variable, special characters passed to the shell, etc.), plus SQL injection issues.

6. By timeline:
● Long-known vulnerabilities: the vendor has already issued a patch or a repair method, and many people already know about them.
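The race-condition (file-timing) category above is easiest to see in a check-then-use file access. A minimal Python sketch of the pattern and one common mitigation; the function names are hypothetical and the attack window is only described, not exercised:

```python
import os
import stat
import tempfile

def insecure_read(path):
    """Time-of-check-to-time-of-use (TOCTOU) pattern.

    Between os.access() (the check) and open() (the use), an attacker
    who controls `path` can swap in a symbolic link to a protected file,
    so the file that was checked is not the file that gets opened.
    """
    if not os.access(path, os.R_OK):   # time of check
        raise PermissionError(path)
    with open(path) as f:              # time of use: the racy gap
        return f.read()

def safer_read(path):
    """Shrink the gap: open first, then inspect the open descriptor.

    os.fstat() examines the file object we already hold, so the thing
    checked and the thing read are the same kernel resource; O_NOFOLLOW
    (where available) also refuses a symlink at the final component.
    """
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    if not stat.S_ISREG(os.fstat(fd).st_mode):
        os.close(fd)
        raise ValueError("not a regular file")
    with os.fdopen(fd) as f:
        return f.read()
```

The same check-then-use shape underlies the crontab and Ptrace races mentioned above; the fix is always to make the check and the use operate on one and the same object.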
Such vulnerabilities have usually been patched by most people, so on a macro level the harm is rather small.
● Recently disclosed vulnerabilities: the vendor has just released a patch or repair method, and most people do not yet know about them. They are comparatively more dangerous; if a worm or an easy-to-use exploit appears, a large number of systems will be attacked.
● 0-day vulnerabilities: undisclosed holes traded privately. Usually such a hole has no visible public impact, but it lets an attacker aim a precision strike at a chosen target, so the harm is very great.

Exploiting vulnerabilities from different perspectives

If a flaw cannot be used to do what was "originally" impossible (in a safety-relevant way), it would not be called a security vulnerability; security holes are inevitably bound up with exploitation. The perspectives on exploitation are:
● Data: accessing data that should be inaccessible, both reading and writing. This is usually the attacker's core purpose and can cause very serious damage (for example, if banking data can be written).
● Privileges: bypassing or escalating permissions. Permissions are usually sought in order to obtain the ability to manipulate the desired data.
● Availability: gaining control over certain services on the system, which can stop important services and lead to a denial-of-service attack.
● Authentication bypass: exploiting a hole in the authentication system to gain access without authorization. Authentication is usually bypassed as a step toward privileges or direct data access.
● Code execution: feeding a program input that is then executed as code, to obtain remote access to a system or higher local privileges. This angle is the main driver behind SQL injection and the memory-corruption pointer games (buffer overflows, format strings, integer overflows, and so on).
This angle usually serves as preparation for bypassing authentication, gaining privileges, and reading data.

Methods of vulnerability discovery

First, note that security vulnerabilities are a subset of software bugs, so every software-testing technique also applies to vulnerability discovery. The methods "hackers" now use to hunt for holes mostly follow these models:
● Fuzz testing (black box): automatically testing a program by constructing input data likely to cause problems.
● Source-code audit (white box): a series of tools now exists to assist in detecting security bugs in programs; the simplest one at hand is the latest version of your C compiler.
● Decompilation audit with IDA (gray box): very similar to a source audit. The only difference is that you can often obtain the software but not its source code; IDA, however, is a very powerful disassembly platform that lets you audit on the basis of the assembly code (which is in effect equivalent to the source).
● Dynamic tracing: recording all security-relevant operations a program performs under different conditions (such as file operations), then analyzing these operation sequences for problems; this is one of the major ways race-condition vulnerabilities are found. Taint-propagation tracking also belongs to this category.
● Patch comparison: vendors usually address a disclosed problem in a patch. By diffing the files (or their disassembly) before and after the patch, one can learn the specific details of the vulnerability.

However many tools are involved, one point remains crucial: a human is needed to perform a comprehensive analysis covering every execution path.
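The fuzz-testing model above can be sketched in a few lines of Python. Here `parse_record` is a deliberately buggy toy target invented for illustration (it is not any real parser): it trusts a length byte without checking it against the buffer it actually received.

```python
import random

def parse_record(data: bytes):
    """Toy target: 1 length byte, `length` payload bytes, 1 checksum byte.

    The deliberate bug: the length byte is trusted blindly, so the
    checksum read can index past the end of the input (IndexError).
    """
    if not data:
        raise ValueError("empty input")          # anticipated, handled error
    length = data[0]
    payload = data[1:1 + length]
    checksum = data[1 + length]                   # bug: no bounds check
    return payload, checksum

def fuzz(target, rounds=1000, seed=0):
    """Throw short random byte strings at the target; keep unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 16)))
        try:
            target(blob)
        except ValueError:
            pass                                  # documented failure mode
        except Exception as exc:                  # anything else is a finding
            crashes.append((blob, exc))
    return crashes
```

Even this crude random generator finds the out-of-bounds read within a handful of iterations; real fuzzers add coverage feedback and input mutation on top of the same loop.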
The analysis methods are varied: analysis of design documents, analysis of source code, analysis of disassembled code, and dynamic debugging of the program.

Grading vulnerabilities

The harm assessed for a vulnerability should relate to the harm achievable by exploiting it; people often fail to realize that not every buffer-overflow vulnerability is high-risk. Here is a reasonable grading of remote vulnerabilities, from least to most severe:
● Remotely obtain OS and application version information.
● Unnecessary or dangerous services are open; remotely obtain sensitive system information.
● Remotely read restricted files or data.
● Remotely read important or more tightly restricted files or data.
● Remotely modify restricted files or data.
● Remotely modify restricted important files or data.
● Remotely modify important files or data without restriction, or mount a denial-of-service attack on ordinary services.
● Remotely execute commands as an ordinary user, or mount system- or network-level denial-of-service attacks.
● Remotely execute commands with an administrative identity (limited, or not easy to use).
● Remotely execute commands with an administrative identity (unrestricted, easy to use).

Almost all local vulnerabilities lead to code execution, so on the 10-point scale above they classify as:
● Remote code execution triggered actively (such as an IE vulnerability).
● Remote code execution triggered passively (such as a Word vulnerability or a charting-software vulnerability).

DEMO

A Unix server sits behind a firewall that permits access only from the operations and maintenance network; the operating system allows only the root and oracle users to log in, and it runs the Apache service (under the nobody account) and the Oracle service (under the oracle account).

An attacker's goal is to modify the billing data in an Oracle database table. The possible attack steps:
● 1. Gain access to the operations and maintenance network.
Obtain an IP address on that network in order to reach the firewall-protected Unix server.
● 2. Use a remote buffer-overflow vulnerability in the Apache service to gain direct shell access with the nobody account's privileges.
● 3. Use a vulnerability in some suid program of the operating system to escalate from those privileges to root.
● 4. Log into the database as Oracle sysdba (local login requires no password).
● 5. Modify the data in the target table.

Analyzing the five steps above:
● Step 1: authentication bypass.
● Step 2: remote code-execution vulnerability (run natively), authentication bypass.
● Step 3: privilege escalation, authentication bypass.
● Step 4: authentication bypass.
● Step 5: data write.
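The SQL-injection angle that recurs throughout the article comes down to attacker input being executed as code. A self-contained Python/sqlite3 sketch of both the flaw and the standard defense; the table, data, and function names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # String concatenation: attacker-controlled `name` becomes part of the SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver transmits `name` purely as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
# Unsafe: the payload rewrites the WHERE clause and returns every row.
# Safe: the same payload matches nothing, because it stays a literal string.
```

The parameterized form closes the hole precisely because input can no longer cross the boundary from data into code, which is the defining feature of the code-execution perspective described above.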

Graduation Project Chinese-English Translation


Integrated Circuit

An integrated circuit or monolithic integrated circuit (also referred to as IC, chip, or microchip) is an electronic circuit manufactured by the patterned diffusion of trace elements into the surface of a thin substrate of semiconductor material. Additional materials are deposited and patterned to form interconnections between semiconductor devices.

Integrated circuits are used in virtually all electronic equipment today and have revolutionized the world of electronics. Computers, mobile phones, and other digital appliances are now inextricable parts of the structure of modern societies, made possible by the low cost of production of integrated circuits.

Introduction

ICs were made possible by experimental discoveries showing that semiconductor devices could perform the functions of vacuum tubes and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.

There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, much less material is used to construct a packaged IC than to construct a discrete circuit. Performance is high because the components switch quickly and consume little power (compared to their discrete counterparts) as a result of the small size and close proximity of the components.
As of 2006, typical chip areas range from a few square millimeters to around 350 mm², with up to 1 million transistors per mm².

Terminology

Integrated circuit originally referred to a miniaturized electronic circuit consisting of semiconductor devices, as well as passive components bonded to a substrate or circuit board.[1] This configuration is now commonly referred to as a hybrid integrated circuit. Integrated circuit has since come to refer to the single-piece circuit construction originally known as a monolithic integrated circuit.[2]

Invention

Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate arranged in a two-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. A commercial use of his patent has not been reported.

The idea of the integrated circuit was conceived by Geoffrey W. A. Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952.[4] He gave many public symposia to propagate his ideas, and unsuccessfully attempted to build such a circuit in 1956.

A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a two-dimensional or three-dimensional compact grid. This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program.
However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.

Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on September 12, 1958. In his patent application of February 6, 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated." Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit, and his work was named an IEEE Milestone in 2009.

Robert Noyce came up with his own idea of an integrated circuit half a year later than Kilby. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.

Generations

In the early days of integrated circuits, only a few transistors could be placed on a chip, as the scale used was large because of the contemporary technology, and manufacturing yields were low by today's standards. As the degree of integration was small, the design was done easily. Over time, millions, and today billions, of transistors could be placed on one chip, and making a good design became a task to be planned thoroughly. This gave rise to new design methods.

SSI, MSI and LSI

The first integrated circuits contained only a few transistors. Called "small-scale integration" (SSI), digital circuits of this generation contained transistors numbering in the tens, while early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors.
The term "large-scale integration" was first used by IBM scientist Rolf Landauer when describing the theoretical concept; from there came the terms SSI, MSI, VLSI, and ULSI.

SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and the Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated integrated-circuit technology, while the Minuteman missile forced it into mass production. The Minuteman missile program and various other Navy programs accounted for the total $4 million integrated-circuit market in 1962, and by 1968, U.S. Government space and defense spending still accounted for 37% of the $312 million total production. This government demand supported the nascent integrated-circuit market until costs fell enough to allow firms to penetrate the industrial and eventually the consumer markets. The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968.[13] Integrated circuits began to appear in consumer products by the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI). They were attractive economically because, while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "large-scale integration" (LSI) in the mid-1970s, with tens of thousands of transistors per chip. Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be
manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.

VLSI

The final step in the development process, starting in the 1980s and continuing through the present, was "very-large-scale integration" (VLSI). The development started with hundreds of thousands of transistors in the early 1980s and continues beyond several billion transistors as of 2009.

Multiple developments were required to achieve this increased density. Manufacturers moved to smaller design rules and cleaner fabrication facilities so that they could make chips with more transistors and maintain adequate yield. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. Better texts, such as the landmark textbook by Mead and Conway, helped schools educate more designers, among other factors.

In 1986 the first one-megabit RAM chips were introduced, which contained more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989 and the billion-transistor mark in 2005.[14] The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.[15]

ULSI, WSI, SOC and 3D-IC

To reflect further growth of complexity, the term ULSI, which stands for "ultra-large-scale integration," was proposed for chips with a complexity of more than 1 million transistors.

Wafer-scale integration (WSI) is a system of building very large integrated circuits that uses an entire silicon wafer to produce a single "super-chip."
Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term very-large-scale integration, the current state of the art when WSI was being developed.

A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and building disparate components on a single piece of silicon may compromise the efficiency of some elements. However, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging).

A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.

Advances in integrated circuits

Among the most advanced integrated circuits are the microprocessors or "cores," which control everything from computers and cellular phones to digital microwave ovens. Digital memory chips and ASICs are examples of other families of integrated circuits that are important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized.
The performance of ICs is high because the small size allows short traces, which in turn allows low-power logic (such as CMOS) to be used at fast switching speeds.

ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality; see Moore's law, which, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years. In general, as the feature size shrinks, almost everything improves: the cost per unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current (see subthreshold leakage for a discussion of this), although these problems are not insurmountable and will likely be solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and power-consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors (ITRS).

In current research projects, integrated circuits are also being developed for sensor applications in medical implants and other bioelectronic devices.
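The doubling rule in Moore's law can be sanity-checked against the milestones quoted in the VLSI section above (roughly one million microprocessor transistors in 1989, one billion in 2005). A short sketch; the 1.6-year figure is a fitted illustration, not a claim from the text:

```python
def transistors(start_count, start_year, year, doubling_years=2.0):
    """Project a transistor count assuming it doubles every `doubling_years`."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# A strict 2-year doubling over 1989-2005 gives only 2**8 = 256x growth,
# while the historical jump was about 1000x; that pace corresponds to a
# doubling time closer to 1.6 years over this particular interval.
two_year_projection = transistors(1e6, 1989, 2005)        # 2.56e8
faster_projection   = transistors(1e6, 1989, 2005, 1.6)   # ~1.02e9
```

The mismatch illustrates why the "every two years" formulation is an interpretation: the measured doubling period has drifted over the decades.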
Particular sealing strategies have to be adopted in such biogenic environments to avoid corrosion or biodegradation of the exposed semiconductor materials.[16] As one of the few materials well established in CMOS technology, titanium nitride (TiN) has turned out to be exceptionally stable and well suited for electrode applications in medical implants.[17][18]

Classification

Integrated circuits can be classified into analog, digital, and mixed-signal (both analog and digital on the same chip).

Digital integrated circuits can contain anything from one to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using binary mathematics to process "one" and "zero" signals.

Analog ICs, such as sensors, power management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, and mixing. Analog ICs ease the burden on circuit designers by making expertly designed analog circuits available, instead of requiring a difficult analog circuit to be designed from scratch.

ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.

Manufacturing

Fabrication

Rendering of a small standard cell with three metal layers (dielectric has been removed). The sand-colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of tungsten. The reddish structures are polysilicon gates, and the solid at the bottom is the crystalline silicon bulk.

Schematic structure of a CMOS chip, as built in the early 2000s.
The graphic shows LDD-MISFETs on an SOI substrate with five metallization layers and a solder bump for flip-chip bonding. It also shows the sections for FEOL (front end of line), BEOL (back end of line), and the first parts of the back-end process.

The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon monocrystals are the main substrate used for ICs, although some III-V compounds of the periodic table, such as gallium arsenide, are used for specialized applications like LEDs, lasers, solar cells, and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.

Semiconductor ICs are fabricated in a layer process which includes these key process steps:
∙ Imaging
∙ Deposition
∙ Etching
The main process steps are supplemented by doping and cleaning.
∙ Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers.
∙ In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer.
∙ Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates," with insulating material between the plates.
Capacitors of a wide range of sizes are common on ICs.
∙ Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance.
∙ More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.

Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar devices.

A random-access memory is the most regular type of integrated circuit; the highest-density devices are thus memories, but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate, with widths which have been shrinking for decades, the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.

Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminum (or gold) bond wires which are welded and/or thermosonically bonded to pads, usually found around the edge of the die. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used.
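The length-to-width relation for on-chip resistors noted above is simply R = Rs × (L / W), where Rs is the sheet resistance in ohms per square. A short sketch; the example values are illustrative assumptions, not figures from the text:

```python
def resistor_ohms(sheet_res_ohm_per_sq, length_um, width_um):
    """On-chip resistor value: R = Rs * (L / W).

    A resistive stripe is, electrically, L/W "squares" in series; each
    square contributes the sheet resistance Rs regardless of its
    absolute size, which is why only the length-to-width ratio matters.
    """
    if width_um <= 0:
        raise ValueError("width must be positive")
    return sheet_res_ohm_per_sq * (length_um / width_um)

# Example: a 200 um long, 2 um wide stripe at an assumed Rs of
# 50 ohm/square is 100 squares, i.e. a 5 kOhm resistor.
example_r = resistor_ohms(50, 200, 2)
```

The "squares" abstraction also explains the meandering layouts: folding the stripe packs many squares of length into a compact area.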
Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, and/or higher-cost devices.

As of 2005, a fabrication facility (commonly known as a semiconductor fab) costs over $1 billion to construct,[19] because much of the operation is automated. Today, the most advanced processes employ the following techniques:
∙ Wafers up to 300 mm in diameter (wider than a common dinner plate).
∙ A 32-nanometer or smaller chip manufacturing process. Intel, IBM, NEC, and AMD are using ~32 nanometers for their CPU chips; IBM and AMD introduced immersion lithography for their 45 nm processes.[20]
∙ Copper interconnects, where copper wiring replaces aluminium for interconnects.
∙ Low-K dielectric insulators.
∙ Silicon on insulator (SOI).
∙ Strained silicon, in a process used by IBM known as strained silicon directly on insulator (SSDOI).
∙ Multigate devices, such as the tri-gate transistors manufactured by Intel from 2011 in their 22 nm process.

Packaging

In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high-pin-count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages on high-end microprocessors to land grid array (LGA) packages.

Ball grid array (BGA) packages have existed since the 1970s. Flip-chip ball grid array (FCBGA) packages, which allow for a much higher pin count than other package types, were developed in the 1990s. In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board, rather than by wires.
FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery.

Traces out of the die, through the package, and into the printed circuit board have very different electrical properties compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself.

When multiple dies are put in one package, it is called a SiP, for system in package. When multiple dies are combined on a small substrate, often ceramic, it is called an MCM, or multi-chip module. The boundary between a big MCM and a small printed circuit board is sometimes fuzzy.

Chip labeling and manufacture date

Most integrated circuits large enough to include identifying information bear four common sections: the manufacturer's name or logo, the part number, a part production batch number and/or serial number, and a four-digit code that identifies when the chip was manufactured. Extremely small surface-mount parts often bear only a number used in a manufacturer's lookup table to find the chip's characteristics.

The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983.

Legal protection of semiconductor chip layouts

Like most other forms of intellectual property, IC layout designs are creations of the human mind. They are usually the result of an enormous investment, both in terms of the time of highly qualified experts and financially. There is a continuing need for the creation of new layout designs which reduce the dimensions of existing integrated circuits while simultaneously increasing their functions. The smaller an integrated circuit, the less material needed for its manufacture, and the smaller the space needed to accommodate it.
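The YYWW date-code convention described in the chip-labeling section above (code 8341 = week 41 of 1983) is mechanical enough to decode in a few lines. A sketch; note that the century is not encoded at all, so the 19xx assumption below only suits older parts like the 1983 example:

```python
def decode_date_code(code: str):
    """Decode a four-digit YYWW chip date code: two-digit year, two-digit week.

    Returns (year, week). The century is guessed as 19xx, an assumption
    appropriate for the 8341 example in the text but not for modern parts.
    """
    if len(code) != 4 or not code.isdigit():
        raise ValueError("expected four digits, e.g. '8341'")
    year, week = int(code[:2]), int(code[2:])
    if not 1 <= week <= 53:
        raise ValueError("week number out of range")
    return 1900 + year, week

# decode_date_code("8341") -> (1983, 41): week 41 of 1983, roughly October.
```

Real parts vary by vendor (some interleave the code with lot numbers), which is why the lookup-table caveat above applies to small packages.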
Integrated circuits are utilized in a large range of products, including articles of everyday use such as watches, television sets, washing machines, and automobiles, as well as sophisticated data-processing equipment. The possibility of copying by photographing each layer of an integrated circuit and preparing photomasks for its production on the basis of the photographs obtained is the main reason for the introduction of legislation for the protection of layout-designs.

A diplomatic conference held at Washington, D.C., in 1989 adopted a Treaty on Intellectual Property in Respect of Integrated Circuits (IPIC Treaty). The treaty, also called the Washington Treaty (signed at Washington on May 26, 1989), is currently not in force, but was partially integrated into the TRIPS agreement. National laws protecting IC layout designs have been adopted in a number of countries.

Other developments

In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated-circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders, and registers. Current devices called field-programmable gate arrays can now implement tens of thousands of LSI circuits in parallel and operate up to 1.5 GHz (Achronix holding the speed record).

The techniques perfected by the integrated-circuits industry over the last three decades have been used to create very small mechanical devices driven by electricity, using a technology known as microelectromechanical systems. These devices are used in a variety of commercial and military applications.
Example commercial applications include DLP projectors, inkjet printers, and the accelerometers used to deploy automobile airbags.

In the past, radios could not be fabricated in the same low-cost processes as microprocessors. But since 1998, a large number of radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless phone and Atheros's 802.11 card.

Future developments seem to follow the multi-core multi-microprocessor paradigm, already used by the Intel and AMD dual-core processors. Intel recently unveiled a prototype, "not for commercial sale" chip that bears 80 microprocessors. Each core is capable of handling its own task independently of the others. This is in response to the heat-versus-speed limit that is about to be reached using existing transistor technology. The design provides a new challenge to chip programming. Parallel programming languages such as the open-source X10 programming language are designed to assist with this task.

Integrated circuits (translation): An integrated circuit, or monolithic integrated electronic circuit (also called an IC, chip, or microchip), is an electronic circuit manufactured by the patterned diffusion of trace elements into the surface of a thin substrate of semiconductor material.

Power-supply graduation project (with foreign-language source and Chinese translation)


Design of the Protection and Safeguard Systems for the Substation of an Iron and Steel Enterprise

1 Introduction

1.1 Development of substation relay protection

The substation is an important component of the power system: it directly affects the safe and economical operation of the entire system, serves as the intermediate link between power plants and users, and performs the transformation and distribution of electrical energy. The main electrical connection scheme is the key element of a power plant or substation; its formulation directly determines the selection of all electrical equipment in the plant, the layout of the switchgear, and the configuration of relay protection and automatic devices, and is the decisive factor in the cost of the electrical part of the substation.

Current state of relay-protection development: the rapid development of the power system continually places new demands on relay protection, while rapid advances in electronics, computer technology, and communication technology keep injecting new vitality into relay-protection technology. Relay protection has therefore enjoyed uniquely favorable conditions, completing four historical stages of development in little more than 40 years.

With the rapid development of power systems and the progress of computer and communication technology, relay-protection technology faces a trend of further development.

The development trends of relay-protection technology at home and abroad are: computerization; networking; the integration of protection, control, measurement, and data communication; and artificial intelligence.

Future development of relay protection: the technology is trending toward computerization, networking, intelligence, and the integration of protection, control, measurement, and data communication.

Development trends of microprocessor-based protection: (1) application of high-speed data-processing chips; (2) networking of microprocessor-based protection; (3) integration of protection, control, measurement, signaling, and data communication; (4) intelligent relay protection.

1.2 Main work of this thesis

In this graduation project I carried out the design of the protection and safeguard systems for the substation of an iron and steel enterprise, making full use of the knowledge I have learned and strictly following the requirements of the task statement. The work centers on the reliability and flexibility of the main connection scheme to be designed, and includes load calculation, selection of the main connection scheme, short-circuit current calculation, configuration of the main-transformer relay protection, and the calculation and verification of the line relay protection.

1.3 Design overview

1.3.1 Design basis

1) The relay-protection design task statement.

2) National standard GB50062-92, Design Code for Relay Protection and Automatic Devices of Electric Power Installations.

3) Power Supply for Industrial Enterprises.

1.3.2 Original design data

The enterprise has 12 workshops in total, undertaking the repair and manufacture of equipment and transformers for its subsidiary plants.

1. Electrical equipment in each workshop: see the equipment list in Table 1.1.

Table 1.1 List of electrical equipment

2. Load characteristics: most workshops in the plant operate a single shift, while a few operate two or three shifts; the annual maximum-active-load utilization time is 2300 h.
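The 2300 h figure is the standard T_max input to the estimate E = P_max × T_max for annual energy demand. A minimal sketch; the 1500 kW maximum load is hypothetical (the actual plant load would come from the Table 1.1 load calculation, which is not reproduced here):

```python
def annual_energy_kwh(p_max_kw: float, t_max_h: float = 2300) -> float:
    """Annual energy demand from the maximum active load P_max and the
    annual maximum-load utilization hours T_max: E = P_max * T_max."""
    return p_max_kw * t_max_h

# hypothetical 1500 kW maximum active load for this plant
print(annual_energy_kwh(1500))  # 3450000.0 kWh/year
```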

Graduation project (thesis): foreign-language source and translation


I. Foreign-language source

MCU

A microcontroller (or MCU) is a computer-on-a-chip. It is a type of microprocessor emphasizing self-sufficiency and cost-effectiveness, in contrast to a general-purpose microprocessor (the kind used in a PC). With the development of technology and of control systems in a wide range of applications, and as equipment becomes smaller and more intelligent, the single-chip microcomputer, with its small size, powerful functions, low cost, and flexible use, shows strong vitality as a key technology. It generally has better anti-interference capability than comparable integrated circuits and better adaptability to ambient temperature and humidity, so it can operate stably under industrial conditions. Single-chip microcomputers are widely used in all kinds of instruments and meters, making instrumentation intelligent, improving measurement speed and accuracy, and strengthening control functions. In short, with the advent of the information age, the inherent structural weaknesses of the traditional single-chip microcomputer have exposed many drawbacks: its speed, scale, and performance indicators increasingly fail to meet users' needs, and the development and upgrading of single-chip chipsets face new challenges.

The Description of the AT89S52

The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of In-System Programmable Flash memory. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard 80C51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer.
By combining a versatile 8-bit CPU with In-System Programmable Flash on a monolithic chip, the Atmel AT89S52 is a powerful microcontroller that provides a highly flexible and cost-effective solution to many embedded control applications.

The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, a watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full-duplex serial port, an on-chip oscillator, and clock circuitry. In addition, the AT89S52 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue functioning. The Power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next interrupt or hardware reset.

Features
• Compatible with MCS-51® products
• 8K bytes of In-System Programmable (ISP) Flash memory
  – Endurance: 1000 write/erase cycles
• 4.0 V to 5.5 V operating range
• Fully static operation: 0 Hz to 33 MHz
• Three-level program memory lock
• 256 x 8-bit internal RAM
• 32 programmable I/O lines
• Three 16-bit timer/counters
• Eight interrupt sources
• Full-duplex UART serial channel
• Low-power Idle and Power-down modes
• Interrupt recovery from Power-down mode
• Watchdog timer
• Dual data pointer
• Power-off flag

Pin Description

VCC — supply voltage.

GND — ground.

Port 0

Port 0 is an 8-bit open-drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to Port 0 pins, the pins can be used as high-impedance inputs. Port 0 can also be configured to be the multiplexed low-order address/data bus during accesses to external program and data memory. In this mode, P0 has internal pullups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification.
External pullups are required during program verification.

Port 1

Port 1 is an 8-bit bidirectional I/O port with internal pullups. The Port 1 output buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pullups. In addition, P1.0 and P1.1 can be configured to be the Timer/Counter 2 external count input (P1.0/T2) and the Timer/Counter 2 trigger input (P1.1/T2EX), respectively. Port 1 also receives the low-order address bytes during Flash programming and verification.

Port 2

Port 2 is an 8-bit bidirectional I/O port with internal pullups. The Port 2 output buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pullups. Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses (MOVX @DPTR). In this application, Port 2 uses strong internal pullups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @RI), Port 2 emits the contents of the P2 Special Function Register. Port 2 also receives the high-order address bits and some control signals during Flash programming and verification.

Port 3

Port 3 is an 8-bit bidirectional I/O port with internal pullups. The Port 3 output buffers can sink/source four TTL inputs. When 1s are written to Port 3 pins, they are pulled high by the internal pullups and can be used as inputs.
As inputs, Port 3 pins that are externally being pulled low will source current (IIL) because of the pullups. Port 3 also serves the functions of various special features of the AT89S52, as shown in the following table. Port 3 also receives some control signals for Flash programming and verification.

RST

Reset input. A high on this pin for two machine cycles while the oscillator is running resets the device. This pin drives high for 96 oscillator periods after the watchdog times out. The DISRTO bit in SFR AUXR (address 8EH) can be used to disable this feature. In the default state of bit DISRTO, the RESET HIGH out feature is enabled.

ALE/PROG

Address Latch Enable (ALE) is an output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming. In normal operation, ALE is emitted at a constant rate of 1/6 the oscillator frequency and may be used for external timing or clocking purposes. Note, however, that one ALE pulse is skipped during each access to external data memory. If desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction. Otherwise, the pin is weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in external execution mode.

PSEN

Program Store Enable (PSEN) is the read strobe to external program memory. When the AT89S52 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.

EA/VPP

External Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset.
EA should be strapped to VCC for internal program execution. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming.

XTAL1

Input to the inverting oscillator amplifier and input to the internal clock operating circuit.

XTAL2

Output from the inverting oscillator amplifier.

Special Function Registers

Note that not all of the addresses are occupied, and unoccupied addresses may not be implemented on the chip. Read accesses to these addresses will in general return random data, and write accesses will have an indeterminate effect. User software should not write 1s to these unlisted locations, since they may be used in future products to invoke new features. In that case, the reset or inactive values of the new bits will always be 0.

Timer 2 registers: Control and status bits are contained in registers T2CON and T2MOD for Timer 2. The register pair (RCAP2H, RCAP2L) are the Capture/Reload registers for Timer 2 in 16-bit capture mode or 16-bit auto-reload mode.

Interrupt registers: The individual interrupt enable bits are in the IE register. Two priorities can be set for each of the six interrupt sources in the IP register.

Dual Data Pointer registers: To facilitate accessing both internal and external data memory, two banks of 16-bit Data Pointer registers are provided: DP0 at SFR address locations 82H-83H and DP1 at 84H-85H. Bit DPS = 0 in SFR AUXR1 selects DP0 and DPS = 1 selects DP1. The user should always initialize the DPS bit to the appropriate value before accessing the respective Data Pointer register.

Power-Off Flag: The Power-Off Flag (POF) is located at bit 4 (PCON.4) in the PCON SFR. POF is set to "1" during power-up. It can be set and reset under software control and is not affected by reset.

Memory Organization

MCS-51 devices have a separate address space for program and data memory.
Up to 64K bytes each of external program and data memory can be addressed.

Program Memory

If the EA pin is connected to GND, all program fetches are directed to external memory. On the AT89S52, if EA is connected to VCC, program fetches to addresses 0000H through 1FFFH are directed to internal memory, and fetches to addresses 2000H through FFFFH are directed to external memory.

Data Memory

The AT89S52 implements 256 bytes of on-chip RAM. The upper 128 bytes occupy a parallel address space to the Special Function Registers. This means that the upper 128 bytes have the same addresses as the SFR space but are physically separate from it.

When an instruction accesses an internal location above address 7FH, the addressing mode used in the instruction specifies whether the CPU accesses the upper 128 bytes of RAM or the SFR space. Instructions that use direct addressing access the SFR space. For example, the following direct addressing instruction accesses the SFR at location 0A0H (which is P2):

MOV 0A0H, #data

Instructions that use indirect addressing access the upper 128 bytes of RAM. For example, the following indirect addressing instruction, where R0 contains 0A0H, accesses the data byte at address 0A0H rather than P2 (whose address is 0A0H):

MOV @R0, #data

Note that stack operations are examples of indirect addressing, so the upper 128 bytes of data RAM are available as stack space.

Timer 0 and 1

Timer 0 and Timer 1 in the AT89S52 operate the same way as Timer 0 and Timer 1 in the AT89C51 and AT89C52.

Timer 2

Timer 2 is a 16-bit timer/counter that can operate as either a timer or an event counter. The type of operation is selected by bit C/T2 in the SFR T2CON (shown in Table 2). Timer 2 has three operating modes: capture, auto-reload (up or down counting), and baud-rate generator. The modes are selected by bits in T2CON. Timer 2 consists of two 8-bit registers, TH2 and TL2. In the timer function, the TL2 register is incremented every machine cycle.
Since a machine cycle consists of 12 oscillator periods, the count rate is 1/12 of the oscillator frequency.

In the counter function, the register is incremented in response to a 1-to-0 transition at its corresponding external input pin, T2. In this function, the external input is sampled during S5P2 of every machine cycle. When the samples show a high in one cycle and a low in the next cycle, the count is incremented. The new count value appears in the register during S3P1 of the cycle following the one in which the transition was detected. Since two machine cycles (24 oscillator periods) are required to recognize a 1-to-0 transition, the maximum count rate is 1/24 of the oscillator frequency. To ensure that a given level is sampled at least once before it changes, the level should be held for at least one full machine cycle.

Interrupts

The AT89S52 has a total of six interrupt vectors: two external interrupts (INT0 and INT1), three timer interrupts (Timers 0, 1, and 2), and the serial port interrupt. These interrupts are all shown in Figure 10. Each of these interrupt sources can be individually enabled or disabled by setting or clearing a bit in Special Function Register IE. IE also contains a global disable bit, EA, which disables all interrupts at once.

Note that Table 5 shows that bit position IE.6 is unimplemented. In the AT89S52, bit position IE.5 is also unimplemented. User software should not write 1s to these bit positions, since they may be used in future AT89 products. The Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and the bit will have to be cleared in software.

The Timer 0 and Timer 1 flags, TF0 and TF1, are set at S5P2 of the cycle in which the timers overflow. The values are then polled by the circuitry in the next cycle.
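The machine-cycle arithmetic described for the timers (12 oscillator periods per machine cycle in timer mode, 24 to recognize a 1-to-0 transition in counter mode) reduces to two divisions. A small sketch; the 12 MHz crystal is an illustrative choice, not a value from the datasheet excerpt:

```python
def mcs51_rates(f_osc_hz: float) -> dict[str, float]:
    """Count rates for a classic 12-clock MCS-51 core.

    Timer mode increments once per machine cycle (f_osc/12); counter mode
    needs two machine cycles to recognize a transition (f_osc/24).
    """
    return {
        "machine_cycle_s": 12 / f_osc_hz,
        "timer_rate_hz": f_osc_hz / 12,
        "max_counter_rate_hz": f_osc_hz / 24,
    }

rates = mcs51_rates(12e6)  # 12 MHz crystal, a common choice
print(rates["timer_rate_hz"], rates["max_counter_rate_hz"])  # 1000000.0 500000.0
```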
However, the Timer 2 flag, TF2, is set at S2P2 and is polled in the same cycle in which the timer overflows.

II. Translation (excerpt)

The single-chip microcomputer is a microcomputer that integrates the central processing unit, memory, timer/counters, and input/output interfaces on one integrated-circuit chip.

Graduation project Chinese-English translation


Bridge Waterway Openings

In a majority of cases the height and length of a bridge depend solely upon the amount of clear waterway opening that must be provided to accommodate the floodwaters of the stream. Actually, the problem goes beyond merely accommodating the floodwaters and requires prediction of the various magnitudes of floods for given time intervals. It would be impossible to state that some given magnitude is the maximum that will ever occur, and it is therefore impossible to design for the maximum, since it cannot be ascertained. It seems more logical to design for a predicted flood of some selected interval: a flood magnitude that could reasonably be expected to occur once within a given number of years. For example, a bridge may be designed for a 50-year flood interval; that is, for a flood which is expected (according to the laws of probability) to occur on the average of one time in 50 years. Once this design flood frequency, or interval of expected occurrence, has been decided, the analysis to determine a magnitude is made. Whenever possible, this analysis is based upon gauged stream records. In areas and for streams where flood frequency and magnitude records are not available, an analysis can still be made. With data from gauged streams in the vicinity, regional flood frequencies can be worked out; with a correlation between the computed discharge for the ungauged stream and the regional flood frequency, a flood frequency curve can be computed for the stream in question.

Highway Culverts

Any closed conduit used to conduct surface runoff from one side of a roadway to the other is referred to as a culvert. Culverts vary in size from large multiple installations used in lieu of a bridge to small circular or elliptical pipe, and their design varies in significance. Accepted practice treats conduits under the roadway as culverts.
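The "laws of probability" invoked for the 50-year flood above can be made concrete: assuming independent years, each with exceedance probability 1/T, the chance of at least one T-year flood in an n-year horizon is 1 − (1 − 1/T)^n. A short sketch (the design numbers are illustrative):

```python
def prob_flood(return_period_yr: float, horizon_yr: int) -> float:
    """Probability that a T-year flood occurs at least once in n years,
    assuming independent years: P = 1 - (1 - 1/T)**n."""
    return 1.0 - (1.0 - 1.0 / return_period_yr) ** horizon_yr

# A structure designed for the 50-year flood, over a 50-year service life:
print(round(prob_flood(50, 50), 3))  # 0.636
```

That is, a bridge sized for the 50-year flood still has roughly a 64% chance of seeing such a flood during a 50-year life, which is why the choice of design interval is an economic decision rather than a guarantee.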
Although the unit cost of culverts is much less than that of bridges, they are far more numerous, normally averaging about eight to the mile, and represent a greater share of highway cost. Statistics show that about 15 cents of the highway-construction dollar goes to culverts, as compared with 10 cents for bridges. Culvert design, then, is equally as important as that of bridges or other phases of highway design and should be treated accordingly.

Municipal Storm Drainage

In urban and suburban areas, runoff waters are handled through a system of drainage structures referred to as storm sewers and their appurtenances. The drainage problem is increased in these areas primarily for two reasons: the impervious nature of the area creates a very high runoff, and there is little room for natural water courses. It is often necessary to collect the entire storm water into a system of pipes and transmit it over considerable distances before it can be loosed again as surface runoff. This collection and transmission further increase the problem, since all of the water must be collected with virtually no ponding, thus eliminating any natural storage, and through increased velocity the peak runoffs are reached more quickly. Also, the shorter times of peaks cause the system to be more sensitive to short-duration, high-intensity rainfall. Storm sewers, like culverts and bridges, are designed for storms of various intensity/return-period relationships, depending upon the economy and the amount of ponding that can be tolerated.

Airport Drainage

The problem of providing proper drainage facilities for airports is similar in many ways to that of highways and streets. However, because of the large and relatively flat surface involved, the varying soil conditions, the absence of natural water courses and possible side ditches, and the greater concentration of discharge at the terminus of the construction area, some phases of the problem are more complex.
For the average airport the overall area to be drained is relatively large, and an extensive drainage system is required. The magnitude of such a system makes it even more imperative that sound engineering principles based on all of the best available data be used to ensure the most economical design. Overdesign of facilities results in excessive money investment with no return, and underdesign can result in conditions hazardous to the air traffic using the airport.

In order to ensure surfaces that are smooth, firm, stable, and reasonably free from flooding, it is necessary to provide a system which will do several things. It must collect and remove the surface water from the airport surface; intercept and remove surface water flowing toward the airport from adjacent areas; collect and remove any excessive subsurface water beneath the surface of the airport facilities, in many cases lowering the ground-water table; and provide protection against erosion of the sloping areas.

Ditches and Cut-slope Drainage

A highway cross section normally includes one and often two ditches paralleling the roadway. Generally referred to as side ditches, these serve to intercept the drainage from slopes and to conduct it to where it can be carried under the roadway or away from the highway section, depending upon the natural drainage. To a limited extent they also serve to conduct subsurface drainage from beneath the roadway to points where it can be carried away from the highway section.

A second type of ditch, generally referred to as a crown ditch, is often used for the erosion protection of cut slopes. This ditch along the top of the cut slope serves to intercept surface runoff from the slopes above and conduct it to natural water courses on milder slopes, thus preventing the erosion that would be caused by permitting the runoff to spill down the cut faces.

12 Construction Techniques

The decision of how a bridge should be built depends mainly on local conditions.
These include the cost of materials, available equipment, allowable construction time, and environmental restrictions. Since all of these vary with location and time, the best construction technique for a given structure may also vary.

Incremental Launching or Push-out Method

In this form of construction the deck is pushed across the span with hydraulic rams or winches. Decks of prestressed post-tensioned precast segments, steel, or girders have been erected. Usually spans are limited to 50-60 m to avoid excessive deflection and cantilever stresses, although greater distances have been bridged by installing temporary support towers. Typically the method is most appropriate for long, multi-span bridges in the range 300-600 m, but much shorter and longer bridges have been constructed. Unfortunately, this very economical mode of construction can only be applied when both the horizontal and vertical alignments of the deck are perfectly straight, or alternatively of constant radius. Where pushing involves a small downward grade (4%-5%), a braking system should be installed to prevent the deck slipping away uncontrolled, and heavy bracing is then needed at the restraining piers.

Bridge launching demands very careful surveying and setting out, with continuous and precise checks made of deck deflections. A light aluminum or steel launching nose forms the head of the deck to provide guidance over the pier. Special Teflon or chrome-nickel steel plate bearings are used to reduce sliding friction to about 5% of the weight; thus slender piers would normally be supplemented with braced columns to avoid cracking and other damage. These columns would generally also support the temporary friction bearings and help steer the nose.

In the case of precast construction, ideally segments should be cast on beds near the abutments and transferred by rail to the post-tensioning bed, the actual transport distance obviously being kept to the minimum.
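The ~5%-of-weight friction figure and the 4%-5% downgrade warning above combine into simple statics: friction resists the push on the level, while on a downgrade the weight component along the deck works with the launch and may overcome the bearings. A rough sketch with a hypothetical deck weight (none is given in the text); small-angle statics only, not a design calculation:

```python
def launch_forces_kn(weight_kn: float, mu: float = 0.05,
                     grade: float = 0.0) -> tuple[float, float]:
    """Rough (push, brake) force estimate for incremental launching.

    mu ~= 0.05 reflects the PTFE/steel plate bearings (about 5% of
    weight); grade is the downhill slope as a fraction (0.04 for 4%).
    """
    along = weight_kn * grade       # weight component pulling the deck downhill
    fric = weight_kn * mu           # sliding resistance at the bearings
    push = max(fric - along, 0.0)   # jacking force needed to advance
    brake = max(along - fric, 0.0)  # restraint needed if gravity wins
    return push, brake

# hypothetical 40 000 kN deck: level launch vs. a 4% downgrade
print(launch_forces_kn(40_000))              # push ~2000 kN, no braking
print(launch_forces_kn(40_000, grade=0.04))  # friction only just exceeds gravity
```

At a 4%-5% grade the two terms nearly cancel, which is exactly why the text insists on a braking system there: a slightly lower friction coefficient flips the sign.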
Usually a segment is cast against the face of the previously concreted unit to ensure a good fit when finally glued in place with an epoxy resin. If this procedure is not adopted, gaps of approximately 500 mm should be left between segments, with the reinforcement running through and stressed together to form a complete unit; but when access or space on the embankment is at a premium it may be necessary to launch the deck intermittently to allow sections to be added progressively. The corresponding prestressing arrangements, both for the temporary and permanent conditions, would be more complicated, and careful calculations are needed at all positions.

The principal advantage of the bridge-launching technique is the saving in falsework, especially for high decks. Segments can also be fabricated or precast in a protected environment using highly productive equipment. For concrete segments, typically two segments are laid each week (usually 10-30 m in length and perhaps 300 to 400 tonnes in weight) and, after post-tensioning, incrementally launched at about 20 m per day depending upon the winching/jacking equipment.

Balanced Cantilever Construction

Developments in box sections and prestressed concrete led to short segments being assembled or cast in place on falsework to form a beam of full roadway width. Subsequently the method was refined virtually to eliminate the falsework by using a previously constructed section of the beam to provide the fixing for a subsequently cantilevered section. The principle is demonstrated step by step in the example shown in Fig. 1. In the simple case illustrated, the bridge consists of three spans in the ratio 1:1:2. First the abutments and piers are constructed independently from the bridge superstructure.
The segment immediately above each pier is then either cast in situ or placed as a precast unit. The deck is subsequently formed by adding sections symmetrically on either side. Ideally, sections on either side should be placed simultaneously, but this is usually impracticable, and some imbalance will result from the extra segment weight, wind forces, construction plant, and materials. When the cantilever has reached both the abutment and the centre of the span, work can begin from the other pier, and the remainder of the deck is completed in a similar manner. Finally the two individual cantilevers are linked at the centre by a key segment to form a single span. The key is normally cast in situ.

The procedure initially requires the first section above the column, and perhaps one or two on each side, to be erected conventionally, either in in-situ concrete or precast, and temporarily supported while steel tendons are threaded and post-tensioned. Subsequent pairs of sections are added and held in place by post-tensioning, followed by grouting of the ducts. During this phase only the cantilever tendons in the upper flange and webs are tensioned. Continuity tendons are stressed after the key section has been cast in place. The final gap left between the two half-spans should be wide enough to enable the jacking equipment to be inserted.
When the individual cantilevers are completed and the key section inserted, the continuity tendons are anchored symmetrically about the centre of the span and serve to resist superimposed loads, live loads, redistribution of dead loads, and cantilever prestressing forces.

The earlier bridges were designed on the free-cantilever principle with an expansion joint incorporated at the centre. Unfortunately, settlements, deformations, concrete creep, and prestress relaxation tended to produce deflection in each half-span, disfiguring the general appearance of the bridge and causing discomfort to drivers. These effects, coupled with the difficulties in designing a suitable joint, led designers to choose a continuous connection, resulting in a more uniform distribution of the loads and reduced deflection. The natural movements were provided for at the bridge abutments using sliding bearings or, in the case of long multi-span bridges, joints at about 500 m centres.

Special Requirements of Advanced Construction Techniques

There are three important areas that the engineering and construction team has to consider:

(1) Stress analysis during construction. Because the loadings and support conditions of the bridge during construction are different from those of the finished bridge, stresses in each construction stage must be calculated to ensure the safety of the structure. For this purpose, realistic construction loads must be used, and site personnel must be informed of all the loading limitations. Wind and temperature are usually significant for the construction stage.

(2) Camber. In order to obtain a bridge with the right elevation, the required camber of the bridge at each construction stage must be calculated, with due consideration given to creep and shrinkage of the concrete.
This kind of calculation, although cumbersome, has been simplified by the use of computers.

(3) Quality control. This is important for any method of construction, but more so for the complicated construction techniques. Curing of the concrete, post-tensioning, joint preparation, etc., are critical to a successful structure. The site personnel must be made aware of the minimum concrete strengths required for post-tensioning, form removal, falsework removal, launching, and the other steps of the operation.

Generally speaking, these advanced construction techniques require more engineering work than conventional falsework-type construction, but the savings can be significant.

Bridge waterway openings (translation): In the majority of cases the height and span of a bridge depend entirely on the flow of the river; the bridge must be able to accommodate the maximum flood. In fact, this is not merely a question of the maximum flood discharge: floods of different magnitudes must also be predicted for different time intervals.

Foreign-language translation for an electrical-engineering graduation project (Chinese-English, side by side)


The Transformer on Load & Introduction to DC Machines

The Transformer on Load

It has been shown that a primary input voltage V1 can be transformed to any desired open-circuit secondary voltage E2 by a suitable choice of turns ratio. E2 is available for circulating a load current through an impedance. For the moment, a lagging power factor will be considered. The secondary current and the resulting ampere-turns I2N2 will change the flux, tending to demagnetize the core, reduce Φm, and with it E1. Because the primary leakage impedance drop is so low, a small alteration to E1 will cause an appreciable increase of primary current from I0 to a new value I1 equal to (V1 − E1)/(R1 + jX1). The extra primary current and ampere-turns nearly cancel the whole of the secondary ampere-turns. This being so, the mutual flux suffers only a slight modification and requires practically the same net ampere-turns I0N1 as on no load. The total primary ampere-turns are increased by an amount I2N2 necessary to neutralize the same amount of secondary ampere-turns. In the vector equation, I1N1 + I2N2 = I0N1; alternatively, I1N1 = I0N1 − I2N2. At full load, the current I0 is only about 5% of the full-load current, and so I1 is nearly equal to I2N2/N1. Bearing in mind that E1/E2 = N1/N2, the input kVA, which is approximately E1I1, is also approximately equal to the output kVA, E2I2.

The primary current has increased, and with it the primary leakage flux, to which it is proportional. The total flux linking the primary, Φp = Φm + Φ1 = Φ11, is shown unchanged because the total back e.m.f., E1 = −N1 dΦ11/dt, is still equal and opposite to V1. However, there has been a redistribution of flux, and the mutual component has fallen due to the increase of Φ1 with I1. Although the change is small, the secondary demand could not be met without a mutual flux and e.m.f. alteration to permit primary current to change.
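The ampere-turn balance just stated, I1N1 + I2N2 = I0N1, can be checked numerically by treating the currents as phasors (complex numbers). All numeric values below are illustrative, not taken from the text:

```python
import cmath

N1, N2 = 1000, 100                     # illustrative turns
I0 = cmath.rect(0.5, -cmath.pi / 2)    # small magnetizing current, ~90 deg lag
I2 = cmath.rect(100.0, -cmath.pi / 6)  # full-load secondary current, 30 deg lag

# Rearranged balance: I1 = I0 - I2*N2/N1, so that I1*N1 + I2*N2 = I0*N1
I1 = I0 - I2 * N2 / N1

residual = I1 * N1 + I2 * N2 - I0 * N1  # should be (numerically) zero
print(abs(residual) < 1e-9, round(abs(I1), 2))
```

With I0 only a few percent of the referred load current, |I1| comes out close to I2N2/N1 = 10 A, which is the "at full load, I1 is nearly I2N2/N1" remark in the text.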
The net flux Φs linking the secondary winding has been further reduced by the establishment of secondary leakage flux due to I2, and this opposes Φm. Although Φm and Φ2 are indicated separately, they combine into one resultant in the core, which will be downwards at the instant shown. Thus the secondary terminal voltage is reduced to V2 = -N2 dΦs/dt, which can be considered in two components, i.e. V2 = -N2 dΦm/dt - N2 dΦ2/dt, or vectorially V2 = E2 - jX2I2. As for the primary, Φ2 is responsible for a substantially constant secondary leakage inductance N2Φ2/i2 = N2²Λ2. It will be noticed that the primary leakage flux is responsible for part of the change in the secondary terminal voltage, through its effect on the mutual flux. The two leakage fluxes are closely related; Φ2, for example, by its demagnetizing action on Φm, has caused the changes on the primary side which led to the establishment of primary leakage flux.

If a low enough leading power factor is considered, the total secondary flux and the mutual flux are increased, causing the secondary terminal voltage to rise with load. Φp is unchanged in magnitude from the no-load condition since, neglecting resistance, it still has to provide a total back e.m.f. equal to V1. It is virtually the same as Φ11, though now produced by the combined effect of primary and secondary ampere-turns. The mutual flux must still change with load to give a change of E1 and permit more primary current to flow. E1 has increased this time, but due to the vector combination with V1 there is still an increase of primary current.

Two more points should be made about the figures. Firstly, a unity turns ratio has been assumed for convenience, so that E1 = E2'. Secondly, the physical picture is drawn for a different instant of time from the vector diagrams, which show Φm = 0 if the horizontal axis is taken, as usual, as the zero-time reference.
There are instants in the cycle when the primary leakage flux is zero, when the secondary leakage flux is zero, and when the primary and secondary leakage fluxes are in the same sense.

The equivalent circuit already derived for the transformer with the secondary terminals open can easily be extended to cover the loaded secondary by the addition of the secondary resistance and leakage reactance.

Practically all transformers have a turns ratio different from unity, although such an arrangement is sometimes employed for the purpose of electrically isolating one circuit from another operating at the same voltage. To explain the case where N1 ≠ N2, the reaction of the secondary will be viewed from the primary winding. The reaction is experienced only in terms of the magnetizing force due to the secondary ampere-turns. There is no way of detecting from the primary side whether I2 is large and N2 small or vice versa; it is the product of current and turns which causes the reaction. Consequently, a secondary winding can be replaced by any number of different equivalent windings and load circuits which will give rise to an identical reaction on the primary. It is clearly convenient to change the secondary winding to an equivalent winding having the same number of turns N1 as the primary.

With N2 changed to N1, since the e.m.f.s are proportional to turns, E2' = (N1/N2)E2, which is the same as E1.

For current, since the reaction ampere-turns must be unchanged, I2'N1 must be equal to I2N2, i.e. I2' = (N2/N1)I2.

For impedance, since any secondary voltage V becomes (N1/N2)V and any secondary current I becomes (N2/N1)I, then any secondary impedance, including load impedance, must become V'/I' = (N1/N2)²V/I. Consequently, R2' = (N1/N2)²R2 and X2' = (N1/N2)²X2.

If the primary turns are taken as reference turns, the process is called referring to the primary side.
There are a few checks which can be made to see if the procedure outlined is valid. For example, the copper loss in the referred secondary winding must be the same as in the original secondary, otherwise the primary would have to supply a different loss power. I2'²R2' must be equal to I2²R2, and (N2/N1)²I2² · (N1/N2)²R2 does in fact reduce to I2²R2. Similarly, the stored magnetic energy in the leakage field, (1/2)LI², which is proportional to I2²X2, will be found to check as I2'²X2'. The referred secondary kVA is E2'I2' = (N1/N2)E2 · (N2/N1)I2 = E2I2.

The argument is sound, though at first it may have seemed suspect. In fact, if the actual secondary winding were removed physically from the core and replaced by the equivalent winding and load circuit designed to give the parameters N1, R2', X2' and I2', measurements from the primary terminals would be unable to detect any difference in secondary ampere-turns, kVA demand or copper loss under normal power-frequency operation.

There is no point in choosing any basis other than equal turns on primary and referred secondary, but it is sometimes convenient to refer the primary to the secondary winding. In this case, if all the subscript 1's are interchanged for the subscript 2's, the necessary referring constants are easily found; e.g. R1' = (N2/N1)²R1 and X1' = (N2/N1)²X1.

The equivalent circuit for the general case where N1 ≠ N2 is the same as before, except that rm has been added to allow for iron loss, and an ideal lossless transformation has been included before the secondary terminals to return V2' to V2. All calculations of internal voltage and power losses are made before this ideal transformation is applied. The behaviour of a transformer as detected at both sets of terminals is the same as the behaviour detected at the corresponding terminals of this circuit when the appropriate parameters are inserted.
The slightly different representation showing the coils N1 and N2 side by side with a core in between is only used for convenience. On the transformer itself the coils are, of course, wound round the same core.

Very little error is introduced if the magnetising branch is transferred to the primary terminals, but a few anomalies will arise. For example, the current shown flowing through the primary impedance is no longer the whole of the primary current. The error is quite small, since I0 is usually such a small fraction of I1. Slightly different answers may be obtained to a particular problem depending on whether or not allowance is made for this error. With this simplified circuit, the primary and referred secondary impedances can be added to give

Re1 = R1 + (N1/N2)²R2  and  Xe1 = X1 + (N1/N2)²X2

It should be pointed out that the equivalent circuit as derived here is only valid for normal operation at power frequencies; capacitance effects must be taken into account whenever the rate of change of voltage would give rise to appreciable capacitance currents, Ic = C dV/dt. They are important at high voltages and at frequencies much beyond 100 cycles/sec. A further point is that this is not the only possible equivalent circuit, even for power frequencies. An alternative, treating the transformer as a three- or four-terminal network, gives rise to a representation which is just as accurate and has some advantages for the circuit engineer who treats all devices as circuit elements with certain transfer properties. The circuit on this basis would have a turns ratio having a phase shift as well as a magnitude change, and the impedances would not be the same as those of the windings.
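A quick numeric sketch of the referring rule and of the series combination Re1, Xe1; all component values below are invented for illustration, and only the (N1/N2)² rule itself comes from the text:

```python
# Hypothetical winding constants (illustrative assumptions).
N1, N2 = 1000, 250
R1, X1 = 0.30, 1.20     # primary resistance and leakage reactance, ohm
R2, X2 = 0.02, 0.08     # secondary resistance and leakage reactance, ohm
I2 = 40.0               # secondary current magnitude, A

a = N1 / N2
R2_ref, X2_ref = a**2 * R2, a**2 * X2   # referred secondary impedance
I2_ref = I2 / a                         # referred secondary current

Re1 = R1 + R2_ref                       # simplified-circuit resistance
Xe1 = X1 + X2_ref                       # simplified-circuit reactance

# Referral leaves the secondary copper loss unchanged: I2'^2 R2' == I2^2 R2.
loss_original = I2**2 * R2
loss_referred = I2_ref**2 * R2_ref
```

The copper-loss check mirrors the validity argument given in the text: the referred winding is indistinguishable from the original when viewed from the primary terminals.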
The circuit would not explain the phenomena within the device, such as the effects of saturation, so the winding-based circuit derived here is preferred for an understanding of internal behaviour.

There are two ways of looking at the equivalent circuit: (a) viewed from the primary as a sink, with the referred load impedance connected across V2'; or (b) viewed from the secondary as a source of constant voltage V1 with internal drops due to Re1 and Xe1. The magnetizing branch is sometimes omitted in this representation, and so the circuit reduces to a generator producing a constant voltage E1 (actually equal to V1) and having an internal impedance R + jX (actually equal to Re1 + jXe1).

In either case the parameters could be referred to the secondary winding, and this may save calculation time. The resistances and reactances can be obtained from two simple light-load tests.

Introduction to DC Machines

DC machines are characterized by their versatility. By means of various combinations of shunt, series, and separately excited field windings, they can be designed to display a wide variety of volt-ampere or speed-torque characteristics for both dynamic and steady-state operation. Because of the ease with which they can be controlled, systems of DC machines are often used in applications requiring a wide range of motor speeds or precise control of motor output.

The essential features of a DC machine are shown schematically. The stator has salient poles and is excited by one or more field coils. The air-gap flux distribution created by the field winding is symmetrical about the centerline of the field poles. This axis is called the field axis or direct axis.

As we know, the AC voltage generated in each rotating armature coil is converted to DC at the external armature terminals by means of a rotating commutator and stationary brushes to which the armature leads are connected. The commutator-brush combination forms a mechanical rectifier, resulting in a DC armature voltage as well as an armature m.m.f. wave which is fixed in space.
The brushes are located so that commutation occurs when the coil sides are in the neutral zone, midway between the field poles. The axis of the armature m.m.f. wave is then 90 electrical degrees from the axis of the field poles, i.e., in the quadrature axis. In the schematic representation the brushes are shown in the quadrature axis because this is the position of the coils to which they are connected. The armature m.m.f. wave then lies along the brush axis as shown. (The geometrical position of the brushes in an actual machine is approximately 90 electrical degrees from their position in the schematic diagram because of the shape of the end connections to the commutator.)

The magnetic torque and the speed voltage appearing at the brushes are independent of the spatial waveform of the flux distribution; for convenience we shall continue to assume a sinusoidal flux-density wave in the air gap. The torque can then be found from the magnetic-field viewpoint. The torque can be expressed in terms of the interaction of the direct-axis air-gap flux per pole Φd and the space-fundamental component Fa1 of the armature m.m.f. wave. With the brushes in the quadrature axis, the angle between these fields is 90 electrical degrees, and its sine equals unity. For a P-pole machine

T = (π/2)(P/2)² Φd Fa1

in which the minus sign has been dropped because the positive direction of the torque can be determined from physical reasoning. The space fundamental Fa1 of the sawtooth armature m.m.f. wave is 8/π² times its peak. Substitution in the above equation then gives

T = (P·Ca/2πm) Φd ia = Ka Φd ia

where ia is the current in the external armature circuit, Ca is the total number of conductors in the armature winding, m is the number of parallel paths through the winding, and

Ka = P·Ca/(2πm)

is a constant fixed by the design of the winding.

The rectified voltage generated in the armature has already been discussed for an elementary single-coil armature.
The effect of distributing the winding in several slots is shown in the figure, in which each of the rectified sine waves is the voltage generated in one of the coils, commutation taking place at the moment when the coil sides are in the neutral zone. The generated voltage as observed from the brushes is the sum of the rectified voltages of all the coils in series between brushes and is shown by the rippling line labeled ea in the figure. With a dozen or so commutator segments per pole the ripple becomes very small, and the average generated voltage observed from the brushes equals the sum of the average values of the rectified coil voltages. The rectified voltage ea between brushes, known also as the speed voltage, is

ea = (P·Ca/2πm) Φd ωm = Ka Φd ωm

where Ka is the same design constant. The rectified voltage of a distributed winding has the same average value as that of a concentrated coil; the difference is that the ripple is greatly reduced.

From the above equations, with all variables expressed in SI units,

ea ia = T ωm

This equation simply says that the instantaneous electric power associated with the speed voltage equals the instantaneous mechanical power associated with the magnetic torque, the direction of power flow being determined by whether the machine is acting as a motor or generator.

The direct-axis air-gap flux is produced by the combined m.m.f. ΣNf if of the field windings, the flux-m.m.f. characteristic being the magnetization curve for the particular iron geometry of the machine. In the magnetization curve it is assumed that the armature m.m.f. wave is perpendicular to the field axis. It will be necessary to reexamine this assumption later in this chapter, where the effects of saturation are investigated more thoroughly. Because the armature e.m.f. is proportional to flux times speed, it is usually more convenient to express the magnetization curve in terms of the armature e.m.f. ea0 at a constant speed ωm0.
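The torque and speed-voltage relations above, and the power balance ea·ia = T·ωm, can be verified with a small sketch; the machine numbers are illustrative assumptions, not values from the text:

```python
import math

# Hypothetical machine constants (illustrative assumptions).
P = 4          # poles
Ca = 100       # total armature conductors
m = 2          # parallel paths through the winding
phi_d = 0.02   # direct-axis flux per pole, Wb
ia = 50.0      # armature current, A
wm = 150.0     # mechanical speed, rad/s

Ka = P * Ca / (2 * math.pi * m)   # winding constant, Ka = P*Ca/(2*pi*m)
T = Ka * phi_d * ia               # electromagnetic torque, N*m
ea = Ka * phi_d * wm              # speed voltage at the brushes, V

# Electric power at the speed voltage equals mechanical power at the torque.
p_elec = ea * ia
p_mech = T * wm
```

The balance holds identically because both sides reduce to Ka·Φd·ia·ωm; the numbers merely make that concrete.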
The voltage ea for a given flux at any other speed ωm is proportional to the speed, i.e.

ea = (ωm/ωm0) ea0

The figure shows the magnetization curve with only one field winding excited. This curve can easily be obtained by test methods, no knowledge of any design details being required.

Over a fairly wide range of excitation the reluctance of the iron is negligible compared with that of the air gap. In this region the flux is linearly proportional to the total m.m.f. of the field windings, the constant of proportionality being the direct-axis air-gap permeance.

The outstanding advantages of DC machines arise from the wide variety of operating characteristics which can be obtained by selection of the method of excitation of the field windings. The field windings may be separately excited from an external DC source, or they may be self-excited; i.e., the machine may supply its own excitation. The method of excitation profoundly influences not only the steady-state characteristics, but also the dynamic behavior of the machine in control systems.

The connection diagram of a separately excited generator is given. The required field current is a very small fraction of the rated armature current. A small amount of power in the field circuit may control a relatively large amount of power in the armature circuit; i.e., the generator is a power amplifier. Separately excited generators are often used in feedback control systems when control of the armature voltage over a wide range is required. The field windings of self-excited generators may be supplied in three different ways. The field may be connected in series with the armature, resulting in a series generator; it may be connected in shunt with the armature, resulting in a shunt generator; or the field may be in two sections, one of which is connected in series and the other in shunt with the armature, resulting in a compound generator.
With self-excited generators, residual magnetism must be present in the machine iron to get the self-excitation process started.

In the typical steady-state volt-ampere characteristics, constant-speed prime movers are assumed. The relation between the steady-state generated e.m.f. Ea and the terminal voltage Vt is

Vt = Ea - IaRa

where Ia is the armature current output and Ra is the armature-circuit resistance. In a generator Ea is larger than Vt, and the electromagnetic torque T is a countertorque opposing rotation.

The terminal voltage of a separately excited generator decreases slightly with increase in the load current, principally because of the voltage drop in the armature resistance. The field current of a series generator is the same as the load current, so that the air-gap flux, and hence the voltage, vary widely with load. As a consequence, series generators are not often used. The voltage of shunt generators drops off somewhat with load. Compound generators are normally connected so that the m.m.f. of the series winding aids that of the shunt winding. The advantage is that through the action of the series winding the flux per pole can increase with load, resulting in a voltage output which is nearly constant. Usually the series winding contains relatively few turns of comparatively heavy conductor, because it must carry the full armature current of the machine. The voltage of both shunt and compound generators can be controlled over reasonable limits by means of rheostats in the shunt field.

Any of the methods of excitation used for generators can also be used for motors. In the typical steady-state speed-torque characteristics, it is assumed that the motor terminals are supplied from a constant-voltage source. In a motor the relation between the e.m.f. Ea generated in the armature and the terminal voltage Vt is

Vt = Ea + IaRa

where Ia is now the armature current input. The generated e.m.f.
Ea is now smaller than the terminal voltage Vt, the armature current is in the opposite direction to that in a generator, and the electromagnetic torque is in the direction to sustain rotation of the armature.

In shunt and separately excited motors the field flux is nearly constant. Consequently, increased torque must be accompanied by a very nearly proportional increase in armature current, and hence by a small decrease in counter e.m.f. to allow this increased current through the small armature resistance. Since counter e.m.f. is determined by flux and speed, the speed must drop slightly. Like the squirrel-cage induction motor, the shunt motor is substantially a constant-speed motor, having about a 5 percent drop in speed from no load to full load. Starting torque and maximum torque are limited by the armature current that can be commutated successfully.

An outstanding advantage of the shunt motor is ease of speed control. With a rheostat in the shunt-field circuit, the field current and flux per pole can be varied at will, and variation of flux causes the inverse variation of speed to maintain the counter e.m.f. approximately equal to the impressed terminal voltage. A maximum speed range of about 4 or 5 to 1 can be obtained by this method, the limitation again being commutating conditions. By variation of the impressed armature voltage, very wide speed ranges can be obtained.

In the series motor, increase in load is accompanied by increases in the armature current and m.m.f. and in the stator field flux (provided the iron is not completely saturated). Because flux increases with load, speed must drop in order to maintain the balance between impressed voltage and counter e.m.f.; moreover, the increase in armature current caused by increased torque is smaller than in the shunt motor because of the increased flux. The series motor is therefore a varying-speed motor with a markedly drooping speed-load characteristic.
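A minimal sketch of the shunt-motor speed droop just described, assuming Ka·Φ is held constant; the terminal voltage, armature resistance and full-load current below are invented for illustration:

```python
# Hypothetical shunt-motor constants (illustrative assumptions).
VT = 230.0   # terminal voltage, V
RA = 0.25    # armature-circuit resistance, ohm
KPHI = 1.1   # Ka*phi, nearly constant for a shunt motor, V*s/rad

def speed(ia):
    """Steady-state speed (rad/s) from Vt = Ea + Ia*Ra with Ea = Ka*phi*wm."""
    return (VT - ia * RA) / KPHI

w_noload = speed(0.0)
w_full = speed(40.0)                      # assumed full-load current, A
droop = (w_noload - w_full) / w_noload    # fractional speed drop
```

With these numbers the droop comes out near the "about 5 percent" figure quoted in the text; a smaller Ra or Ia would flatten the characteristic further.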
For applications requiring heavy torque overloads, this characteristic is particularly advantageous, because the corresponding power overloads are held to more reasonable values by the associated speed drops. Very favorable starting characteristics also result from the increase in flux with increased armature current.

In the compound motor the series field may be connected either cumulatively, so that its m.m.f. adds to that of the shunt field, or differentially, so that it opposes. The differential connection is very rarely used. A cumulatively compounded motor has a speed-load characteristic intermediate between those of a shunt and a series motor, the drop of speed with load depending on the relative number of ampere-turns in the shunt and series fields. It does not have the disadvantage of the very high light-load speed associated with a series motor, but it retains to a considerable degree the advantages of series excitation.

The application advantages of DC machines lie in the variety of performance characteristics offered by the possibilities of shunt, series, and compound excitation. Some of these characteristics have been touched upon briefly in this article. Still greater possibilities exist if additional sets of brushes are added so that other voltages can be obtained from the commutator. Thus the versatility of DC machine systems and their adaptability to control, both manual and automatic, are their outstanding features.

Graduation Project (Thesis) Foreign-Language Translation [Template]


Foreign-Language Translation for Undergraduate Graduation Project (Thesis), Guangzhou College of South China University of Technology
English title: Review of Vibration Analysis Methods for Gearbox Diagnostics and Prognostics
Chinese title: 对变速箱振动分析的诊断和预测方法综述
School: School of Automotive Engineering. Class: Vehicle Engineering, Class 7. Student: 刘嘉先. Student ID: 201130085184. Supervisor: 李利平. Date: 15 March 2015.
Source of the original: Proceedings of the 54th Meeting of the Society for Machinery Failure Prevention Technology, Virginia Beach, VA, May 1-4, 2000, p. 623-634.
Translation grade: (blank). Signature of supervisor (group leader): (blank).

Translation:

Introduction

Feature extraction techniques are described in the literature; however, most papers seem to gloss over the specific preprocessing that each feature requires. Some papers do not provide enough detail to reproduce their results, and there is no comprehensive comparison of the traditional features on transitional gearbox data. Common terms, such as "residual signal", refer to different techniques in different papers. This paper attempts to define the terms commonly used in the condition-based maintenance community and to establish the specific preprocessing required for each feature.

The focus of this paper is on the features used for gear fault detection. The features are divided into five different groups according to the preprocessing they require. The first part of the paper gives an overview of the preprocessing flow and the processing scheme in which each feature is computed. The next section, on feature extraction techniques, discusses each feature in more detail. The final section gives a brief overview of the Penn State University / Army Research Laboratory CBM Toolbox as used for gear fault diagnosis.

Overview of Feature Extraction

Many types of defects or damage increase the vibration level of a machine. These vibration levels are then converted into electrical signals by accelerometers for data measurement. In principle, information about the health of the monitored machine is contained in this vibration signature. A new or current vibration signature can therefore be compared with previous signatures to determine whether the component is behaving normally or exhibiting signs of failure. In practice, such direct comparison does not work well: because of large variations, direct comparison of signatures is difficult. Instead, a more useful technique can be used that involves extracting features from the vibration signature data.
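As a toy sketch of this idea (mine, not the paper's): two widely used signature features, RMS level and kurtosis, both rise when periodic impulses, such as those a damaged tooth might produce, are superimposed on a clean gear-mesh tone. The signal model below is an assumption made purely for illustration:

```python
import math

def rms(x):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    """Fourth standardized moment; 1.5 for a pure sine, higher for impulses."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (m2 * m2)

N = 1024
# "Healthy" signature: a clean gear-mesh tone (8 cycles over the record).
healthy = [math.sin(2 * math.pi * 8 * i / N) for i in range(N)]
# "Faulty" signature: the same tone plus one impulse every 128 samples.
faulty = [v + (3.0 if i % 128 == 0 else 0.0) for i, v in enumerate(healthy)]

rms_h, rms_f = rms(healthy), rms(faulty)
kurt_h, kurt_f = kurtosis(healthy), kurtosis(faulty)
# Both features rise for the impulsive (faulty-like) signature.
```

Comparing a handful of such scalar features, rather than the raw signatures, is what makes trending across measurements tractable.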

Foreign-Literature Translation for Automotive Electronics Graduation Project (suitable for thesis foreign-language translation, English-Chinese)


Ultrasonic Ranging System Design
Publication title: Sensor Review. Bradford: 1993. Vol.

ABSTRACT: Ultrasonic ranging technology has wide practical value in many fields, such as industrial sites, vehicle navigation and sonar engineering. It is now used in level measurement, self-guided autonomous vehicles, field robots, automotive navigation, air and underwater target detection, identification, location, and so on. It is therefore of real practical importance to study ranging theory and methods in depth. To improve the precision of existing ultrasonic ranging systems and to satisfy engineering requirements on precision, range and usability, a portable ultrasonic ranging system based on a single-chip processor was developed.

Keywords: Ultrasound, Ranging System, Single-Chip Processor

1. Introduction

With the development of science and technology and the improvement of living standards, urban development and construction have accelerated, and urban drainage systems have grown greatly. However, for historical reasons and because of many unpredictable factors, the drainage system in particular often lags behind urban construction, so it is common for completed buildings to be excavated again in order to upgrade the drainage facilities. Keeping the city's drainage culverts clear of sewage is essential to the sewage treatment system, and this comfort matters greatly to people's lives. A mobile robot was therefore designed to clear drainage culverts, with an automatic control system at its core; the core component of that control system is the ultrasonic range finder developed here. It is thus very important to design a good ultrasonic range finder.
2. The principle of ultrasonic distance measurement

The application of the AT89C51: a single-chip microcomputer (SCM) integrates the major components of a computer into one chip. It is a microcontroller that integrates multiple interfaces and counters on a single chip, and such intelligent products are widely used in industrial automation; the MCS-51 microcontroller family is typical and representative.

Microcontrollers are used in a multitude of commercial applications such as modems, motor-control systems, air-conditioner control systems and automotive engines, among others. The high processing speed and enhanced peripheral set of these microcontrollers make them suitable for such high-speed, event-based applications. However, these critical application domains also require that the microcontrollers be highly reliable. High reliability and low market risk can be ensured by a robust testing process and a proper tools environment for the validation of these microcontrollers at both the component and the system level. Intel's Platform Engineering department developed an object-oriented, multi-threaded test environment for the validation of its AT89C51 automotive microcontrollers. The goal of this environment was not only to provide a robust testing environment for the AT89C51 automotive microcontrollers, but to develop an environment which can be easily extended and reused for the validation of several other future microcontrollers.
The environment was developed in conjunction with Microsoft Foundation Classes (MFC).

1.1 Features
* Compatible with MCS-51 products
* 2 Kbytes of reprogrammable Flash memory; endurance: 1,000 write/erase cycles
* 2.7 V to 6 V operating range
* Fully static operation: 0 Hz to 24 MHz
* Two-level program memory lock
* 128 x 8-bit internal RAM
* 15 programmable I/O lines
* Two 16-bit timer/counters
* Six interrupt sources
* Programmable serial UART channel
* Direct LED drive output
* On-chip analog comparator
* Low-power idle and power-down modes

1.2 Description

The AT89C2051 is a low-voltage, high-performance CMOS 8-bit microcomputer with 2 Kbytes of flash programmable and erasable read-only memory (PEROM). The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard MCS-51 instruction set and pinout. By combining a versatile 8-bit CPU with flash on a monolithic chip, the Atmel AT89C2051 is a powerful microcomputer which provides a highly flexible and cost-effective solution to many embedded control applications.

The AT89C2051 provides the following standard features: 2 Kbytes of flash, 128 bytes of RAM, 15 I/O lines, two 16-bit timer/counters, a five-vector two-level interrupt architecture, a full-duplex serial port, a precision analog comparator, and on-chip oscillator and clock circuitry. In addition, the AT89C2051 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The idle mode stops the CPU while allowing the RAM, timer/counters, serial port and interrupt system to continue functioning. The power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next hardware reset.

1.3 Pin Configuration

1.4 Pin Description

VCC: supply voltage. GND: ground.

Port 1: Port 1 is an 8-bit bidirectional I/O port. Port pins P1.2 to P1.7 provide internal pullups. P1.0 and P1.1 require external pullups.
P1.0 and P1.1 also serve as the positive input (AIN0) and the negative input (AIN1), respectively, of the on-chip precision analog comparator. The port 1 output buffers can sink 20 mA and can drive LED displays directly. When 1s are written to port 1 pins, they can be used as inputs. When pins P1.2 to P1.7 are used as inputs and are externally pulled low, they will source current (IIL) because of the internal pullups.

Port 3: Port 3 pins P3.0 to P3.5 and P3.7 are seven bidirectional I/O pins with internal pullups. P3.6 is hard-wired as an input to the output of the on-chip comparator and is not accessible as a general-purpose I/O pin. The port 3 output buffers can sink 20 mA. When 1s are written to port 3 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, port 3 pins that are externally being pulled low will source current (IIL) because of the pullups. Port 3 also serves the functions of various special features of the AT89C2051, as listed below.

1.5 Programming the Flash

The AT89C2051 is shipped with the 2 Kbytes of on-chip PEROM code memory array in the erased state (i.e., contents = FFH) and ready to be programmed. The code memory array is programmed one byte at a time. Once the array is programmed, to re-program any non-blank byte the entire memory array needs to be erased electrically.

Internal address counter: the AT89C2051 contains an internal PEROM address counter which is always reset to 000H on the rising edge of RST and is advanced by applying a positive-going pulse to pin XTAL1.

Programming algorithm: to program the AT89C2051, the following sequence is recommended.
1. Power-up sequence: apply power between the VCC and GND pins; set RST and XTAL1 to GND; with all other pins floating, wait for more than 10 milliseconds.
2. Set pin RST to 'H'; set pin P3.2 to 'H'.
3. Apply the appropriate combination of 'H' or 'L' logic to pins P3.3, P3.4, P3.5 and P3.7 to select one of the programming operations shown in the PEROM programming modes table.

To program and verify the array:
4. Apply the data for the code byte at location 000H to P1.0 to P1.7.
5. Raise RST to 12 V to enable programming.
6. Pulse P3.2 once to program a byte in the PEROM array or the lock bits. The byte-write cycle is self-timed and typically takes 1.2 ms.
7. To verify the programmed data, lower RST from 12 V to logic 'H' level and set pins P3.3 to P3.7 to the appropriate levels. Output data can be read at the port P1 pins.
8. To program a byte at the next address location, pulse the XTAL1 pin once to advance the internal address counter, then apply the new data to the port P1 pins.
9. Repeat steps 4 through 8, changing the data and advancing the address counter, for the entire 2 Kbyte array or until the end of the object file is reached.
10. Power-off sequence: set XTAL1 to 'L'; set RST to 'L'; float all other I/O pins; turn the VCC power off.

2.1 The principle of the piezoelectric ultrasonic generator

A piezoelectric ultrasonic generator works by means of piezoelectric crystal resonators. Its internal structure, as shown in the figure, comprises two piezoelectric chips and a resonance plate. When a pulse signal whose frequency equals the intrinsic oscillation frequency of the piezoelectric chips is applied across them, the chips resonate and drive the resonance plate into vibration, generating ultrasound. Conversely, incoming ultrasonic vibration compresses the piezoelectric chips, converting the mechanical energy into an electrical signal; the device then becomes an ultrasonic receiver.

The traditional way to determine the moment of the echo's arrival is based on thresholding the received signal with a fixed reference. The threshold is chosen well above the noise level, and the moment of arrival of an echo is defined as the first moment the echo signal surpasses that threshold.
The intensity of an echo reflecting from an object strongly depends on the object's nature, size and distance from the sensor. Further, the time interval from the echo's starting point to the moment when it surpasses the threshold changes with the intensity of the echo. As a consequence, a considerable error may occur: even two echoes of different intensities arriving at exactly the same time will surpass the threshold at different moments. The stronger one will surpass the threshold earlier than the weaker, so it will be taken as belonging to a nearer object.

2.2 The principle of ultrasonic distance measurement

The ultrasonic transmitter launches ultrasound in a given direction, and timing begins at the moment of launch. The ultrasound propagates through the air and returns immediately upon striking an obstacle in its path; the timer stops the moment the receiver picks up the reflected wave. Since the propagation velocity of ultrasound in air is about 340 m/s, from the time t recorded by the timer we can calculate the distance s between the launch point and the obstacle: s = 340t/2.
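The relation s = 340t/2 is easily wrapped for reuse (340 m/s is the text's approximate figure for sound in air):

```python
V_SOUND = 340.0  # propagation velocity of ultrasound in air, m/s (approx.)

def distance_m(echo_time_s):
    """Distance to the obstacle from the measured round-trip echo time."""
    return V_SOUND * echo_time_s / 2.0
```

For example, a 10 ms round trip corresponds to 1.7 m. In a more careful design the 340 m/s constant would be corrected for air temperature.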
3. Circuit design of the ultrasonic ranging system

The system is characterized by a single-chip microcomputer that controls the ultrasonic transmitter and receiver so that they transmit and receive periodically. The chosen single-chip processor is the 8751, which is economical and easy to use and contains 4K of ROM, convenient for programming.

3.1 Generation and transmission of the 40 kHz ultrasonic pulse

The ranging system uses the piezoelectric ceramic ultrasonic sensor UCM40, whose operating pulse signal is at 40 kHz; this signal is generated by the single-chip processor executing the following routine:

puzel: mov 14h, #12h    ; ultrasonic burst continues for about 200 ms
here:  cpl p1.0         ; output 40 kHz square wave
       nop
       nop
       nop
       djnz 14h, here
       ret

The forward-facing ranging circuit is connected to port P1.0 of the single-chip processor. When the processor executes the routine above, a 40 kHz pulse signal appears at port P1.0; after amplification by transistor T, it drives the ultrasonic transmitter UCM40T, which emits 40 kHz ultrasonic pulses for the 200 ms burst. The right-hand and left-hand ranging circuits are connected to ports P1.1 and P1.2 respectively, and their working principle is the same as that of the forward-facing circuit.

3.2 Reception and processing of the ultrasound

The receiver UCM40R, paired with the transmitter, converts the ultrasonic pulse-modulated signal into an alternating voltage, which is amplified by op-amp IC1A and then by IC1B before being fed to IC2. IC2 is the phase-locked-loop audio decoder chip LM567, whose internal voltage-controlled oscillator has a center frequency of f0 = 1/(1.1·R8·C3); capacitor C4 determines its bandwidth.
R8 is adjusted so that the center frequency tracks the transmitted frequency; when the echo is detected, pin 8 of the LM567 jumps from high level to low level, which serves as the interrupt request signal to the microcontroller.

The front ranging circuit is connected to the INT0 interrupt input of the microcontroller, which has the highest priority. The outputs of the right and left ranging circuits pass through OR gate IC3A to the INT1 input, while P1.3 and P1.4 also receive the IC3A inputs so that the interrupt service routine can identify the interrupt source by polling, with the left circuit taking priority over the right. Part of the source code is as follows:

Receive1: push psw
push acc
clr ex1 ; disable external interrupt 1
jnb p1.1, right ; P1.1 pin is 0: go to the right ranging circuit's interrupt service routine
jnb p1.2, left ; P1.2 pin is 0: go to the left ranging circuit's interrupt service routine
return: setb ex1 ; enable external interrupt 1
pop acc
pop psw
reti
right: ... ; entry of the right ranging circuit's interrupt service routine
ajmp return
left: ... ; entry of the left ranging circuit's interrupt service routine
ajmp return

3.3 Calculation of the ultrasonic propagation time

At the same moment the transmission starts, the microcontroller starts its internal timer T0 and uses the timer's counting function to record the interval between launching the ultrasonic pulse and receiving the reflected wave. When the reflected wave is received, the receiving circuit outputs a negative jump at INT0 or INT1, generating an interrupt request; the microcontroller responds to the external interrupt request, executes the external interrupt service subroutine, reads the time difference, and calculates the distance.
Some of its source code is as follows:

RECEIVE0: PUSH PSW
PUSH ACC
CLR EX0 ; disable external interrupt 0
MOV R7, TH0 ; read the timer value
MOV R6, TL0
CLR C
MOV A, R6
SUBB A, #0BBH ; calculate the time difference
MOV 31H, A ; store the result
MOV A, R7
SUBB A, #3CH
MOV 30H, A
SETB EX0 ; enable external interrupt 0
POP ACC
POP PSW
RETI

For a flat target, a distance measurement consists of two phases, a coarse measurement and a fine measurement:

Step 1: Transmit one pulse train to produce a simple ultrasonic wave.
Step 2: Change the gain of both echo amplifiers according to the equation, until the echo is detected.
Step 3: Detect the amplitudes and zero-crossing times of both echoes.
Step 4: Set the gains of both echo amplifiers to normalize the output at, say, 3 volts. Set the period of the next pulses according to the period of the echoes. Set the time window according to the data of step 2.
Step 5: Send two pulse trains to produce an interfered wave. Test the zero-crossing times and amplitudes of the echoes. If phase inversion occurs in the echo, determine the time directly; otherwise calculate it by interpolation using the amplitudes near the trough. Derive t_m1 and t_m2.
Step 6: Calculate the distance y using the equation.

4. The ultrasonic ranging system software design

The software is divided into two parts, the main program and the interrupt service routines. The main program completes initialization and controls the transmitting and receiving sequence of each ultrasonic channel. A timer interrupt service routine launches the ultrasound in the three directions in rotation from time to time, while the external interrupt service subroutines read the timer values, calculate the distances and output the results.

5. Conclusions

A number of measurements of flat objects over the required range of 30 cm to 200 cm showed that the maximum error is 0.5 cm, with good reproducibility.
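The arithmetic performed by the RECEIVE0 routine (combine the two timer bytes, subtract a fixed constant, convert the remaining round-trip time to distance) can be sketched as follows. This is a sketch under assumptions: one timer tick per microsecond (a 12 MHz 8051) and the subtracted constant 0x3CBB taken as a calibration offset, neither of which the text states explicitly.

```python
TICK_SECONDS = 1e-6  # assumed: one T0 tick per microsecond (12 MHz clock)
OFFSET = 0x3CBB      # the constant subtracted in the listing (3CH:0BBH);
                     # treated here as an assumed calibration offset

def timer_to_distance_cm(th0, tl0):
    # Combine the timer's high and low bytes into a 16-bit count,
    # remove the fixed offset, then convert the remaining round-trip
    # ticks to a one-way distance in centimetres (sound at 34000 cm/s... per second).
    ticks = ((th0 << 8) | tl0) - OFFSET
    round_trip_s = ticks * TICK_SECONDS
    return 34000.0 * round_trip_s / 2.0
```

Under these assumptions, a raw timer reading equal to the offset gives zero distance, and each additional 256 ticks (one increment of TH0) adds about 4.35 cm.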
It can be seen that the single-chip ultrasonic ranging system has a simple and reliable hardware structure, small size and small measurement error. Therefore, it can be used not only on mobile robots but also in other detection systems.

Thoughts: the receiver does not need a transistor amplifier circuit because the integrated amplifier already provides sufficient gain and has automatic gain control; its gain reaches 76 dB and its center frequency is 38 to 40 kHz, which is exactly the resonant frequency of the ultrasonic sensors.

6. Parking sensor

6.1 Parking sensor introduction

The reversing radar, in full the "reversing anti-collision radar" and also known as the parking assist device, is a safety aid for parking or reversing a car. It consists of ultrasonic sensors (commonly known as probes), a controller and a display (or buzzer). It informs the driver of surrounding obstacles by sound or by a more intuitive display, relieving the distress caused by having to look all around while parking, reversing or starting the vehicle, helping the driver overcome blind spots and blurred vision, and improving driving safety.

6.2 Reversing radar detection principle

The reversing radar is designed and developed on the same principle by which bats fly at high speed at night without colliding with obstacles. The probes are mounted on the rear bumper; depending on price and brand, there may be two, three, four, six or eight probes distributed around the car. The probes radiate ultrasound and search for targets over an angle of about 45 degrees up and down. Their greatest advantage is detecting obstacles lower than the bumper that the driver can hardly see from the rear window, and warning of hazards such as flower beds or children squatting and playing behind the car. The display of the parking sensor is installed at the rear-view mirror; it constantly reminds the driver of the distance to objects behind the car, and when that distance shrinks to a dangerous value the buzzer starts sounding to tell the driver to stop.
When the gear lever is shifted into reverse, the reversing radar starts working automatically, with a working range of 0.3 to 2.0 meters, which is very practical for the driver when parking. The reversing radar is essentially an ultrasonic probe. Ultrasonic probes fall into two categories: those that generate ultrasound electrically and those that generate it mechanically. The more commonly used piezoelectric ultrasonic generator has two piezoelectric chips and a sounding board; when a pulse signal whose frequency equals the intrinsic oscillation frequency of the piezoelectric chips is applied to its poles, the chips resonate and drive the sounding board to vibrate, converting electrical energy into mechanical energy (and, on reception, mechanical vibration back into an electrical signal). This is how the ultrasonic probe works. In order to study and use ultrasound better, people have designed and manufactured ultrasonic generators, and such ultrasonic probes are used in car parking sensors. Non-contact distance detection based on this principle is simple, convenient and fast, is easy to use for real-time control, and achieves a distance accuracy that meets practical industrial requirements. For ranging, the parking sensor sends out an ultrasonic signal at a given moment; the signal wave bounces back when it meets the measured object, and the receiver, using the time from transmission to reception of the echo together with the propagation velocity in the medium, calculates the distance between the probe and the detected object.
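The ranging step and the distance-dependent alarm described above can be sketched together. The 0.3 to 2.0 m working range is from the text; the intermediate alarm thresholds and tone names are illustrative assumptions, not values from the source.

```python
def echo_distance_m(round_trip_s, speed_m_s=340.0):
    # One-way distance from the echo's round-trip time.
    return speed_m_s * round_trip_s / 2.0

def alarm_level(distance_m):
    # Hypothetical alarm bands for a 0.3-2.0 m reversing radar;
    # only the overall working range comes from the text.
    if distance_m < 0.3:
        return "continuous tone"  # below the minimum working range
    if distance_m < 0.8:
        return "fast beep"
    if distance_m < 1.5:
        return "slow beep"
    if distance_m <= 2.0:
        return "display only"
    return "out of range"
```

A real unit would map distance bands to beep rates in roughly this fashion, with the thresholds tuned per model.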
(3) The detection distance range.

6.4 The role of each part

The reversing radar comprises: (1) the ultrasonic sensors, used to transmit and receive the ultrasonic signals with which distance is measured; (2) the host, which drives the ultrasonic sensors with sine-wave pulses, processes the received signals, calculates the distance value and communicates the data to the monitor; (3) the display or buzzer, which receives the data from the host, displays the distance value and gives alarm sounds of different urgency according to the distance.

6.5 Cautions

1. Installation height above the ground: generally 45 to 55 cm for the front of the car and 50 to 65 cm for the rear. 2. Clean the probes regularly to keep them from being covered with dirt. 3. Do not cover the probe surface with hard objects, and do not let the probe surface be covered by mud or the like, or false alarms or failure to range will result. 4. Avoid icing in winter. 5. On 6/8-probe reversing radars, the front and rear probes must not be swapped freely, or continuous beeping and false alarms may result. 6. Note the probe mounting orientation: install with the UP mark upward. 7. It is not recommended to mount the probes on sheet metal; vibration of the sheet metal will cause the probes to resonate and produce false alarms.

Ultrasonic Ranging System Design. Source: 传感器文摘 (Sensor Abstracts), Bradford, 1993. Ultrasonic ranging technology has wide application value in industrial sites, vehicle navigation, underwater acoustic engineering and other fields, and is currently applied to level measurement, automatic robot navigation, and the detection, recognition and positioning of targets in air and under water.

(Complete Version) Electrical Engineering Graduation Thesis with Chinese-English Parallel Translation


Chapter 3 Digital Electronics

3.1 Introduction

A circuit that employs a numerical signal in its operation is classified as a digital circuit. Computers, pocket calculators, digital instruments, and numerical control (NC) equipment are common applications of digital circuits. Practically unlimited quantities of digital information can be processed in short periods of time electronically. With operational speed of prime importance in electronics today, digital circuits are used more and more frequently. In this chapter, digital circuit applications are discussed. There are many types of digital circuits used in electronics, including logic circuits, flip-flop circuits, counting circuits, and many others. The first sections of this chapter discuss the number systems that are basic to an understanding of digital circuits. The remainder of the chapter introduces some of the types of digital circuits and explains Boolean algebra as it is applied to logic circuits.

3.2 Digital Number Systems

The most common number system used today is the decimal system, in which 10 digits are used for counting. The number of digits in a system is called its base (or radix); the decimal system, therefore, has a base of 10. The largest digit that can be used in a specific place or location is determined by the base of the system. In the decimal system the first position to the left of the decimal point is called the units place.
Any digit from 0 to 9 can be used in this place. When number values greater than 9 are used, they must be expressed with two or more places. The next position to the left of the units place in a decimal system is the tens place. The number 99 is the largest value that can be expressed by two places in the decimal system. Each place added to the left extends the number system by a power of 10. Any number can be expressed as a sum of weighted place values. The decimal number 2583, for example, is expressed as (2×1000)+(5×100)+(8×10)+(3×1).

The decimal number system is commonly used in our daily lives. Electronically, however, it is far easier to work with the binary system. The value 0 can be associated with a low voltage or no voltage, and the number 1 with a voltage value larger than 0. Binary systems that use these voltage values are said to employ positive logic, which is assumed throughout this chapter. The two operational states of a binary system, 1 and 0, are natural circuit conditions. When a circuit is turned off or has no voltage applied, it is in the off, or 0, state. An electrical circuit that has voltage applied is in the on, or 1, state. By using transistors or ICs, it is electronically possible to change states in less than a microsecond. Electronic devices make it possible to manipulate millions of 0s and 1s in a second and thus to process information quickly.

The basic principles of numbering used in decimal numbers apply in general to binary numbers. The base of the binary system is 2, meaning that only the digits 0 and 1 are used to express place value. The first place to the left of the binary point, or starting point, represents the units, or 1s, location.
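The weighted-place-value idea above (2583 = 2×1000 + 5×100 + 8×10 + 3×1) works in any base; a minimal sketch, with a function name chosen for illustration:

```python
def place_value_sum(digits, base=10):
    # Sum of weighted place values: each digit is multiplied by
    # base**position, counting positions from the right starting at 0.
    return sum(d * base ** i for i, d in enumerate(reversed(digits)))
```

With base 10 this reproduces the decimal example; with base 2 the same function evaluates binary digit strings.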
Places to the left of the binary point are the powers of 2. Some of the place values in base 2 are 2⁰=1, 2¹=2, 2²=4, 2³=8, 2⁴=16, 2⁵=32, and 2⁶=64. When bases other than 10 are used, the numbers should be written with a subscript indicating the base. For example, the number 100₂ (read "one, zero, zero, base 2") is equivalent to 4 in base 10, or 4₁₀: starting with the first digit to the left of the binary point, this number has no units, no 2s, and one 4. To use this method of converting a binary number to an equivalent decimal number, write down the binary number first. Starting at the binary point, indicate the decimal equivalent for each binary place location where a 1 is indicated. For each 0 in the binary number leave a blank space or indicate a 0. Add the place values and then record the decimal equivalent.

The conversion of a decimal number to a binary equivalent is achieved by repeated division by the number 2. When the quotient is even with no remainder, a 0 is recorded; when the quotient has a remainder, a 1 is recorded. The division process continues until the quotient is 0. The binary equivalent consists of the remainder values read in order from last to first.

3.2.2 Binary-Coded Decimal (BCD) Number System

When large numbers are indicated by binary numbers, they are difficult to use. For this reason, the binary-coded decimal (BCD) method of counting was devised. In this system four binary digits are used to represent each decimal digit. To illustrate this procedure, the number 105₁₀ is converted to a BCD number. To apply the BCD conversion process, the base 10 number is first divided into digits according to place values: the number 105₁₀ gives the digits 1-0-5. Converting each digit into a four-bit group displays this number with only 12 binary digits. The space between each group of digits is important when displaying BCD numbers. The largest digit to be displayed by any group of BCD numbers is 9; six of the sixteen possible four-bit codes are not used at all in this system. Because of this inefficiency, the octal (base 8) and hexadecimal (base 16) systems are often preferred. Many digital systems process numbers in binary form but display them in BCD, octal, or hexadecimal form. The largest digit used in a base 8 system is 7.
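The two conversion procedures just described (repeated division by 2, and the digit-by-digit BCD grouping) can be sketched directly; the function names are illustrative:

```python
def to_binary(n):
    # Decimal -> binary by repeated division by 2; the remainders,
    # read from last to first, form the binary number.
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, remainder = divmod(n, 2)
        bits.append(str(remainder))
    return "".join(reversed(bits))

def from_binary(bits):
    # Binary -> decimal by adding the place values where a 1 appears.
    return sum(int(b) << i for i, b in enumerate(reversed(bits)))

def to_bcd(n):
    # BCD: each decimal digit becomes its own four-bit group, so
    # 105 -> 0001 0000 0101 (twelve binary digits for three decimal digits).
    return " ".join(format(int(d), "04b") for d in str(n))
```

Note that `to_binary(105)` is only seven digits long, while the BCD form needs twelve, which illustrates the inefficiency mentioned above.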
The place values starting at the left of the octal point are the powers of eight: 8⁰=1, 8¹=8, 8²=64, 8³=512, 8⁴=4096, and so on. The process of converting an octal number to a decimal number is the same as that used in the binary-to-decimal conversion process: each digit is multiplied by its place value and the products are added. For the octal number 402₈, for example, the sum is (4×64)+(0×8)+(2×1), so the equivalent decimal number is 258₁₀.

Converting an octal number to an equivalent binary number is similar to the BCD conversion process. The octal number is first divided into digits according to place value, and each octal digit is then converted into an equivalent binary number using exactly three digits. Converting a decimal number to an octal number is a process of repeated division by the number 8. After each quotient is determined, the remainder is brought down as the place value; when the quotient is even with no remainder, a 0 is transferred to the place position. The result of converting 4098₁₀ to base 8 is 10002₈.

Converting a binary number to an octal number is an important conversion process in digital circuits. Binary numbers are first processed at very high speed; an output circuit then accepts this signal and converts it to an octal signal displayed on a readout device. The binary number must first be divided into groups of three digits, starting at the octal point. Each binary group is then converted into an equivalent octal number. These numbers are then combined, while remaining in their same respective places, to represent the equivalent octal number.

3.2.4 Hexadecimal Number System

The hexadecimal number system is used in digital systems to process large number values. The base of this system is 16, which means that the largest digit used in a place is 15. The digits used by this system are the numbers 0-9 and the letters A-F, the letters A-F denoting the digits 10-15, respectively. The place values to the left of the hexadecimal point are the powers of 16: 16⁰=1, 16¹=16, 16²=256, 16³=4096, and so on.

The process of changing a hexadecimal number to a decimal number starts by writing the hexadecimal digits in proper order. The place values, or powers of the base, are then positioned under the respective digits in step 2. In step 3, the value of each digit is recorded. The values in steps 2 and 3 are then multiplied together and added.
The sum gives the decimal equivalent value of the hexadecimal number. Converting a hexadecimal number to a binary number is similar to the octal process: initially, each hexadecimal digit is converted to a binary group of four digits, and the binary groups are combined to form the equivalent binary number. The conversion of a decimal number to a hexadecimal number is achieved by repeated division, as with the other number systems; in this procedure the division is by 16 and remainders can be as large as 15. Converting a binary number to a hexadecimal number is done by dividing the binary number into groups of four digits, starting at the hexadecimal point, and converting each group to its hexadecimal digit.

3.3 Binary Logic Circuits

In digital circuit-design applications, binary signals are far superior to those of the octal, decimal, or hexadecimal systems. Binary signals can be processed very easily through electronic circuitry, since they can be represented by two stable states of operation. These states can be easily defined as on or off, 1 or 0, up or down, voltage or no voltage, right or left, or any other two-condition states. There must be no in-between state.

The symbols used to define the operational state of a binary system are very important. In positive binary logic, the presence of voltage, on, true, or a letter designation (such as A) is used to denote the operational state 1. No voltage, off, false, and the letter A′ are commonly used to denote the 0 condition. A circuit can be set to either state and will remain in that state until it is caused to change conditions. Any electronic device that can be set in one of two operational states or conditions by an outside signal is said to be bistable. Relays, lamps, switches, transistors, diodes and ICs may be used for this purpose. A bistable device is able to store one binary digit. By using many of these devices, it is possible to build an electronic circuit that will make decisions based upon the applied input signals. The output of this circuit is a decision based upon the operational conditions of the input.
Since the application of bistable devices in digital circuits is to make logical decisions, such circuits are commonly called binary logic circuits. If we were to draw a circuit diagram for such a system, including all the resistors, diodes, transistors and interconnections, we would face an overwhelming task, and an unnecessary one. Anyone who read the circuit diagram would mentally group the components into standard circuits and think in terms of the "system" functions of the individual gates. For this reason, we design and draw digital circuits with standard logic symbols.

Three basic circuits of this type are used to make simple logic decisions: the AND circuit, the OR circuit, and the NOT circuit. Electronic circuits designed to perform logic functions are called gates; this term refers to the capability of a circuit to pass or block specific digital signals. The logic-gate symbols are shown in Fig. 3-1.

The small circle at the output of the NOT gate indicates the inversion of the signal: mathematically, the output is A′, the complement of the input A. Without the small circle, the symbol would represent an amplifier (or buffer) with a gain of unity.

An AND gate produces a 1 at the output only when all of its inputs are in the 1 state simultaneously. The AND gate in Fig. 3-1 produces a 1 output only when A and B are both 1. Mathematically, this action is described as A·B = C; this expression shows the multiplication operation. An OR gate such as the one in Fig. 3-1 produces a 1 output when either or both inputs are 1. Mathematically, this action is described as A+B = C; this expression shows OR addition. This gate is used to make the logic decision of whether or not a 1 appears at either input.

An IF-THEN type of sentence is often used to describe the basic operation of a logic gate. For example, if the inputs applied to an AND gate are all 1, then the output will be 1. If a 1 is applied to any input of an OR gate, then the output will be 1. If an input is applied to a NOT gate, then the output will be the opposite, or inverse. The logic gate symbols in Fig.
3-1 show only the input and output connections. The actual gates, when wired into a digital circuit, would also have power supply connections, typically at pins 14 and 7.

3.4 Combination Logic Gates

When a NOT gate is combined with an AND gate or an OR gate, it is called a combination logic gate. A NOT-AND gate is called a NAND gate, which is an inverted AND gate. Mathematically the operation of a NAND gate is (A·B)′ = C. A combination NOT-OR, or NOR, gate produces a negation of the OR function; mathematically the operation of a NOR gate is (A+B)′ = C, and a 1 appears at the output only when A is 0 and B is 0. The logic symbols are shown in Fig. 3-3. The prime (drawn as a bar in the figures) denotes the inversion, or negative function, of the gate. The logic gates discussed here could be built from individual components, but in actual digital electronic applications solid-state integrated components are ordinarily used to accomplish gate functions.

Boolean algebra is a special form of algebra that was designed to show the relationships of logic operations. This form of algebra is ideally suited for the analysis and design of binary logic systems. Through the use of Boolean algebra, it is possible to write mathematical expressions that describe specific logic functions. Boolean expressions are more meaningful than complex word statements or elaborate truth tables. The laws that apply to Boolean algebra are used to simplify complex expressions.
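The five gate functions described above can be written out as truth functions on the values 0 and 1, a convenient way to check truth tables:

```python
def AND(a, b): return a & b            # 1 only when both inputs are 1
def OR(a, b): return a | b             # 1 when either or both inputs are 1
def NOT(a): return 1 - a               # inversion: 1 -> 0, 0 -> 1
def NAND(a, b): return NOT(AND(a, b))  # inverted AND
def NOR(a, b): return NOT(OR(a, b))    # inverted OR: 1 only when A=0 and B=0
```

Running each function over the four input combinations (0,0), (0,1), (1,0), (1,1) reproduces the familiar truth tables.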
Through this type of operation it may be possible to reduce the number of logic gates needed to achieve a specific function before the circuits are designed. In Boolean algebra the variables of an equation are assigned letters of the alphabet. Each variable then exists in a state of 1 or 0 according to its condition. The 1, or true, state is normally represented by a single letter such as A, B or C. The opposite state or condition is then described as 0, or false, and is represented by A′. This is described as NOT A, A negated, or A complemented.

Boolean algebra is somewhat different from conventional algebra with respect to mathematical operations. The Boolean operations are expressed as follows:

Multiplication: A AND B, AB, A·B
OR addition: A OR B, A+B
Negation, or complementing: NOT A, A′

Assume that a digital logic circuit is to produce an output only when C is on by itself or when A, B and C are all on. The Boolean expression ABC + A′B′C = X describes the desired output. Eight (2³) different combinations of A, B, and C exist in this expression because there are three inputs; only two of those combinations should cause a signal that will actuate the output. When a variable is not on (0), it is expressed as a negated letter. The original statement is expressed as follows: with A, B, and C on, or with A off, B off, and C on, an output (X) will occur:

ABC + A′B′C = X

A truth table illustrates whether this expression is achieved or not. Table 3-1 shows a truth table for this equation. First, ABC is determined by multiplying the three inputs together; a 1 appears only when the A, B, and C inputs are all 1. Next the negated inputs A′ and B′ are determined. Then the product of inputs C, A′, and B′ is listed. The next column shows the addition of ABC and A′B′C. The output column of this equation shows that a 1 is produced only when A′B′C is 1 or when ABC is 1. A logic circuit to accomplish this Boolean expression is shown in Fig.
3-4. Initially the equation is analyzed to determine its primary operational function. Step 1 shows the original equation. The primary function is addition, since it influences all parts of the equation in some way. Step 2 shows the primary function changed to a logic gate diagram. Step 3 shows the branch parts of the equation expressed by logic diagrams, with AND gates used to combine terms. Step 4 completes the process by connecting all inputs together. The circles at the inputs of the lower AND gate are used to achieve the negated function of those branch parts.

The general rules for changing a Boolean equation into a logic circuit diagram are very similar to those outlined. Initially the original equation must be analyzed for its primary mathematical function. This is then changed into a gate diagram whose inputs are the branch parts of the equation. Each branch operation is then analyzed and expressed in gate form. The process continues until all branches are completely expressed in diagram form. Common inputs are then connected together.

3.5 Timing and Storage Elements

Digital electronics involves a number of items that are not classified as gates. Circuits or devices of this type control the operation of a system. Included in this category are such things as timing devices, storage elements, counters, decoders, memory, and registers. Truth tables, symbols, operational characteristics, and applications of these items will be presented here.

A logic gate is ordinarily packaged in an IC chip. The internal construction of the chip cannot be effectively altered; operation is controlled by the application of external signals to the inputs, and as a rule very little can be done to control operation other than altering the input signals. The logic circuit in Fig. 3-4 is a combinational circuit, because the output responds immediately to the inputs and there is no memory.
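As a check on Table 3-1, the example function (output 1 only when C is on by itself, or when A, B and C are all on) can be evaluated over all eight input combinations:

```python
def X(a, b, c):
    # Output of the example expression: ABC plus (NOT A)(NOT B)C.
    # It is 1 only for the input combinations 111 and 001.
    return (a & b & c) | ((1 - a) & (1 - b) & c)

# All eight rows of the truth table, as (inputs, output) pairs.
truth_table = [((a, b, c), X(a, b, c))
               for a in (0, 1) for b in (0, 1) for c in (0, 1)]
```

Exactly two of the eight rows produce an output of 1, matching the word statement of the problem.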
When memory is a part of a logic circuit, the system is called a sequential circuit, because its output depends on the input plus its previous state; its state changes at the moment an input signal is applied. A bistable multivibrator, in the strict sense, is a flip-flop: when it is turned on, it assumes a particular operational state, and it does not change states until the input is altered. A flip-flop has two outputs of opposite polarity, and two inputs are usually needed to alter its state. A variety of names are used for the inputs, and these vary a great deal between different flip-flop types.

1. R-S flip-flops

Fig. 3-5 shows the logic circuit construction of an R-S flip-flop. It is constructed from two NAND gates; the output of each NAND provides one of the inputs for the other NAND. R stands for the reset input and S represents the set input. The truth table and logic symbol are shown in Fig. 3-6. Notice that the truth table is somewhat more complex than that of a gate: it shows the applied inputs, the previous output, and the resulting output.

To understand the operation of an R-S flip-flop, we must first look at the previous outputs, that is, the status of the outputs before a change is applied to the inputs. The first four rows of the truth table have previous outputs Q=1 and Q′=0; the second four rows have Q=0 and Q′=1. In the first case, one input to the S-side NAND is 0, so its output Q is 1; both inputs to the R-side NAND are therefore 1, which holds Q′ at 0, and the state is stable. By symmetry, the logic circuit is also stable with Q=0 and Q′=1. If R momentarily becomes 0, the output of the R-side NAND, Q′, rises to 1, causing the S-side NAND to see two 1 inputs and Q to fall to 0; the opposite change can be realized by a 0 at S. The outputs Q and Q′ are unpredictable when the inputs R and S are both in the 0 state; this case is not allowed. Seldom would individual gates be used to construct a flip-flop; rather, a designer would use one of the special packages that place several flip-flops on a single chip.

A variety of different flip-flops are used in digital electronic systems today. In general, each flip-flop type has its own special feature. The R-S-T flip-flop, for example, is a triggered R-S flip-flop.
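The cross-coupled NAND construction of Fig. 3-5 can be simulated by iterating the two gate equations until the feedback settles; in this NAND form the S and R inputs are active-low, an assumption consistent with the 0-pulses described above:

```python
def nand(a, b):
    return 1 - (a & b)

def rs_flip_flop(s, r, q, q_bar):
    # Cross-coupled NAND R-S flip-flop with active-low inputs:
    # a momentary 0 on S sets Q to 1, a momentary 0 on R resets Q to 0,
    # and S = R = 1 holds the previous state. A few passes through the
    # two gate equations let the feedback settle.
    for _ in range(4):
        q = nand(s, q_bar)
        q_bar = nand(r, q)
    return q, q_bar
```

With S = R = 1 the returned state equals the previous state, which is exactly the memory behavior that distinguishes a sequential circuit from a combinational one.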
The R-S-T flip-flop does not change states when the R and S inputs assume a value until a trigger pulse is applied. This permits a large number of flip-flops to change states all at the same time. Fig. 3-7 shows the logic circuit construction; the truth table and logic symbol are shown in Fig. 3-8. The R and S inputs are active only when the signal at the gate input (T) is 1. Normally, such timing, or synchronizing, signals are distributed throughout a digital system as clock pulses, as shown in Fig. 3-9. The symmetrical clock signal provides two edges each period, and the circuit can be designed to trigger at the leading or the trailing edge of the clock. The logic symbols for edge-triggered flip-flops are shown in Fig. 3-10.

2. J-K flip-flops

Another very important flip-flop is the J-K flip-flop, which has no unpredictable output state. The J and K inputs serve as set and reset; in addition, J-K flip-flops may provide preset and preclear functions, which are used to establish sequential timing operations. Fig. 3-11 shows the logic symbol and truth table of a J-K flip-flop.

3.5.2 Counters

A flip-flop can store one binary digit, it can be used in switching operations, and it can count pulses. A series of interconnected flip-flops is generally called a register; each register stage can store one binary digit, or bit, of data. Several flip-flops connected appropriately form a counter. Counting is a fundamental digital electronic function.

For an electronic circuit to count, a number of things must be achieved. Basically, the circuit must be supplied with some form of data or information that is suitable for processing. Typically, electrical pulses that turn on and off are applied to the input of a counter. These pulses must initiate a state change in the circuit when they are received. The circuit must also be able to recognize where it is in the counting sequence at any particular time; this requires some form of memory. The counter must also be able to respond to the next number in the sequence. In digital electronic systems flip-flops are primarily used to achieve counting.
This type of device is capable of changing states when a pulse is applied, and after a certain number of input pulses it produces an output pulse. There are several types of counters used in digital circuitry today; probably the most common is the binary counter, which is designed to process two-state, or binary, information. J-K flip-flops are commonly used in binary counters.

Refer now to the single J-K flip-flop of Fig. 3-11. In its toggle state, this flip-flop is capable of counting. First, assume that the flip-flop is in its reset state, which makes Q equal to 0 and Q′ equal to 1. Normally, we are concerned only with the Q output in counting operations. The flip-flop is now connected for operation in the toggle mode: J and K are both made 1. When a pulse is applied to the T, or clock, input, Q changes to 1; this means that with one pulse applied, a 1 is generated at the output. When the next pulse arrives, Q resets, or changes to 0. Essentially, this means that two input pulses produce only one output pulse. This is a divide-by-two function.

For binary numbers, counting is achieved by a chain of divide-by-two flip-flops. To count more than one pulse, additional flip-flops must be employed. For each flip-flop added to the counter, its capacity is increased by a power of 2. With one flip-flop the maximum count is 2⁰, or 1. Two flip-flops count two places, 2⁰ and 2¹, reaching a maximum count of 3, or the binary number 11; the count sequence is 00, 01, 10, and 11, after which the counter clears and returns to 00. In effect, this counts four state changes. Three flip-flops count three places, 2⁰, 2¹, and 2², permitting a total of eight state changes; the binary values are 000, 001, 010, 011, 100, 101, 110 and 111, and the maximum count is seven, or 111. Four flip-flops count four places, 2⁰, 2¹, 2², and 2³, and make a total of 16 state changes.
The maximum count would be 15, or the binary number 1111. Each additional flip-flop extends the count by one binary place.

Henan Polytechnic University, Electrical Engineering and Automation: Chinese-English bilingual parallel translation.
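The divide-by-two chains described in the counter discussion can be simulated; each simulated stage toggles, and clocks the next stage only when its Q output falls from 1 to 0 (the stage counts and pulse numbers below are illustrative):

```python
def ripple_count(n_stages, pulses):
    # Chain of toggle-mode (J = K = 1) flip-flops: each stage divides
    # its input frequency by two. A stage clocks the next one only when
    # its Q falls from 1 to 0, so the Q outputs, read together as a
    # binary number, hold the pulse count modulo 2**n_stages.
    q = [0] * n_stages
    for _ in range(pulses):
        for stage in range(n_stages):
            q[stage] ^= 1
            if q[stage] == 1:   # rose 0 -> 1: no falling edge, carry stops
                break
    return sum(bit << i for i, bit in enumerate(q))
```

Two stages count 00, 01, 10, 11 and then clear; three stages reach a maximum of 7; four stages reach 15, matching the sequences in the text.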

Graduation Project Chinese-English Translation


Key Technologies for the Development of Four-Rotor Micro Air Vehicles

To date, basic theoretical and experimental research on four-rotor micro air vehicles has made considerable progress, but for the technology to truly mature and become practical, a number of key technical challenges must still be met.

1. Optimal design

The overall design of a small rotary-wing aircraft needs to be guided by the following principles: light weight, small size, high speed, low power consumption and low cost. But these principles constrain and conflict with one another; for example, for vehicles of the same weight, size is inversely proportional to speed and to low energy consumption. Therefore, in the overall design of a miniature four-rotor aircraft, the first step is to select an appropriate body material based on performance and price, reducing the weight of the aircraft as much as possible; the second is to weigh factors such as weight, size, speed and energy consumption together, ensuring an optimized design.

2. Power and energy

The power unit includes the rotors, micro DC motors, gear reducers, photoelectric encoders and motor drive modules; the energy is provided by onboard batteries. Weight is a major constraint on a four-rotor micro air vehicle, and the power and energy devices account for a large share of the weight of the entire body: for the OS4 II, the proportion is as high as 75%. Therefore, the development of lighter, more efficient power and energy devices is key to further miniaturizing four-rotor aircraft.

On the other hand, lift generation by the power unit consumes most of the airborne energy; for the OS4 II, for example, the power unit accounts for 91% of the power consumption. To increase the efficiency of the aircraft, the key is to improve the efficiency of the power plant.
In addition to maximizing transmission efficiency, the motor and reduction ratio must be selected so that, subject to the two indicators of maximum efficiency and maximum power output, the motor operating point stays within the recommended running region.

3. Establishing a mathematical model

To achieve effective control of a four-rotor micro air vehicle, accurate models must be established for its various flight regimes. During flight, however, the vehicle is not only subject to a variety of physical effects (aerodynamics, gravity, gyroscopic effects, and rotor moments of inertia), but is also vulnerable to disturbances in the external environment, such as airflow. It is therefore difficult to establish an effective, reliable dynamic model. In addition, because the rotors are small, light, and easily deformed, accurate aerodynamic performance parameters are hard to obtain, which also directly affects the accuracy of the model.

Establishing a mathematical model of a four-rotor MAV further requires studying and resolving rotor aerodynamics at low Reynolds numbers. The aerodynamics of micro air vehicles differ greatly from those of conventional aircraft; many existing aerodynamic theories and analysis tools do not apply, and new theories and research techniques must be developed.

4. Flight control

A four-rotor micro air vehicle is an underactuated system with six degrees of freedom (position and attitude) but only four control inputs (the rotor speeds). It is multivariable, nonlinear, strongly coupled, and sensitive to disturbances, which makes the design of its flight control system very difficult.
In addition, the accuracy of the controller model and the precision of the sensors also affect performance.

Attitude control is the key to the entire flight control problem, because the attitude and position of a four-rotor micro air vehicle are directly coupled (rolling or pitching directly causes the body to translate sideways or fore-and-aft). If the attitude of the aircraft can be controlled precisely, then a PID control law is sufficient to achieve position and velocity control. International studies have focused on attitude control design and validation; the results show that although nonlinear control laws obtain good results in simulation, they depend strongly on model accuracy, and their actual performance is no better than that of PID control. Therefore, developing an attitude controller with strong disturbance rejection and environmental adaptability is a priority for the flight control system of a miniature four-rotor aircraft.

5. Positioning, navigation, and communication

Miniature four-rotor aircraft are primarily intended for near-surface environments, such as urban areas, forests, and tunnel interiors, where positioning, navigation, and communication all pose difficulties. On one hand, GPS often does not work in near-surface environments, so reliable and accurate positioning and navigation technology must be developed by integrating inertial navigation, optics, acoustics, radar, and terrain matching. On the other hand, given the terrain and interference sources of near-surface environments, the reliability, security, and robustness of current communication technology still cannot meet actual demands. Therefore, developing small, lightweight, low-power, reliable, and jam-resistant communication links is crucial to the development of four-rotor micro air vehicle technology, in particular multi-vehicle coordinated control.
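As a concrete illustration of the PID attitude loop discussed above, here is a minimal single-axis (roll) sketch. The plant is a crude rigid-body double integrator (torque to angle), and the inertia, gains, time step, and function name are assumed illustrative values, not figures from the text.

```python
def simulate_roll_pid(target=0.3, kp=8.0, ki=0.2, kd=4.0,
                      inertia=0.02, dt=0.002, steps=5000):
    """Drive a 1-DOF rigid-body roll model to `target` (rad) with a PID law."""
    angle, rate = 0.0, 0.0
    integral, prev_err = 0.0, target
    for _ in range(steps):
        err = target - angle
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        torque = kp * err + ki * integral + kd * deriv   # PID control law
        rate += (torque / inertia) * dt                  # angular acceleration
        angle += rate * dt
    return angle
```

In practice one such loop runs per axis at the flight-control rate, with the derivative term low-pass filtered against sensor noise.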

6. Graduation Project (Thesis) Foreign-Language Translation (Source Text) Template

No.: ____  Guilin University of Electronic Technology, Information Science Institute — Graduation Project (Thesis) Foreign-Language Translation (Source Text). Department: ____  Major: ____  Student name: ____  Student ID: ____  Supervisor's unit: ____  Name: ____  Title: ____  Date: ____

Notes: 1. Center all filled-in items and keep the underline lengths uniform; use size-3 SimSun type. 2. Print on A4 paper; set all four page margins to 2.5 cm. 3. Body text: small size-4 Times New Roman, fixed line spacing (20 pt), default character spacing (100% scaling, standard). 4. The "X pages in total" figure in the page header must be updated manually.

Research on Heat Dissipation of High-Power LEDs

Abstract: How to improve the heat dissipation capability of high-power LEDs is the core problem to be solved in LED device packaging and in the design of device applications.

This paper introduces and analyzes the current state of research on high-power LED heat-dissipation packaging technology in China and abroad, and summarizes its development trends and prospective applications.

Keywords: high-power LED; heat dissipation; packaging

1. Introduction

Since its invention, the light-emitting diode (LED) has achieved full color and high brightness, and white LEDs have been developed on the basis of blue and violet LEDs, bringing another leap forward in the history of lighting.

LEDs offer low energy consumption, long lifetime, and durability, and are therefore widely expected to replace traditional lighting as the light source of the future.

As the fourth generation of electric light source, the high-power LED, known as the "green lighting source", has excellent properties: small size, safe low-voltage operation, long lifetime, high electro-optical conversion efficiency, fast response, energy efficiency, and environmental friendliness. It is bound to replace traditional incandescent, tungsten-halogen, and fluorescent lamps to become the new light source of the 21st century.

An ordinary LED is typically rated at 0.05 W with a working current of 20 mA; a high-power LED can reach 1 W, 2 W, or even tens of watts, with working currents ranging from tens to hundreds of milliamperes.

High-power LEDs feature small size, low power consumption, low heat generation, long lifetime, fast response, safe low-voltage operation, good weather resistance, and good directionality.

The housing can be made of polycarbonate (PC) tube, withstanding temperatures as high as 135 °C and as low as −45 °C.

They are widely used in special industries such as oil fields, petrochemicals, railways, mining, and the military, as well as in stage decoration, urban landscape lighting, display screens, and sports venues, and have broad application prospects in special-purpose working luminaires.

However, because high-power white LEDs currently suffer from relatively low conversion efficiency, small luminous flux, and high cost, their short-term applications will mainly be special-purpose working luminaires in particular fields; general lighting is only a medium- to long-term goal.
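To make the heat-dissipation problem concrete, the junction temperature of an LED can be estimated with the standard series thermal-resistance model, T_j = T_a + P_heat · (R_jc + R_cs + R_sa). The numbers below (wall-plug efficiency, thermal resistances) are assumed typical values for illustration, not figures from the text.

```python
def junction_temp(p_electrical, wall_plug_eff, r_jc, r_cs, r_sa, t_ambient=25.0):
    """Steady-state junction temperature (deg C) from the series thermal-resistance model."""
    p_heat = p_electrical * (1.0 - wall_plug_eff)  # electrical power not emitted as light
    return t_ambient + p_heat * (r_jc + r_cs + r_sa)

# e.g. a 1 W LED at 30% wall-plug efficiency on a modest heat sink:
# 10 K/W junction-to-case, 2 K/W case-to-sink, 30 K/W sink-to-ambient
print(junction_temp(1.0, 0.3, 10.0, 2.0, 30.0))  # about 54 deg C
```

The model shows why packaging matters: every kelvin-per-watt removed from the junction-to-ambient path lowers the junction temperature proportionally to the dissipated power.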

Graduation Project Translation: Source Text


Heat Transfer Simulation in Drag-Pick Cutting of Rocks

John P. Loui (a), U.M. Rao Karanam (b)
a. Central Mining Research Institute, Barwa Road, Dhanbad, Jharkhand, India
b. Department of Mining Engineering, Indian Institute of Technology, Kharagpur, India

Abstract

A two-dimensional transient heat transfer model is developed using the finite element method for the study of temperature rise during continuous drag-cutting. Simulation results such as the temperature build-up with time and the maximum stabilized pick-rock interface temperature are compared with experimental results for various input parameters. The effect of frictional force and cutting speed on the temperature developed at the pick-rock interface is also studied and compared with the experimental observations.

Keywords: FEM; Rock cutting; Heat transfer; Wear

1. Introduction

All rock cutting operations involve rock fracturing and the subsequent removal of the broken rock chips from the tool-rock interface. Drag-picks are one of the many types of cutting tools used in rock excavation engineering. They are versatile cutting tools and have proved to be more efficient and desirable for cutting soft rock formations; there is, however, a continuous effort to extend their application to all types of rock formations.

The forces responsible for rock fracture under the action of a drag-cutter can be resolved into two mutually perpendicular directions: the thrust (normal) force Fn and the cutting (tangential) force Fc. It is the cutting force that decides the specific energy requirement for any cutting operation. A major part of the total energy spent during drag-cutting is lost as frictional heat, and the resulting temperature rise at the pick-rock interface has a significant effect on the wear rate of the cutting tool.
Gray et al. (1962), De Vries (1966), Roxborough (1969), Barbish and Gardner (1969), Estes (1978), Detournay and Defourny (1992), Cools (1993), and Loui and Rao (1997) found that the higher temperatures encountered in tool-rock interaction ultimately result in a drastic reduction in drag-bit performance. They may also cause significant thermal stresses in the rock as well as in the tool. The experimental investigations conducted earlier (De Vries, 1966; Estes, 1978; Karfakis and Heins, 1966; Loui and Rao, 1997; Martin and Fowell, 1997) could only measure pick-rock interface temperatures at two or three locations on the cutting tool. Most of the temperature measurements during laboratory experiments were made with thermocouples placed within the tool. Conducting such experiments is not only time consuming and costly, but also provides inadequate information if the objective is to study the temperature distribution in the pick-rock system. Analytical modelling for predicting the temperature during rock cutting requires major simplification of the problem and may not provide accurate results for the complicated real-life situation of drag-cutting. Therefore, a numerical modelling technique, the finite element method, is used in the current study to develop a two-dimensional transient heat transfer model that solves for the temperature profile in the pick-rock system. The present paper discusses the development of this transient heat transfer model and its experimental validation.

2. Theoretical heat transfer analysis in drag-cutting

Prior to the finite element solution of the problem, a theoretical analysis was carried out to evaluate the input parameters for the finite element program. These parameters include the velocity field in the pick-rock system, the forces acting on the rake and flank faces of the drag-cutter, and the heat generated by interfacial friction while cutting.

2.1.
Velocity fields

For simplicity, the drag-cutting process is simulated with the pick remaining stationary while the rock moves past the cutter at a cutting velocity Vc. The resulting velocity fields in the uncut rock and in the fully formed chip are evaluated theoretically as input parameters for the numerical model. Though researchers in the past have postulated linear (Nishmatsu, 1972) and curvilinear (Loui, 1998) paths of rock failure during chip formation, the path is assumed to be linear here for the evaluation of the velocity fields in the chip and the uncut rock. Fig. 1 illustrates this process of chipping under the action of a drag-cutter; the failure path is linear and at an angle φ with respect to the cutting velocity. The relationships between the cutting velocity Vc, the shear velocity along the shear plane Vs, and the chip velocity along the rake face Vr are represented in Fig. 2. These velocity fields in the rock are evaluated relative to the pick, and thus the pick domain is assumed to be stationary against a moving rock domain. It has been found from chip-formation simulation studies (Loui, 1998) that the fracture (shear) plane lies at an angle of 30-35° with respect to the cutting velocity.

From the velocity diagram (Fig. 2), the velocity components u and v, in the x and y directions respectively, for the uncut rock and the fully formed chip are:

Uncut rock: u = Vc and v = 0. (1)

Fully formed chip: u = Vr sin γ and v = Vr cos γ. (2)

2.2. Forces

The forces acting on an orthogonal drag-cutter are represented diagrammatically in Fig. 1.
The cutting force Fc and the thrust force Fn were measured experimentally and are related to the normal and frictional forces at the rake and flank faces (N and F, and N′ and F′, respectively) as follows:

Fc = N cos γ + F sin γ + F′, (3)

Fn = N′ + F cos γ − N sin γ. (4)

If μ is the tool-rock interface friction coefficient,

F/N = F′/N′ = μ. (5)

Solving for N and N′ gives

N = (Fc − μFn) / [(1 − μ²) cos γ + 2μ sin γ], (6)

N′ = [Fn (cos γ + μ sin γ) − Fc (μ cos γ − sin γ)] / [(1 − μ²) cos γ + 2μ sin γ]. (7)

2.3. Heat generation

Heat generation during drag-cutting is mainly caused by friction at the interface between the pick and the rock (at the flank and rake faces) as the cutter is dragged against the rock surface at a certain cutting velocity. Large or repeated plastic deformations are required for deformation itself to generate heat, as in metal cutting. Though elasto-plastic deformations take place in certain rock types before failure and chip formation, such deformations are not large enough in rocks to generate significant heat. Therefore, for the purpose of estimating the heat generation during drag-cutting, rock chipping can be assumed to be caused by brittle failure and heat generation limited to frictional heating:

Qtot = Qr + Qf = μ(N Vr + N′ Vc), (8)

where Qr and Qf are the frictional heat generated per second (watts) at the rake and flank faces, respectively, and Vr and Vc are the interfacial chip velocities at the rake face and flank face, respectively. The velocity at which rock slides along the rake face of the tool (Vr) after chipping is difficult to assess. A fully formed chip does not exert a force against the rake face of the tool, since it is completely detached from the rock mass and is thrown clear during cutting. Researchers have observed that drag tools undergo severe flank wear (wear land) and insignificant wear of the cutting face (Plis et al., 1988).
Hence, for all practical purposes, the heat generated by tool-rock friction at the rake face can be ignored, and Eq. (8) reduces to

Qtot = Qf = μN′Vc. (9)

3. Discretization of the pick-rock system

Since a simple orthogonal cutting tool is considered, heat transfer in the pick-rock system is treated as a two-dimensional problem by ignoring end effects. The whole domain is discretized and analyzed in a two-dimensional Cartesian coordinate system. In the finite element solution, the domain is discretized into four-noded isoparametric elements as shown in Fig. 3. In the cutting simulations the pick is assumed to be stationary, so the spatial discretization of the pick does not change with time. However, since the rock is assumed to move past the pick at a constant velocity Vc, the discretized rock domain changes with time according to the velocity fields evaluated above.

4. Finite element formulation

Galerkin's approach has been used to convert the thermal energy equation (Eq. (10)) into a set of equivalent integral equations:

k(∂²T/∂x² + ∂²T/∂y²) − ρCp(u ∂T/∂x + v ∂T/∂y) + Q = ρCp ∂T/∂t, (10)

where k is the thermal conductivity, ρ is the density, and Cp is the specific heat capacity at constant pressure. Let T̃ be the approximate solution temperature and Rfem the finite element residual. Then

k(∂²T̃/∂x² + ∂²T̃/∂y²) − ρCp(u ∂T̃/∂x + v ∂T̃/∂y) + Q − ρCp ∂T̃/∂t = Rfem. (11)

The approximate temperature solution T̃ can be represented over the solution domain by

T̃ = [N]{Tn}, (12)

where [N] is the overall shape function vector and {Tn} is the nodal temperature vector. With the use of Eq. (12), Eq. (11) can be discretized (Shih, 1984).

5. Laboratory micro-pick experiments

The cutting action was simulated using laboratory-scale micro-picks, and rotary drag-cutting was carried out against an applied vertical thrust force.
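The force resolution and heat-rate relations above can be checked numerically. The sketch below implements Eqs. (6), (7) and (9) as reconstructed here; the function names and any sample values are illustrative assumptions, not data from the paper.

```python
import math

def face_forces(fc, fn, gamma_deg, mu):
    """Recover rake-face and flank-face normal forces N, N' (Eqs. (6)-(7))
    from the measured cutting force Fc and thrust force Fn."""
    g = math.radians(gamma_deg)
    den = (1.0 - mu ** 2) * math.cos(g) + 2.0 * mu * math.sin(g)
    n_rake = (fc - mu * fn) / den
    n_flank = ((math.cos(g) + mu * math.sin(g)) * fn
               - (mu * math.cos(g) - math.sin(g)) * fc) / den
    return n_rake, n_flank

def flank_heat_rate(n_flank, mu, vc):
    """Frictional heat rate at the flank face, Q_tot = mu * N' * Vc (Eq. (9)), in watts."""
    return mu * n_flank * vc
```

A quick consistency check: substituting N and N′ back into Eqs. (3) and (4) with F = μN and F′ = μN′ reproduces the measured Fc and Fn exactly.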
The applied thrust levels (Fn) were in the range 230-750 N and the cutting speeds (Vc) were 0.01, 0.16 and 0.25 m/s, which are within practical drag-cutting ranges. The experiments were conducted on a vertical drill machine; a schematic diagram of the complete experimental setup is shown in Fig. 4. The laboratory-scale micro-picks used for rotary cutting had tungsten carbide inserts as the cutting edge. The inserts were 12 mm long, 10 mm wide and 3.5 mm thick, with a wedge angle of 80° and a rake angle of 10°. To measure the temperature developed during cutting, a copper-constantan thermocouple was introduced into a 1-mm-diameter hole drilled at a distance of 2 mm from the cutting edge within the tungsten carbide insert and brazed with silver to secure a good hold. The micro-picks with the thermocouple are shown in Fig. 5. A pre-calibrated millivoltmeter with a range of 0.1-1000 mV was used to record the voltage difference across the thermocouple. The torque generated at the pick-rock interface was measured using a spoked-wheel dynamometer (Rao and Misra, 1994) in line with a recorder. In all these experiments, the drag-pick cutter was held stationary between the plates of the dynamometer and the rock core samples were held in a holder. The rock sample holder is designed to hold samples at one end, while the other end is provided with a taper that fits into the drill shank. With this arrangement, the rock core sample rotates against the stationary drag-pick during cutting. The pick holder and rock holder are shown in Fig. 6. The experimental results are discussed in detail in Loui and Rao (1997); only a few of them are used in this paper for validation of the numerical model.

6.
Results and discussion

The numerical model developed in the current study can predict the pick-rock interfacial temperature and the temperature profiles in the pick-rock system. The main input parameters influencing the temperature development at the pick-rock interface are the cutting speed and the interfacial friction at the flank face of the pick; Eq. (9) shows that both are linearly related to the quantity of heat generated at the interface. The results obtained from the numerical model are compared with the experimental observations below.

6.1. Temperature build-up

All the simulation runs indicated that after 6 minutes of pick-rock contact time, the temperature throughout the domain stabilizes. The pick-rock interface temperature is defined as the average interface temperature along the flank face of the tool and is evaluated using Eq. (23). The temperature rise with time at the pick-rock interface for sandstone at a cutting speed of 0.25 m/s, a thrust force of 230 N, and a depth of cut of 1 mm is shown in Fig. 7, together with the experimental observation for the same input parameters. The trend by which the temperature builds up and then stabilizes is in good agreement with the experimental observations. This trend arises because the heat generated at the onset of cutting is much greater than the heat dissipated. As cutting proceeds, the temperature builds up in the pick-rock system; when the temperature reaches higher regimes, the heat dissipation by convection and conduction also increases and eventually equals the frictional heat generation.
As the rate of heat generation remains constant, provided the machine operating parameters are unaltered, the temperature in the pick-rock system tends to stabilize after a few minutes of cutting.

6.2. Stabilized interface temperature

The stabilized pick-rock interface temperature at the end of 6 min of continuous cutting is termed the stabilized interface temperature for that particular simulation or experiment. The variation of the stabilized interface temperature has been studied against the input parameters that directly influence the temperature, namely the cutting speed and the frictional force. Other parameters, such as the depth of cut and the rake angle, influence the frictional force at the interface and therefore affect the temperature only indirectly. Fig. 9 shows the variation of stabilized interfacial temperature with cutting speed and its comparison with the experimental observations; the input parameters for the numerical model correspond to the operating parameters used in the experiments. The values predicted by the FEM analysis vary linearly (Fig. 9), since the cutting speed is directly proportional to the quantity of heat generated (Eq. (9)). The other parameter that directly influences the heat generation at the pick-rock interface is the frictional force at the flank face of the pick; it has therefore been plotted against the pick-rock interface temperature for the numerical and experimental results (Fig. 10). As observed from Fig. 10, both results show a linear trend. In general, across all the temperature prediction runs, the numerical results were up to approximately 25% higher than their experimental counterparts. In the numerical model it is assumed that all of the frictional work done at the flank face of the tool is converted into heat, which may be the reason for the overestimation.
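The build-up-then-stabilize trend of Section 6.1 can be reproduced qualitatively with a lumped-capacitance energy balance: a constant frictional heat input against losses that grow with temperature. The parameter values below are illustrative assumptions chosen so the curve settles within about six minutes; they are not the paper's FEM inputs.

```python
def interface_temperature(q_in=15.0, loss_coeff=0.6, heat_cap=30.0,
                          t_ambient=30.0, dt=0.5, duration=360.0):
    """Explicit-Euler integration of m*c*dT/dt = Q_in - h*(T - T_ambient)."""
    temp, history = t_ambient, []
    for _ in range(int(duration / dt)):
        temp += (q_in - loss_coeff * (temp - t_ambient)) / heat_cap * dt
        history.append(temp)
    return history

# steady state: T_ambient + Q_in / loss_coeff = 30 + 25 = 55 deg C
```

The early rise is steep because losses are still small; as the temperature approaches the steady state, dissipation balances generation and the curve flattens, mirroring Fig. 7.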
However, the errors incurred in the experimental drag-cutting, and in observation with a thermocouple-type temperature measurement system, cannot be totally ignored. Martin and Fowell (1997) measured the pick-rock interface temperature using thermocouples as well as an infra-red gun and found that the latter recorded higher temperature values. The error incurred may also be partly due to the two-dimensional approximation of drag-cutting.

6.3. Temperature variation along the rake face and flank face

Fig. 11 shows the temperature variation in its stabilized state (after 6 min of continuous cutting) along the rake face and flank face of the tool. Both curves are plotted starting from the tip of the cutter, which is the intersection point of the two faces. As frictional heat is generated at the wear land of the flank face, the temperature rises along the wear land, reaches a maximum approximately at the mid-point of the wear land (hf), and drops rapidly towards the flank side of the cutter, as shown in Fig. 11. Since no heat is generated at the rake face during cutting, the temperature falls along the rake face from the tool edge. This indicates that the temperature concentrates at the worn-out portion of the flank face (the wear land), which comes into direct contact with the rock.

7. Conclusions

A general-purpose finite element program has been developed to study the temperatures attained during pick-rock interaction. The model has been used to predict pick-rock interface temperatures as well as the temperature profile of the whole pick-rock system. The transient heat transfer modelling showed that the temperature builds up steeply at the onset of cutting and stabilizes within a few minutes of continuous pick-rock contact.
This trend has been validated by experimental observations. The results obtained from the numerical model prove the direct effect of the rock cutting parameters, namely frictional force and cutting velocity, on the temperature rise at the pick-rock interface. This has been validated by the linearly increasing trends observed between the stabilized interface temperature and the rock cutting parameters. The current study has dealt with continuous drag-cutting, both numerically and experimentally. However, the transient finite element program developed can be modified to predict the temperature rise in the pick during intermittent cutting, which is what mostly occurs with real-life cutter picks used in road headers and shearers. With prior knowledge of the frictional forces acting in the pick-rock system during intermittent cutting, this modification can be made by suppressing the heat generation terms and adding convective heat transfer terms at the pick-rock interface nodes when the pick leaves contact with the rock, and by re-initializing the rock domain temperatures and re-introducing the heat generation terms upon re-contact. Since the experimental setup used in the current study was not designed for intermittent cutting, experimental data could not be obtained for validation, and intermittent cutting was therefore not dealt with in this paper. Further, a more detailed three-dimensional model may be required to reduce the errors and bring the results closer to realistic temperature values.

References

Barbish, A.B., Gardner, G.H.F., 1969. The effect of heat on some mechanical properties of igneous rocks. ASME J. Soc. Petr. Eng. 9, 395-402.

Cools, P.C.B.M., 1993. Temperature measurements upon the chisel surface during rock cutting. Int. J. Rock Mech. Min. Sci. Geomech. 30, 25-35.

De Vries, M.F., 1966. Investigation of drill temperature as a drilling performance criterion. Ph.D. thesis, University of Wisconsin, USA.

Detournay, E., Defourny, P., 1992.
A phenomenological model for the drilling action of drag bits. Int. J. Rock Mech. Min. Sci. 29, 13-23.

Estes, J.C., 1978. Techniques of pilot scale drilling research. ASME J. Pressure Vessels Technol. 100, 188-193.

Gray, K.E., Armstrong, F., Gatlin, C., 1962. Two-dimensional study of rock breakage in drag-bit drilling at atmospheric pressure. J. Petrol. Technol., 93-98.

Karfakis, M.G., Heins, R.W., 1966. Laboratory investigation of bit bearing temperatures in rotary drilling. ASME J. Energy Resourc. Tech. 108, 221-227.

Loui, J.P., Rao, K.U.M., 1997. Experimental investigations of pick-rock interface temperature in drag-pick cutting. Indian J. Eng. Mater. Sci. 4, 63-66.

Loui, J.P., 1998. Finite element simulation and experimental investigation of drag-cutting in rocks. Ph.D. thesis, Indian Institute of Technology, Kharagpur, India.

Martin, J.A., Fowell, R.J., 1997. Factors governing the onset of severe drag tool wear in rock cutting. Int. J. Rock Mech. Min. Sci. 34, 59-69.

Nishmatsu, Y., 1972. The mechanics of rock cutting. Int. J. Rock Mech. Min. Sci. 9, 261-272.

Plis, M.N., Wingquist, C.F., Roepke, W.W., 1988. Preliminary Evaluation of the Relationship of Bit Wear to Cutting Distance, Forces and Dust Using Selected Commercial and Experimental Coal and Rock Cutting Tools. USBM, RI-9193, p. 63.

Rao, K.U.M., Misra, B., 1994. Design of a spoked wheel dynamometer. Int. J. Surf. Mining Recl. 8, 146-147.

Roxborough, F.F., 1969. Rock cutting research. Tunnels Tunnelling 1, 125-128.

Shih, T.M., 1984. Numerical Heat Transfer. Hemisphere/Springer, Washington/New York, p. 563.

Graduation Project Literature Translation (Final Version)


Graduation Project Literature Translation — School of Electronic and Electrical Engineering. Student ID: 021309208. Name: Wu Xiaoyi. Supervisor: Zeng Guohui. Completed: 2013/2/15.

A Flexible LED Driver for Automotive Lighting Applications: IC Design and Experimental Characterization

Abstract — This letter presents a smart driver for LEDs, particularly for automotive lighting applications, which avoids ringing and overshoot phenomena. To this aim, advanced Soft Start and Current Slope Control techniques are integrated on-chip. The letter discusses the driver design, integrated in a high-voltage CMOS technology, including the digital circuitry for programming and for interfacing with electronic control units, and the power devices, rated up to 10 W. Experimental characterization with LEDs of different power ratings and with different types of connection is also presented.
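The soft-start idea named in the abstract, ramping the LED current reference instead of applying the full set-point at once, can be sketched as follows. The step count, target current, and function name are assumed illustrative values, not parameters of the actual IC.

```python
def soft_start_profile(i_target_ma=350.0, steps=16):
    """Return a linear ramp of current set-points (mA), ending at the target current."""
    return [i_target_ma * (k + 1) / steps for k in range(steps)]

# a driver would apply these set-points one per switching period, so the LED
# current rises gradually and output ringing and overshoot are limited
```

Current slope control generalizes the same idea to every set-point change, bounding dI/dt both upward and downward.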

Graduation Thesis Translation (Translation + Source Text)


Hacking Tricks Toward Security on Network Environments

Tzer-Shyong Chen (1), Fuh-Gwo Jeng (2), and Yu-Chia Liu (1)
1 Department of Information Management, Tunghai University, Taiwan
2 Department of Applied Mathematics, National Chiayi University, Taiwan
E-Mail: ****************.edu.tw

Abstract

The mounting popularity of the Internet has led to the birth of Instant Messaging, an up-and-coming form of Internet communication. Instant Messaging is very popular with businesses and individuals because of its instant communication ability; as a result, Internet security has become a pressing and important topic, and in recent years much attention has been drawn to the various attacks carried out by hackers over the Internet. People today often handle their affairs via the Internet: instead of conventional letters, they communicate by e-mail; they chat with friends through instant messengers; they find information by browsing websites instead of going to the library; they perform e-commerce transactions over the Internet; and so on. Although the convenience of the Internet makes our lives easier, it also threatens Internet security. For instance, a business e-mail intercepted during transmission may leak business secrets; file transfers via instant messengers may be intercepted and implanted with backdoor malware; conversations via instant messengers can be eavesdropped upon; and theft of IDs and passwords may cost us money when using Internet banking services. Attackers use hacking tricks to damage systems while users are connected to the Internet. These threats, along with the possible careless disclosure of business information, make Instant Messaging a very unsafe method of communication for businesses. This paper divides hacking tricks into three categories: (1) Trojan programs that share files via instant messenger; (2) phishing, or fraud via e-mail; (3) fake websites.
Keywords: hacking tricks, Trojan programs, phishing, firewall, intrusion detection system.

1. Introduction

Increasingly, people use instant messengers such as MSN Messenger, Yahoo! Messenger, and ICQ as their medium of communication. These instant messengers transmit alphanumeric messages and also permit file sharing. During transfer, a file may be intercepted by a hacker and implanted with backdoor malware. Moreover, the e-mails users receive every day may include spam, advertisements, and fraudulent mail intended to trick uninformed users. Fake websites are also prevalent: websites we often visit can be counterfeited by imitating the interface and URL of the original, tricking users. This paper classifies hacking tricks into three categories, explained in the following sections.

2. Hacking Tricks

The paper divides hacking tricks into three categories: (1) Trojan programs that share files via instant messenger; (2) phishing; (3) fake websites.

2.1 Trojan programs that share files via instant messenger

Instant messaging allows file sharing on a computer [9]. All popular instant messengers today have file sharing abilities, or allow users to add them by installing patches or plug-ins, and this is a major threat to information security. Such communication software also makes it difficult for existing hack-prevention methods to control information security. We shall therefore discuss how to control the flow of instant messages and how to identify dangerous user behavior. Hackers use the instant communication capability to plant a Trojan program on an unsuspecting user's computer; the planted program is a kind of remotely controlled, unauthorized hacking tool that can conceal itself. The Trojan program is unknowingly executed and controls the infected computer; it can read, delete, move, and execute any file on the computer.
The advantages for a hacker of using instant messengers, rather than remotely installed backdoor Trojan programs [1], to access files are:

When the victim goes online, the hacker is informed. Thus, a hacker can track and access the infected computer and steal user information continuously.

The hacker need not open a new port for transmissions; he can operate through the already opened instant messenger port.

Even if a computer uses dynamic IP addresses, its screen name does not change.

Certain Trojan programs are designed especially for instant messengers. These Trojans can change group settings and share all files on the hard disk of the infected computer. They can also destroy or modify data, causing data disarray. This kind of program gives a hacker access to all files on an infected computer, and thus poses a great threat to users. The Trojan program also takes up a large share of the computer's resources, making it very slow and causing it to crash frequently without apparent reason.

Trojan programs that access a user's computer through an instant messenger are probably harder to detect than classic Trojan horse programs. Although a classic Trojan intrudes into a computer by opening a listening or outgoing port used to connect to a remote computer, a desktop firewall can effectively block such Trojans. By contrast, since it is very difficult for a server-side firewall to spot intrusions carried in an instant messenger's traffic, such systems are extremely susceptible to intrusion.

Present Trojan programs have already successfully exploited instant messengers. Examples include Backdoor Trojans, AIMVision, and Backdoor.Sparta.C. Backdoor Trojans use the ICQ pager to send messages to their writer. AIMVision steals AIM-related information stored in the Windows registry, enabling a hacker to set up an AIM user id. Backdoor.Sparta.C uses ICQ to communicate with its writer, opens a port on the infected host, sends the host's IP address to the hacker, and at the same time attempts to terminate the host's antivirus program or firewall.

2.1.1 Hijacking and impersonation

There are various ways in which a hacker can impersonate other users [7]. The most commonly used method is eavesdropping on unsuspecting users to retrieve user accounts, passwords, and other user-related information. The theft of user account numbers and related information is a very serious problem in any instant messenger. For instance, a hacker who has stolen a user's information can impersonate the user; the user's contacts, not knowing that the account has been hacked, believe that the person they are talking to is the user and can be persuaded to execute certain programs or reveal confidential information. Hence, theft of user identity endangers not only the user but also those around him. Guarding against such Internet security problems is the focus of ongoing research, because without good protection a computer can easily be attacked, causing major losses.

Hackers wishing to obtain user accounts may do so with the help of Trojans designed to steal passwords. If an instant messenger client stores the password on the computer, a hacker can send a Trojan program to the unsuspecting user; when the user executes it, the program searches for the password and sends it to the hacker. A Trojan program can send messages back to the hacker in several ways, including instant messenger, IRC, and e-mail.

None of the four currently most popular instant messengers, AIM, Yahoo! Messenger, ICQ, and MSN Messenger, encrypts its traffic. Therefore, a hacker can use a man-in-the-middle attack to hijack a connection, then impersonate the hijacked user and participate in a chat session.
Although difficult, a hacker can use a man-in-the-middle attack to hijack the connection entirely. For example, a user may receive an offline message that resembles one sent by the server, but the message could in fact have been sent by the hacker. The user could also be disconnected from the server at any moment. Furthermore, hackers may use a Denial of Service (DoS) tool or other unrelated exploits to break the user's connection; the server, however, keeps the connection open and does not know that the user has been disconnected, thus allowing the hacker to impersonate the user. Moreover, since the data flow is unencrypted and unauthenticated, a hacker can mount man-in-the-middle attacks similar to ARP spoofing to achieve his purpose.

2.1.2 Denial of Service (DoS)
There are many ways through which a hacker can launch a denial of service (DoS) attack [2] on an instant messenger user. A partial DoS attack will cause the client to hang, or use up a large portion of CPU resources and leave the system unstable.

Another commonly seen attack is flooding a particular user with messages. Most instant messengers allow the blocking of a particular user to prevent flood attacks. However, a hacker can use tools that log in under several different identities at the same time, or automatically create a large number of new user ids, and so mount a flood attack anyway. Once a flood attack begins, even if the user realizes that his computer is under attack, the computer may no longer be able to respond; the problem therefore cannot be solved simply by putting one hacker's user id on the instant messenger's ignore list.

A DoS attack on an instant messenger client is only a common hacking tool, but the difficulty of taking precautions against it can turn it into a dangerous DoS-type attack.
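The flooding scenario above can be made concrete. The sketch below is a minimal, hypothetical per-sender rate limiter in Python (not drawn from any real messenger): a single flooding id is throttled quickly, but an attacker who rotates freshly created ids slips past a purely per-sender limit, which is exactly why ignore lists alone do not stop flood attacks.

```python
import time
from collections import defaultdict, deque

class FloodDetector:
    """Per-sender sliding-window message counter (illustrative only)."""
    def __init__(self, max_msgs=20, window=10.0):
        self.max_msgs = max_msgs
        self.window = window
        self.history = defaultdict(deque)  # sender id -> recent timestamps

    def allow(self, sender, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[sender]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_msgs:
            return False  # this sender is flooding
        q.append(now)
        return True

det = FloodDetector(max_msgs=5, window=10.0)
# A single flooding id is caught quickly...
single = [det.allow("attacker", now=t * 0.1) for t in range(10)]
# ...but rotating ids defeats a purely per-sender limit.
rotated = [det.allow(f"bot{t}", now=t * 0.1) for t in range(10)]
print(single.count(False), rotated.count(False))  # → 5 0
```

A practical defense therefore also has to rate-limit on signals the attacker cannot cheaply rotate (connection origin, account age), not on the screen name alone.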
Moreover, some hacking tools do not just cause an instant messenger client to hang; they also make the client consume a large amount of CPU time, causing the computer to crash.

2.1.3 Information Disclosure
Retrieving system information from instant messenger users is currently the most commonly used hacking technique [4]. It can effortlessly collect user network information such as the current IP address, port, etc. An IP address retriever is one example. IP address retrievers can be used for many purposes; for instance, a Trojan integrated with an IP address retriever lets a hacker receive all information related to the infected computer's IP address as soon as the infected computer connects to the Internet. Therefore, even if the user has a dynamic IP address, hackers can still retrieve it.

IP address retrievers and other similar tools can also be used by hackers to send data and Trojans to unsuspecting users. Hackers may likewise persuade unsuspecting users to execute files through social engineering or other unrelated exploits. When executed, these files search for information on the user's computer and send it back to the hacker through the instant messenger network.

Different Trojan programs were designed for different instant messaging clients. For example, with a Trojan that steals user accounts and passwords, a hacker can take full control of the account once the user logs out. The hacker can then perform various tasks, such as changing the password and sending the Trojan program to all of the user's contacts.

Moreover, Trojans are not the only way a hacker can cause information disclosure. Since data sent through instant messengers is unencrypted, hackers can sniff and monitor entire instant messaging transmissions. Suppose an employee of an enterprise sends confidential information of the enterprise through the instant messenger; a hacker monitoring the session can retrieve everything the employee sent.
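As an illustration of why dynamic IP addresses offer little protection, the sketch below shows one standard way any program running on a machine can learn the host's current outbound IP address. Calling connect() on a UDP socket sends no packets; it merely asks the OS to select a route. The target here is a reserved TEST-NET-2 address and is never actually contacted.

```python
import socket

def local_ip() -> str:
    """Return the local address the OS would use for outbound traffic.
    connect() on a UDP socket sends nothing; it only selects a route."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("198.51.100.1", 53))  # TEST-NET-2; never contacted
        return s.getsockname()[0]
    except OSError:
        return "0.0.0.0"  # no route at all (e.g. fully offline machine)
    finally:
        s.close()

print(local_ip())
```

A Trojan that runs this on every connection and reports the result over the already-open messenger channel is all an "IP address retriever" needs to be.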
Thus, we must face up to the severity of the problem.

2.2 Phishing
The word "phishing" first appeared in 1996. It is a variant of 'fishing', formed by replacing the 'f' in 'fishing' with the 'ph' from phone, and refers to tricking users out of their money through e-mails.

Based on the statistics of the Internet Crime Complaint Center, losses due to Internet scams were as high as $1.256 million USD in 2004. The Internet Crime Complaint Center has listed the Nigerian Internet scam as one of the ten major Internet scams.

Based on the latest report of the Anti-Phishing Working Group (APWG) [8], there has been a 28% growth in phishing scams in the past 4 months, mostly in the US and in Asia. Because these scams combine social engineering and Trojans, it is very difficult for a common user to detect them.

To avoid exploitation of your compassion, the following should be noted:
(1) When you need to enter confidential information, first make sure that the information is entered via an entirely secure and official webpage. There are two ways to determine the security of the webpage:
    a. The address displayed on the browser begins with https://, and not http://. Pay attention to whether the letter 's' exists.
    b. There is a security lock sign in the lower right corner of the webpage, and when your mouse points to the sign, a security certification sign appears.
(2) Consider installing browser security software, such as SpoofStick, which can detect fake websites.
(3) If you suspect that a received e-mail is a phishing e-mail, do not open the attachments attached to it. Opening an unknown attachment could install malicious programs onto your computer.
(4) Do not click on links attached to your e-mails. It is always safer to visit the website through the official link, or to first confirm the authenticity of the link. Never follow or click on suspicious links in an e-mail.
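Item (1)a of the checklist can be partially automated. The following Python sketch checks that a URL uses the https scheme and that its hostname matches an allowlist; the bank hostnames are hypothetical placeholders, and a real check would additionally validate the server's certificate rather than trust the scheme alone.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of a bank's official hostnames.
OFFICIAL_HOSTS = {"www.examplebank.com", "login.examplebank.com"}

def looks_safe(url: str) -> bool:
    """Return True only for https URLs whose host is on the allowlist."""
    parts = urlparse(url)
    if parts.scheme != "https":       # item (1)a: the 's' must be present
        return False
    return parts.hostname in OFFICIAL_HOSTS

print(looks_safe("https://www.examplebank.com/login"))  # official, encrypted
print(looks_safe("http://www.examplebank.com/login"))   # no TLS
print(looks_safe("https://www.examp1ebank.com/login"))  # lookalike host
```

Note that the third case fails only because of the allowlist: the lookalike site could perfectly well serve valid https, which is why the scheme check alone is not sufficient.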
It is advisable to enter the URL in the address bar of the web browser yourself, rather than follow the given link.

Generally speaking, phishing [3][5] is a method that exploits people's sympathy in the form of aid-seeking e-mails; the e-mail acts as bait. These e-mails usually request that their readers visit a link that seemingly leads to some charitable organization's website, but in truth leads to a website that installs a Trojan program on the reader's computer. Therefore, users should not forward unauthenticated charity mails, or click on unfamiliar links in an e-mail. Sometimes the link could be a very familiar one, or an often-frequented website, but even so it is safer to type in the address yourself so as to avoid being led to a fraudulent website. Phishers delude people with e-mails resembling those mailed by well-known enterprises or banks; these e-mails often ask users to provide personal information, or warn that they will otherwise lose certain rights; they usually contain a counterfeit URL linking to a website where the users can fill in the required information. People are often trapped by phishing through simple inattention.

Besides, you must also be careful when using a search engine to search for donations and charitable organizations.

2.3 Fake Websites
Fake bank websites that steal account numbers and passwords have become increasingly common with the growth of online financial transactions. Hence, when using online banking, we should take precautions such as using a securely encrypted customer certificate and following the correct procedure when surfing the net.

There are countless kinds of phishing baits, for instance messages saying that data has expired or is invalid, asking you to update data, or requesting identity verification, all intended to steal the account ID and matching password. This type of online scam is difficult for users to identify.
As scam methods become more refined, the e-mails and forged websites created by the impostor closely resemble the originals, and tremendous losses arise from the illegal transactions.

The following are methods commonly used by fake websites. First, the scammers create a homepage similar to the real one; then they send out e-mails with enticing messages to attract visitors. They may also use fake links to lead Internet surfers to their website. Next, the fake website tricks the visitors into entering their personal information, credit card information, or online banking account numbers and passwords. After obtaining a user's information, the scammers can use it to drain bank accounts, shop online, create fake credit cards, and commit other similar crimes. Usually there is a quick search option on these fake websites, luring users to enter their account number and password; when a user does so, the website responds with a message stating that the server is under maintenance. Hence, we must observe the following when using online banking:
(1) Observe the correct procedure for entering a banking website. Do not use links resulting from searches or links on other websites.
(2) Online banking certifications are currently the most effective security safeguard measure.
(3) Do not easily trust e-mails, phone calls, short messages, etc. that ask for your account number and passwords.

Phishers often impersonate a well-known enterprise when sending their e-mails, changing the sender's e-mail address to that of the well-known enterprise in order to gain people's trust. The 'From' column of an e-mail is set by the mail software and can easily be changed by whoever composes the message.
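The last point is easy to demonstrate: the From header is just a text field filled in by whatever software composes the message. The sketch below builds (but does not send) a message with Python's standard email library using an arbitrary From address; nothing in the message itself proves who really sent it, which is why verification mechanisms such as SPF and DKIM exist. The addresses are hypothetical examples.

```python
from email.message import EmailMessage

# Constructing (not sending) a message: the From header is merely a
# field the composer fills in, so it proves nothing about the sender.
msg = EmailMessage()
msg["From"] = "support@well-known-bank.example"   # arbitrary, unverified
msg["To"] = "victim@example.org"
msg["Subject"] = "Please verify your account"
msg.set_content("Click the link below ...")

print(msg["From"])  # → support@well-known-bank.example
```

A receiving mail server that checks SPF or DKIM can reject such a message; a human reading only the displayed sender name cannot.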
Then, the phisher creates a fake information-input website and sends out e-mails containing a link to this fake website, to lure recipients into visiting it.

Most phishers create imitations of well-known enterprises' websites to lure users in. Even so, a user can easily notice that the URL of the website they are entering has no relation to the intended enterprise. Hence, phishers use various methods to impersonate enterprises and other people. A commonly used method is hiding the real URL, which can easily be done with the help of JavaScript.

Another way is to exploit loopholes in an Internet browser, for instance by displaying a fake URL in the browser's address bar. A security loophole causing the address bar to display a fake URL is a commonly used trick and has often been exploited in the past. For example, an e-mail in HTML format may show the URL of a well-known enterprise's website while the link in reality connects to a fake website.

The key to successfully using a URL similar to that of the intended website is to trick the visual senses. For example, the sender's address could be disguised as that of Nikkei BP, and the link set to http://www.nikeibp.co.jp/, which has one 'k' fewer than the correct URL, http://www.nikkeibp.co.jp/. The two URLs look very similar and the difference is barely noticeable, so people are easily tricked into clicking the link.

Besides the above, there are many more scams that exploit tricks of the visual senses. Therefore, you should not readily trust the given sender's name or a website's appearance. Never click on unfamiliar or suspicious URLs on a webpage, and never enter personal information into a website without careful scrutiny.

3. Conclusions
Business strategy is the most effective form of defense and also the easiest to carry out. Therefore, it should be the first line of defense, not the last.
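The one-character nikeibp/nikkeibp trick described above can be caught mechanically. The sketch below flags a hostname as suspicious when its edit distance to a known legitimate host is small but nonzero; the Levenshtein function is a textbook dynamic-programming implementation, and the known-host list is a hypothetical example of what a browser plug-in might carry.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

KNOWN = ["www.nikkeibp.co.jp"]

def suspicious(host: str, max_dist: int = 2) -> bool:
    """Close to a known host but not identical: likely a lookalike."""
    return any(0 < levenshtein(host, k) <= max_dist for k in KNOWN)

print(suspicious("www.nikeibp.co.jp"))   # one 'k' missing → True
print(suspicious("www.nikkeibp.co.jp"))  # exact match → False
```

Edit distance does not catch homograph attacks that substitute visually identical characters from other alphabets, so it is a complement to, not a replacement for, certificate checks.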
First, determine whether instant messaging is essential to the business, then weigh its pros and cons. If it is decided that the business cannot do without instant messaging, rules and norms must be set for the user ends. The server should support functions such as centralized logging and encryption; if not, strict rules must be drawn up and followed by the users. In particular, business discussions must not be conducted over an instant messenger.

This paper categorized hacking tricks into three categories: (1) Trojan programs that share files via instant messenger; (2) phishing; (3) fake websites. Hacking tricks, when successfully carried out, can cause considerable loss and damage to users. The first category of hacking tricks can be further divided into three types: (1) hijacking and impersonation; (2) denial of service; (3) information disclosure.

Acknowledgement: This work was supported by the National Science Council, Taiwan, under contract No. NSC 95-2221-E-029-024.

References
[1] B. Schneier, "The Trojan horse race," Communications of the ACM, Vol. 42, 1999, p. 128.
[2] C. L. Schuba, "Analysis of a denial of service attack on TCP," IEEE Security and Privacy Conference, 1997, pp. 208-223.
[3] E. Schultz, "Phishing is becoming more sophisticated," Computers and Security, Vol. 24(3), 2005, pp. 184-185.
[4] G. Miklau, D. Suciu, "A formal analysis of information disclosure in data exchange," International Conference on Management of Data, 2004, pp. 575-586.
[5] J. Hoyle, "'Phishing' for trouble," Journal of the American Dental Association, Vol. 134(9), 2003, p. 1182.
[6] J. Scambray, S. McClure, G. Kurtz, Hacking Exposed: Network Security Secrets and Solutions, McGraw-Hill, 2001.
[7] T. Tsuji and A. Shimizu, "An impersonation attack on one-time password authentication protocol OSPA," IEICE Trans. Commun., Vol. E86-B, No. 7, 2003.
[8] Anti-Phishing Working Group.
[9] /region/tw/enterprise/article/icq_threat.html

Hacker Techniques in the Network Security Environment
Abstract: Nowadays, people often conduct their business over the Internet.

Xidian University Undergraduate Thesis Format Requirements


Part One: Xidian University Undergraduate Graduation Design (Thesis) Writing Standards

I. General requirements for the graduation design (thesis): the thesis should be written concisely and is generally no fewer than 15,000 characters (foreign-language majors may shorten it appropriately, but to no fewer than 10,000 words, and it must be written entirely in the foreign language).

II. Layout of the graduation design (thesis): the format and page layout of every chapter and section must be uniform, with a clear hierarchy.

Specifically: 1. Paper: use A4 throughout, consistent with the thesis cover, the task statement, the work plan, and the grade assessment form.

2. Chapter headings: e.g. "Abstract", "Contents", "Chapter 1", "Appendix"; boldface (SimHei), size 3 (三号), centered.

3. Section headings: e.g. "2.1 Authentication Scheme", "9.5 Summary"; SimSun (宋体), size 4 (四号), centered.

4. Body text: Chinese in SimSun, English in Times New Roman, small size 4 (小四号).

Figure titles and table titles in the body text: SimSun, size 5 (五号).

5. Page header: SimSun, size 5, centered.

The left-page header carries the thesis title; the right-page header carries the chapter number and chapter title.

The rule under the header is 0.75 pt wide.

6. Page numbers: SimSun, small size 5 (小五号), placed at the outermost edge of the header line, without any decoration.

III. Front matter of the graduation design (thesis): the front matter includes the cover, the Chinese and English abstracts, the table of contents, etc.

1. Cover and print format: (1) Student number: print your student number correctly in the upper-right corner according to the university's unified numbering; SimSun, small size 4, bold.

(2) Title: the title must be identical to that on the task statement; SimHei, size 3.

(3) School, major, class, student name, and advisor name and title: SimSun, small size 3 (小三号), centered.

2. Chinese and English abstracts and keywords: the abstract is a brief statement of the thesis content without annotation or commentary; it is independent and self-contained.

It mainly explains, in brief, the purpose, methods, results, and conclusions of the research, with emphasis on the thesis's achievements and new insights.

Keywords are terms selected from the thesis for literature-indexing purposes to represent the subject content of the full text.

(1) Chinese abstract: SimSun, small size 4, generally about 300 characters; English abstract: Times New Roman, small size 4, generally about 300 content words.

Formulas, non-standard symbols, terminology, and the like should not appear in the abstract.

(2) Each thesis selects 3 to 5 keywords; Chinese in SimHei, small size 4; English in bold Times New Roman, small size 4.

Graduation Design, Department of Mechanical and Electrical Engineering: Chinese-English Translation


English translation

The E Behind Everything
Electricity and magnetism run nearly everything we plug in or turn on. Although it's something we take for granted, it has taken hundreds of years of experimentation and research to reach the point where we flick a switch and the lights go on.

People have known about electricity for a long time. Ancient Greeks noticed that if they rubbed a piece of amber, feathers would stick to it. You've experienced a similar thing if you've ever had your hair stick up straight after you combed it, or had your socks stick together when you removed them from the drier. This is called static electricity, but back then nobody knew how to explain it or what to do with it.

Experiments using friction to generate static electricity led to machines that could produce large amounts of static electricity on demand. In 1660 the German Otto von Guericke made the first electrostatic generator with a ball of sulfur and some cloth. The ball symbolized the earth, and he believed that this little replica of the earth would shed part of its electric "soul" when rubbed. It worked, and now scientists could study electric shocks and sparks whenever they wanted.

As scientists continued to study electricity, they began thinking of it as an invisible fluid and tried to capture and store it. One of the first to do this was Pieter van Musschenbroek of Leyden, Holland. In 1746 he wrapped a water-filled jar with metal foil and discovered that this simple device could store the energy produced by an electrostatic generator. The device became known as the Leyden jar. Leyden jars were very important in other people's experiments, such as Benjamin Franklin's famous kite experiment. Many people suspected that lightning and static electricity were the same thing, since both crackled and produced bright sparks. In 1752 Franklin attached a key to a kite and flew it in a storm-threatened sky. (NOTE that Franklin did not fly a kite in an actual storm. NEVER do that!) When a thundercloud moved by, the key sparked.
This spark charged the Leyden jars and proved that lightning was really electricity. Like many experimenters and scientists, Franklin used one discovery to make another. Franklin was not the only scientist inspired to conduct experiments with electricity. In the 1780s, the Italian scientist Luigi Galvani made a dead frog's leg move by means of an electric current. Galvani called this "animal electricity." He thought that the wet animal tissue generated electricity when it came in contact with metal probes. He even suggested that the soul was actually electrical.

The Italian Alessandro Volta was skeptical of Galvani's conclusions. In 1799 he discovered that it wasn't animal tissue alone producing the electric current at all. Volta believed that the current was actually caused by the interaction of water and chemicals in the animal tissue with the metal probes. Volta stacked metal disks separated by layers of cardboard soaked in salt water. This so-called voltaic pile produced an electric current without needing to be charged like a Leyden jar. The invention is still around today, but we call it the battery.

Volta's pile was a lot different from the batteries you put in your Discman. It was big, ugly, and messy, but it worked, making Volta the first person to generate electricity with a chemical reaction. His work was so important that the term volt, the unit of electrical tension or pressure, is named in his honor. As for Galvani, although he was proven wrong, his work stimulated research on electricity and the body. That research eventually proved that nerves do carry electrical impulses, an important medical discovery.

Like electricity, magnetism was baffling to the earliest researchers. Today manufactured magnets are common, but in earlier times the only available magnets were rare and mysterious rocks with an unexplainable attraction for bits of iron. Explanations of the way they work sound strange today. For example, in 1600 the English doctor William Gilbert published a book on magnetism.
He thought that these strange substances, called "lodestones," had a soul that accounted for the attraction of a lodestone to iron and steel. The only real use for lodestones was to make compasses, and many thought the compass needle's movement was a response to its attraction to the earth's "soul."

By 1800, after many years of study, scientists began wondering whether these two mysterious forces, electricity and magnetism, were related. In 1820 the Danish physicist Hans Oersted showed that whenever an electric current flows through a wire, it produces a magnetic field around the wire. The French mathematician André-Marie Ampère used algebra to come up with a mathematical formula describing this relationship between electricity and magnetism. He was one of the first to develop measuring techniques for electricity. The unit of current, the ampere, abbreviated as amp or A, is named in his honor.

Groundbreaking experiments in electromagnetism were conducted by the British scientist Michael Faraday. He showed that when you move a loop of wire in a magnetic field, a little bit of current flows through the loop for just a moment. This is called induction. Faraday constructed a different version of the apparatus called the induction ring. In later years, engineers would use the principle of the induction ring to build electrical transformers, which are used today in thousands of electrical and electronic devices. Faraday also invented a machine that kept a loop of wire rotating near a magnet continuously. By touching two wires to the rotating loop, he could detect the small flow of electric current. This machine used induction to produce a flow of current as long as it was in motion, and so it was an electromagnetic generator. However, the amount of electricity it produced was very tiny. There was still another use for induction: Faraday also created a tiny electric motor, too small to do the work of a steam engine but still quite promising.
For thousands of years electricity and magnetism were subjects of interest only to experimenters and scientists. Nobody thought of a practical way of using electricity before the 1800s, and it was of little interest to most people. But by Faraday's time inventors and engineers were gearing up to transform scientific concepts into practical machines.

Telegraphs and Telephones
One of the most important ways that electricity and magnetism have been put to use is in making communication faster and easier. In this day of instant messaging, cell phones, and pagers, it's hard to imagine a time when messages had to be written and might spend weeks or even months reaching their destination. They had to be carried great distances by ship, wagon, or even horseback; you couldn't just call somebody up to say hello. That all changed when inventors began using electricity and magnetism to find better ways for people to talk to each other.

The telegraph was first conceived of in the 1700s, but few people pursued it. By the 1830s, however, advancements in the field of electromagnetism, such as those made by Alessandro Volta and Joseph Henry, created new interest in electromagnetic communication. In 1837, the English scientist Charles Wheatstone opened the first commercial telegraph line between London and Camden Town, a distance of 1.5 miles. Building on this, Samuel Morse, an American artist and inventor, designed a line to connect Washington, DC and Baltimore, Maryland in 1844. Morse's telegraph was a simple device that used a battery, a switch, and a small electromagnet, but it allowed people miles apart to communicate instantly. Although Morse is often credited with inventing the telegraph, his greatest contribution was actually Morse code, a special language designed for the telegraph. Morse's commercialization of the telegraph spread the technology quickly. In 1861 California was connected to the rest of the United States with the first transcontinental telegraph line.
Five years later, engineers found a way of spanning the Atlantic Ocean with telegraph lines, thus connecting the United States and Europe. This was an enormous and challenging job. To do it, engineers had to use a huge ship called The Great Eastern to lay the cable across the ocean; it was the only ship with enough room to store all that cable. The world was connected by wire before the nation was connected by rail: the transcontinental railroad wasn't completed until 1869! The telegraph was the key to fast, efficient railroad service. The railroads and the telegraph expanded side by side in the late 1800s, crisscrossing every continent except Antarctica. In the late 19th and early 20th centuries, telegraphy became a very lucrative business for companies such as Western Union. It also provided women with new career options.

As convenient as the telegraph was, people dreamt of hearing the voices of loved ones who lived far away. Pretty soon, another instrument for communicating across distances was invented. Alexander Graham Bell, a teacher and inventor, worked with the deaf and became fascinated with the study of sound. In 1875, Bell discovered a way to convert sound waves into an undulating current that could be carried along wires. This helped him invent the telephone. The first phone conversation was an inadvertent one between Bell and Watson, his assistant in the next room. After spilling some acid, Bell said, "Mr. Watson, come here. I want you." He patented his device the same year.

Early phone service wasn't as portable and convenient as today's. At first, telephones were connected in pairs: you could call only one person, and they could only call you. The telephone exchange changed all that. The first exchange opened in New Haven, Connecticut in 1878, allowing the people who subscribed to it to call one another. Operators had to connect the calls, but in 1891 an automatic exchange was invented. Some problems had to be solved, though, before long-distance telephoning could work.
The main one was that the signal weakened with distance, disappearing if the telephone lines were too long. A solution was found in 1912 with a way to amplify electrical signals, and transcontinental phone calls became possible. A test took place in 1914, and the next year Bell, who was in New York, called Watson, who was in San Francisco. He said the same thing he had said during the first phone conversation. Watson's answer? "It will take me five days to get there now!"

PLC Development
1.1 Motivation
Programmable Logic Controllers (PLCs), computing devices invented by Richard E. Morley in 1968, have been widely used in industry, including manufacturing systems, transportation systems, chemical process facilities, and many others. At that time, the PLC replaced hardwired logic with soft-wired logic, or so-called relay ladder logic (RLL), a programming language visually resembling the hardwired logic, and thereby reduced the configuration time from 6 months down to 6 days [Moody and Morley, 1999].

Although PC-based control has started to come into place, PLC-based control will remain the technique to which the majority of industrial applications adhere, due to its higher performance, lower price, and superior reliability in harsh environments. Moreover, a study on the PLC market by Frost and Sullivan [1995] predicted an increase of the annual sales volume to 15 million PLCs per year, with a hardware value of more than 8 billion US dollars, even though the price of computing hardware is steadily dropping. The inventor of the PLC, Richard E. Morley, fairly considers the PLC market a 5-billion-dollar industry at the present time.

Though PLCs are widely used in industrial practice, the programming of PLC-based control systems still relies very much on trial and error. Like software engineering, PLC software design is facing the software dilemma, or crisis, in a similar way.
Morley himself emphasized this aspect most forcefully [Moody and Morley, 1999, p. 110]: "If houses were built like software projects, a single woodpecker could destroy civilization." Practical problems in PLC programming, in particular, are eliminating software bugs and reducing the maintenance costs of old ladder logic programs. Though the hardware costs of PLCs are dropping continuously, reducing the scan time of the ladder logic is still an issue in industry, so that low-cost PLCs can be used.

In general, productivity in generating PLC software is far behind that of other domains, for instance VLSI design, where efficient computer-aided design tools are in practice. Existing software engineering methodologies are not necessarily applicable to PLC-based software design, because PLC programming requires a simultaneous consideration of hardware and software. The software design thereby becomes more and more the major cost driver. In many industrial design projects, more than 50% of the manpower allocated for the control system design and installation is scheduled for testing and debugging PLC programs [Rockwell, 1999].

In addition, current PLC-based control systems are not properly designed to support the growing demand for flexibility and reconfigurability of manufacturing systems. A further problem, impelling the need for a systematic design methodology, is the increasing software complexity in large-scale projects.

1.2 Objective and Significance of the Thesis
The objective of this thesis is to develop a systematic software design methodology for PLC-operated automation systems. The design methodology involves a high-level description based on state transition models that treat automation control systems as discrete event systems, a stepwise design process, and a set of design rules providing guidance and measurements to achieve a successful design.
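The scan time mentioned above comes from the PLC's execution model: each scan reads a snapshot of the inputs, evaluates every rung against that snapshot, then commits the outputs. The sketch below emulates this cycle in Python with a hypothetical start/stop seal-in rung; it illustrates the execution model only, not any vendor's runtime.

```python
def scan_cycle(inputs, logic, state):
    """One PLC scan: snapshot the inputs, evaluate all rungs against the
    snapshot, then commit outputs. Outputs never change mid-scan."""
    image = dict(inputs)              # input image table (snapshot)
    outputs = {}
    for coil, rung in logic.items():
        outputs[coil] = rung(image, state)
    return outputs

# Hypothetical seal-in rung: the motor runs once 'start' is pressed and
# latches on through its own contact until 'stop' opens the rung.
logic = {
    "motor": lambda i, s: (i["start"] or s["motor"]) and not i["stop"],
}
state = {"motor": False}
for inp in [{"start": True,  "stop": False},
            {"start": False, "stop": False},   # latch keeps motor on
            {"start": False, "stop": True}]:   # stop drops it out
    state = scan_cycle(inp, logic, state)
print(state["motor"])  # → False
```

Because every rung is re-evaluated on every scan, the scan time grows with program size, which is why ladder logic scan time remains a cost issue even as hardware gets cheaper.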
The tangible outcome of this research is to find a way to reduce the uncertainty in managing the control software development process: reducing programming and debugging time and their variation, increasing the flexibility of automation systems, and enabling software reusability through modularity. The goal is to overcome the shortcomings of current programming strategies, which are based on the experience of the individual software developer.

A systematic approach to designing PLC software can overcome deficiencies in the traditional way of programming manufacturing control systems, and can have wide ramifications in several industrial applications. Automation control systems are modeled by formal languages or, equivalently, by state machines. Formal representations provide a high-level description of the behavior of the system to be controlled. First, state machines can be analytically evaluated as to whether or not they meet the desired goals. Secondly, a state machine description provides a structured representation that conveys the logical requirements and constraints, such as detailed safety rules. Thirdly, well-defined control system design outcomes are conducive to automatic code generation: an ability to produce control software executable on commercial discrete logic controllers can reduce programming lead time and labor cost. In particular, the thesis is relevant with respect to the following aspects.

Customer-Driven Manufacturing
In modern manufacturing, systems are characterized by product and process innovation; they become customer-driven and thus have to respond quickly to changing system requirements. A major challenge is therefore to provide enabling technologies that can economically reconfigure automation control systems in response to changing needs and new opportunities.
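The state-machine view described above can be made concrete with a small example. The sketch below models a hypothetical pneumatic cylinder as a discrete event system with an explicit transition table; the states and events are invented for illustration, but the tabular form is what lends itself to analysis and automatic code generation.

```python
# Minimal discrete-event model of a hypothetical pneumatic cylinder.
# (state, event) -> next state; anything not listed is a no-op.
TRANSITIONS = {
    ("retracted",  "start"):     "extending",
    ("extending",  "limit_hit"): "extended",
    ("extended",   "reset"):     "retracting",
    ("retracting", "home_hit"):  "retracted",
}

def step(state: str, event: str) -> str:
    """Fire one transition; undefined events leave the state unchanged,
    mirroring a ladder rung that simply stays false."""
    return TRANSITIONS.get((state, event), state)

state = "retracted"
for ev in ["start", "limit_hit", "reset", "home_hit"]:
    state = step(state, ev)
print(state)  # → retracted  (back home after a full cycle)
```

Because the behavior lives in a plain table, properties such as reachability of every state or absence of dead ends can be checked mechanically before any controller code is generated.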
Design and operational knowledge can be reused in real time, therefore giving a significant competitive edge in industrial practice.

Higher Degree of Design Automation and Software Quality
Studies have shown that programming methodologies in automation systems have not been able to match the rapid increase in the use of computing resources. For instance, the programming of PLCs still relies on a conventional programming style with ladder logic diagrams. As a result, the delays and resources spent in programming are a major stumbling block for the progress of the manufacturing industry. Testing and debugging may consume over 50% of the manpower allocated for PLC program design. Standards [IEC 60848, 1999; IEC 61131-3, 1993; IEC 61499, 1998; ISO 15745-1, 1999] have been formed to fix and disseminate state-of-the-art design methods, but they normally cannot participate in advancing the knowledge of efficient program and system design.

A systematic approach will increase the level of design automation through the reuse of existing software components, and will provide methods to make large-scale system design manageable. Likewise, it will improve software quality and reliability, and will be relevant to systems with high security standards, especially those with a hazardous impact on the environment, such as airport control and public railroads.

System Complexity
The software industry is regarded as a performance destructor and complexity generator. Steadily shrinking hardware prices undermine the need for software performance in terms of code optimization and efficiency. The result is that massive and less efficient software code on the one hand outpaces the gains in hardware performance on the other. Secondly, software proliferates into complexity of unmanageable dimensions; software redesign and maintenance, essential in modern automation systems, becomes nearly impossible.
In particular, PLC programs have evolved from a couple of lines of code 25 years ago to thousands of lines of code with a similar number of I/O points. Increased safety requirements, for instance new policies on fire protection, and the flexibility of modern automation systems add complexity to the program design process. Consequently, the life-cycle cost of software is a permanently growing fraction of the total cost; 80-90% of these costs go into software maintenance, debugging, adaptation, and expansion to meet changing needs [Simmons et al., 1998].

Design Theory Development
Today, the primary focus of most design research is on mechanical or electrical products. One of the by-products of this research is to enhance our fundamental understanding of design theory and methodology by extending it to the field of engineering systems design. A system design theory for large-scale and complex systems is not yet fully developed. In particular, the question of how to simplify a complicated or complex design task has not been tackled in a scientific way. Furthermore, building a bridge between design theory and the latest epistemological outcomes of formal representations in computer science and operations research, such as discrete event system modeling, can advance future development in engineering design.

Application in Logical Hardware Design
From a logical perspective, PLC software design is similar to the hardware design of integrated circuits. Modern VLSI designs are extremely complex, with several million parts and a product development time of 3 years [Whitney, 1996]. The design process is normally separated into a component design and a system design stage. At the component design stage, single functions are designed and verified. At the system design stage, components are aggregated and the whole system's behavior and functionality are tested through simulation. In general, a complete verification is impossible.
Hence, a systematic approach as exemplified for PLC program design may also benefit logical hardware design.

1.3 Structure of the Thesis

Figure 1.1 illustrates the outline of the thesis. Chapter 2 clarifies the major challenges and research issues, and discusses the relevant background and terminology. It will be argued that a systematic design of PLC software can contribute to higher flexibility and reconfigurability of manufacturing systems. The important issue of how to deal with complexity in engineering design, with respect to designing and operating a system, will be debated. The research approach applied in this thesis is introduced, starting from a discussion of design theory and methodology and what can be learned from that field.

Chapter 3 covers the state of the art of control technology and the current practice in designing and programming PLC software. The influences of electrical and software engineering are revealed, and potentially applicable methods from computer science are discussed. Pros and cons are evaluated, leading to the conclusion that a new methodology is required that can cope with the increasing complexity of PLC software design.

Chapter 4 represents the main body of the thesis and captures the essential features of the design methodology. Though design theory is regarded as being in a pre-scientific stage, it has advanced in mechanical, software, and systems engineering with respect to a number of proposed design models and their evaluation through real-world examples. Based on the literature review in Chapters 2 and 3, potentially applicable design concepts and approaches are selected and applied to the context of PLC software design. Axiomatic design is chosen as the underlying design concept, since it provides guidance for the designer without restriction to a particular design context. To adapt the design concept to PLC software design, a formal notation based on the statechart formalism is introduced.
Furthermore, a design process is developed that arranges the required activities in sequential order and shows the related design outcomes.

In Chapter 5, a number of case studies are given to demonstrate the applicability of the developed design methodology. The examples are derived from a complex reference system, a flexible assembly system. The insights achieved are evaluated in a concluding paragraph.

Chapter 6 presents the developed computerized design tool for PLC software design on a conceptual level. The software is written in Visual Basic, using ActiveX controls to provide modularity and reuse in a web-based collaborative programming environment. The main components of the PLC software tool are modeling editors for the structural (modular) and behavioral design, a layout specification interface, and a simulation engine that can validate the developed model.

Chapter 7 concludes this thesis. It addresses the achievements with respect to the research objectives and questions. A critical evaluation is given, along with an outlook on future research issues.

The Story of Electricity

When we plug in and turn a knob, electricity and magnetism are at work in almost everything. Today we know what they are, but it took people hundreds of years of experiment and study to reach the point where light comes on at the press of a button. People have known about electricity for a long time. The ancient Greeks noticed that a piece of amber, when rubbed, would attract a feather. You have experienced something similar: when you comb your hair it stands on end, and socks taken from the dryer cling together. This is called static electricity. But in earlier times people did not know how to explain such phenomena or how to put them to use. Experiments that used friction to drive machines could generate large amounts of static electricity. In 1660 the German Otto von Guericke built the first electrostatic generator from a sulphur ball and some cloth. The ball represented the Earth; he was convinced that when this small replica of the Earth was rubbed, an electric spirit would flow out of it. He succeeded, and scientists could now produce shocks and sparks for study whenever they wished. As scientists continued to investigate electricity, they came to think of it as flowing in an invisible way, and tried to capture and store it. The first to do so was Pieter van Musschenbroek of Leyden in Holland. In 1746 he wrapped a water-filled jar with metal foil and found that this simple device could store the energy produced by an electrostatic generator. The device became famous as the Leyden jar. The Leyden jar played an important role in other people's experiments, such as Benjamin Franklin's famous kite experiment. Many people thought lightning and static electricity were the same thing, since both produced bright sparks. In 1752 Franklin tied a key to a kite and flew it as a storm approached (remember that Franklin did not fly it in an actual thunderstorm; never do so). When a thundercloud passed, the key was struck by lightning, and the lightning charged a Leyden jar, proving that lightning too is a form of electricity. Like other experimenters and scientists, Franklin used one discovery to make another. He was not the only scientist to have flashes of insight in electrical experiments. In the 1780s the Italian scientist Luigi Galvani used electric current to make the severed leg of a frog move. Galvani called this animal electricity. He believed that electricity was produced when moist animal tissue came into contact with metal probes. He even ventured that the mind itself was a kind of electrical energy.

Graduation Project Technical Literature Translation


Qingdao University Graduation Thesis (Design) Technical Literature Translation
School: School of Automation Engineering, Department of Control Engineering
Major: Automation
Class: Class 6, Automation, 2008
Name: Wang Xiao
Supervisor: Li Mingzhi
May 10, 2012

"Building Embedded Linux Systems" by Karim Yaghmour

Introduction

Since its first public release in 1991, Linux has been put to ever wider uses. Initially confined to a loosely tied group of developers and enthusiasts on the Internet, it eventually matured into a solid Unix-like operating system for workstations, servers, and clusters. Its growth and popularity accelerated the work started by the Free Software Foundation (FSF) and fueled what would later be known as the open source movement. All the while, it attracted media and business interest, which contributed to establishing Linux's presence as a legitimate and viable choice for an operating system.

Yet, oddly enough, it is through an often ignored segment of computerized devices that Linux is poised to become the preferred operating system. That segment is embedded systems, and the bulk of the computer systems found in our modern day lives belong to it. Embedded systems are everywhere in our lives, from mobile phones to medical equipment, including air navigation systems, automated bank tellers, MP3 players, printers, cars, and a slew of other devices about which we are often unaware. Every time you look around and can identify a device as containing a microprocessor, you've most likely found another embedded system.

If you are reading this book, you probably have a basic idea why one would want to run an embedded system using Linux. Whether because of its flexibility, its robustness, its price tag, the community developing it, or the large number of vendors supporting it, there are many reasons for choosing to build an embedded system with Linux and many ways to carry out the task.
This chapter provides the background for the material presented in the rest of the book by discussing definitions, real-life issues, generic embedded Linux systems architecture, examples, and methodology.

1.1 Definitions

The words "Linux," "embedded Linux," and "real-time Linux" are often used with little reference to what is being designated. Sometimes, the designations may mean something very precise. Other times, a broad range or category of applications is meant. Let us look at these terms and what they mean in different situations.

1.1.1 What Is Linux?

Linux is interchangeably used in reference to the Linux kernel, a Linux system, or a Linux distribution. The broadness of the term plays in favor of the adoption of Linux, in the large sense, when presented to a nontechnical crowd, but can be bothersome when providing technical explanations. If, for instance, I say: "Linux provides TCP/IP networking." Do I mean the TCP/IP stack in the kernel or the TCP/IP utilities provided in a Linux distribution that are also part of an installed Linux system, or both? This vagueness actually became ammunition for the proponents of the "GNU/Linux" moniker, who pointed out that Linux was the kernel, but that the system was mainly built on GNU software.

Strictly speaking, Linux refers to the kernel maintained by Linus Torvalds and distributed under the same name through the main repository and various mirror sites. This codebase includes only the kernel and no utilities whatsoever. The kernel provides the core system facilities. It may not be the first software to run on the system, as a bootloader may have preceded it, but once it is running, it is never swapped out or removed from control until the system is shut down. In effect, it controls all hardware and provides higher-level abstractions such as processes, sockets, and files to the different software running on the system.

As the kernel is constantly updated, a numbering scheme is used to identify a certain release.
This numbering scheme uses three numbers separated by dots to identify the releases. The first two numbers designate the version, and the third designates the release. Linux 2.4.20, for instance, is version number 2.4, release number 20. Odd version numbers, such as 2.5, designate development kernels, while even version numbers, such as 2.4, designate stable kernels. Usually, you should use a kernel from the latest stable series for your embedded system.

This is the simple explanation. The truth is that far from the "official" releases, there are many modified Linux kernels that you may find all over the Internet that carry additional version information. 2.4.18-rmk3-hh24, for instance, is a modified kernel distributed by the Familiar project. It is based on 2.4.18, but contains an extra "-rmk3-hh24" version number controlled by the Familiar development team. These extra version numbers, and the kernel itself, will be discussed in more detail in Chapter 5.

Linux can also be used to designate a hardware system running the Linux kernel and various utilities running on the kernel. If a friend mentions that his development team is using Linux in their latest product, he probably means more than the kernel. A Linux system certainly includes the kernel, but most likely includes a number of other software components that are usually run with the Linux kernel. Often, these will be composed of a subset of the GNU software such as the C library and binary utilities. It may also include the X window system or a real-time addition such as RTAI.

A Linux system may be custom built, as you'll see later, or can be based on an already available distribution. Your friend's development team probably custom built their own system. Conversely, when a user says she runs Linux on the desktop, she most likely means that she installed one of the various distributions, such as Red Hat or Debian.
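The version-numbering convention described above (three dot-separated numbers, even second number for stable series, extra vendor suffixes such as "-rmk3-hh24") can be sketched in a few lines. This is a hypothetical helper written only to illustrate the convention, not part of any kernel tooling:

```python
def classify_kernel(version):
    """Classify a 2.x-era kernel version string per the convention above:
    even second number = stable series, odd = development series.
    Vendor/project suffixes such as "-rmk3-hh24" are simply ignored."""
    base = version.split("-")[0]              # drop any extra version suffix
    major, minor, release = base.split(".")[:3]
    series = major + "." + minor
    kind = "stable" if int(minor) % 2 == 0 else "development"
    return series, int(release), kind

print(classify_kernel("2.4.20"))            # ('2.4', 20, 'stable')
print(classify_kernel("2.5.3"))             # ('2.5', 3, 'development')
print(classify_kernel("2.4.18-rmk3-hh24"))  # ('2.4', 18, 'stable')
```

As the text notes, the suffix itself (here "-rmk3-hh24") carries meaning to the distributing project; the sketch only strips it.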
The user's Linux system is as much a Linux system as your friend's, but apart from the kernel, their systems most likely have very different purposes, are built from very different software packages, and run very different applications.

Finally, Linux may also designate a Linux distribution. Red Hat, Mandrake, SuSE, Debian, Slackware, Caldera, MontaVista, Embedix, BlueCat, PeeWeeLinux, and others are all Linux distributions. They may vary in purpose, size, and price, but they share a common purpose: to provide the user with a shrink-wrapped set of files and an installation procedure to get the kernel and various overlaid software installed on a certain type of hardware for a certain purpose. Most of us are familiar with Linux distributions through CD-ROMs, but there are distributions that are no more than a set of files you retrieve from a web site, untar, and install according to the documentation. The difference between mainstream, user-oriented distributions and these distributions is the automated installation procedure in the mainstream ones.

Starting with the next chapter and in the rest of this book, I will avoid referring to the word "Linux" on its own. Instead, I will refer directly to the object of discussion. Rather than talking about the "Linux kernel," I will refer to the "kernel." Rather than talking about the "Linux system," I will refer to the "system." Rather than talking about a "Linux distribution," I will refer to a "distribution." In all these circumstances, "Linux" is implied but avoided to eliminate any possible confusion. I will continue, however, to use the term "Linux," where appropriate, to designate the broad range of software and resources surrounding the kernel.

1.1.2 What Is Embedded Linux?

Again, we could start with the three designations Linux suggests: a kernel, a system, and a distribution.
Yet, we would have to take the kernel off the list right away, as there is no such thing as an embedded version of the kernel distributed by Linus. This doesn't mean the kernel can't be embedded. It only means you do not need a special kernel to create an embedded system. Often, you can use one of the official kernel releases to build your system. Sometimes, you may want to use a modified kernel distributed by a third party, one that has been specifically tailored for a special hardware configuration or for support of a certain type of application. The kernels provided with the various embedded distributions, for example, often include some optimizations not found in the main kernel tree and are patched to support some debugging tools such as kernel debuggers. Mainly, though, a kernel used in an embedded system differs from a kernel used on a workstation or a server by its build configuration. Chapter 5 covers the build process.

An embedded Linux system simply designates an embedded system based on the Linux kernel and does not imply the use of any specific library or user tools with this kernel.

An embedded Linux distribution may include: a development framework for embedded Linux systems, various software applications tailored for usage in an embedded system, or both.

Development framework distributions include various development tools that facilitate the development of embedded systems. This may include special source browsers, cross-compilers, debuggers, project management software, boot image builders, and so on. These distributions are meant to be installed on the development host.

Tailored embedded distributions provide a set of applications to be used within the target embedded system. This might include special libraries, executables, and configuration files to be used on the target.
A method may also be provided to simplify the generation of root filesystems for the target system.

Because this book discusses embedded Linux systems, there is no need to keep repeating "embedded Linux" in every name. Hence, I will refer to the host used for developing the embedded Linux system as the "host system," or "host," for short. The target, which will be the embedded Linux system, will be referred to as the "target system," or "target," for short. Distributions providing development frameworks will be referred to as "development distributions."[1] Distributions providing tailored software packages will be referred to as "target distributions."

[1] It would be tempting to call these "host distributions," but as you'll see later, some developers choose to develop directly on their target, hence the preference for "development distributions."

1.1.3 What Is Real-Time Linux?

Initially, real-time Linux designated the RTLinux project released in 1996 by Michael Barabanov under Victor Yodaiken's supervision. The goal of the project was to provide deterministic response times under a Linux environment.

Nonetheless, today there are many more projects that provide one form or another of real-time responsiveness under Linux. RTAI, Kurt, and Linux/RK all provide real-time performance under Linux. Some projects' enhancements are obtained by inserting a secondary kernel under the Linux kernel. Others enhance the Linux kernel's response times by means of a patch.

The adjective "real-time" is used in conjunction with Linux to describe a number of different things. Mainly, it is used to say that the system or one of its components is supposed to have fixed response times, but if you use a strict definition of "real-time," you may find that what is being offered isn't necessarily "real-time."
I will discuss "real-time" issues and further define the meaning of this adjective in Section 1.2.1.2.

1.2 Real Life and Embedded Linux Systems

What types of embedded systems are built with Linux? Why do people choose Linux? What issues are specific to the use of Linux in embedded systems? How many people actually use Linux in their embedded systems? How do they use it? All these questions and many more come to mind when pondering the use of Linux in an embedded system. Finding satisfactory answers to the fundamental questions is an important part of building the system. This isn't just a general statement. These answers will help you convince management, assist you in marketing your product, and most of all, enable you to evaluate whether your initial expectations have been met.

1.2.1 Types of Embedded Linux Systems

We could use the traditional segments of embedded systems such as aerospace, automotive systems, consumer electronics, telecom, and so on to outline the types of embedded Linux systems, but this would provide no additional information in regard to the systems being designated, because embedded Linux systems may be structured alike regardless of the market segment. Rather, let's classify embedded systems by criteria that will provide actual information about the structure of the system: size, time constraints, networkability, and degree of user interaction.

1.2.1.1 Size

The size of an embedded Linux system is determined by a number of different factors. First, there is physical size. Some systems can be fairly large, like the ones built out of clusters, while others are fairly small, like the Linux watch built by IBM. Most importantly, there are the size attributes of the various electronic components of the system, such as the speed of the CPU, the size of the RAM, and the size of the permanent storage.

In terms of size, I will use three broad categories of systems: small, medium, and large.
Small systems are characterized by a low-powered CPU with a minimum of 2 MB of ROM and 4 MB of RAM. This isn't to say Linux won't run in smaller memory spaces, but it will take you some effort to do so. If you plan to run Linux in a smaller space than this, think about starting your work from one of the various distributions that put Linux on a single floppy. If you come from an embedded systems background, you may find that you could do much more using something other than Linux in such a small system. Remember to factor in the speed at which you could deploy Linux, though.

Medium-sized systems are characterized by a medium-powered CPU with around 32 MB of ROM and 64 MB of RAM. Most consumer-oriented devices built with Linux belong to this category. This includes various PDAs, MP3 players, entertainment systems, and network appliances. Some of these devices may include secondary storage in the form of solid-state drives, CompactFlash, or even conventional hard drives. These types of devices have sufficient horsepower and storage to handle a variety of small tasks or can serve a single purpose that requires a lot of resources.

Large systems are characterized by a powerful CPU or collection of CPUs combined with large amounts of RAM and permanent storage. Usually, these systems are used in environments that require large amounts of calculations to carry out certain tasks. Large telecom switches and flight simulators are prime examples of such systems. Typically, such systems are not bound by costs or resources. Their design requirements are primarily based on functionality, while cost, size, and complexity remain secondary issues.

In case you were wondering, Linux doesn't run on any processor below 32 bits. This rules out quite a number of processors traditionally used in embedded systems. Actually, according to traditional embedded system standards, all systems running Linux would be classified as large systems.
This is very true when compared to an 8051 with 4K of memory. Keep in mind, though, current trends: processors are getting faster, RAM is getting cheaper and larger, systems are as integrated as ever, and prices are going down. With growing processing demands and increasing system requirements, the types of systems Linux runs on are quickly becoming the standard. In some cases, however, it remains that an 8-bit microcontroller might be the best choice.

16-Bit Linux?

Strictly speaking, the above statement regarding Linux's inability to run on any processor below 32 bits is not entirely true. There have been Linux ports to a number of odd processors. The Embeddable Linux Kernel Subset (ELKS) project found at /, for example, aims at running Linux on 16-bit processors such as the Intel 8086 and 286. Nevertheless, it remains that the vast majority of development done on the kernel and on user-space applications is 32-bit-centric. Hence, if you choose to use Linux on a processor lower than 32 bits, you will be on your own.

1.2.1.2 Time constraints

There are two types of time constraints for embedded systems: stringent and mild. Stringent time constraints require that the system react in a predefined time frame. Otherwise, catastrophic events happen. Take for instance a factory where workers have to handle materials being cut by large equipment. As a safety precaution, optical detectors are placed around the blades to detect the presence of the specially colored gloves used by the workers. When the system is alerted that a worker's hand is in danger, it must stop the blades immediately. It can't wait for some file to get swapped or for some task to relinquish the CPU. This system has stringent time requirements; it is a hard real-time system.

Streaming audio systems would also qualify as having stringent requirements, because any transient lagging is usually perceived as bothersome by the users.
Yet, this latter example would mostly qualify as a soft real-time system, because the failure of the application to perform in a timely fashion all the time isn't catastrophic, as it would be for a hard real-time system. In other words, although infrequent failures will be tolerated, the system should be designed to have stringent time requirements.

Mild time constraints vary a lot in requirements, but they generally apply to systems where timely responsiveness isn't necessarily critical. If an automated teller takes 10 more seconds to complete a transaction, it's generally not problematic. The same is true for a PDA that takes a certain number of seconds to start an application. The extra time may make the system seem slow, but it won't affect the end result.

1.2.1.3 Networkability

Networkability defines whether a system can be connected to a network. Nowadays, we can expect everything to be accessible through the network, even the refrigerator. This, in turn, places special requirements on the systems being built. One factor pushing people to choose Linux as an embedded OS is its proven networking capabilities. Falling prices and standardization of networking components are accelerating this trend. Most Linux devices have one form or another of network capability. You can attach a wireless network card in the Linux distribution built for the Compaq iPAQ, for instance, simply by inserting the adapter in the PCMCIA jacket. Networking issues will be discussed in detail in Chapter 10.

1.2.1.4 User interaction

The degree of user interaction varies greatly from one system to another. Some systems, such as PDAs, are centered around user interaction, while others, such as industrial process control systems, might only have LEDs and buttons for interaction. Some other systems have no user interface whatsoever.
For example, some components of an autopilot system in a plane might take care of wing control but have no direct interaction with the human pilots.

"Building Embedded Linux Systems" explains in detail a number of different target architectures and hardware configurations, including a thorough analysis of Linux support for embedded hardware. All explanations are based on open source and free software packages. The book demonstrates how to build the operating system components from source, as well as how to find further documentation for help. It greatly simplifies the task of gaining complete control over an embedded operating system, whether that control is sought for technical or economic reasons.

The evolution of embedded system design is driven in general by the pull of application requirements and the push of IT technology. With the continuous innovation and development of microelectronics, large-scale integrated circuit technology has improved steadily. The combination of silicon and human ingenuity produces low-cost, highly reliable, high-precision microelectronic modules in large quantities, and promotes the development of new technology areas and industries.

An embedded system is an application-centered, computer-based, dedicated computer system whose hardware and software can be tailored to meet strict application requirements on functionality, reliability, cost, size, and power consumption. Over a development history of more than twenty years, internationally well-known embedded operating systems have appeared, such as VxWorks, Palm OS, and Windows CE. Linux, as a free OS, has risen meteorically in the embedded field in recent years and is among the embedded operating systems with the greatest potential.

1.2 Linux Multi-Threading Technology

Thread technology was proposed as early as the 1960s, but threads were not really applied to operating systems until the mid-1980s; Solaris was the leader in this regard.
Traditional Unix also supports the concept of a thread, but allows only one thread per process, so multi-threading there means multiple processes. Now, multi-threading is supported by many operating systems, including Windows NT and, of course, Linux.

Why introduce threads when the concept of a process already exists? What are the benefits of using multiple threads? Which systems should use them? We must first answer these questions.

One reason for using threads instead of processes is that multitasking with threads is "thrifty." Under Linux, starting a new process requires allocating it a separate address space and building numerous data tables to maintain its code segment, stack, and data segments, which is an "expensive" way of multitasking. The threads running within one process share the same address space and most of the data, so the space taken to spawn a thread is far less than that taken to start a process, and the time required to switch between threads is far less than the time required to switch between processes.

Another reason for using threads is the convenient communication mechanism between them. Different processes have separate data spaces, so data can only be passed between them through inter-process communication, which is both time-consuming and inconvenient. Threads within the same process share one data space, so data produced by one thread can be used directly by the other threads, which is both fast and convenient.
Of course, data sharing also brings other problems. Some variables must not be modified by two threads at once, and static data declared in some subroutines is even more likely to deal a disastrous blow to a multi-threaded program; these issues deserve the most attention when writing multi-threaded code.

Besides the advantages above, multi-threaded programs, like multi-process ones, run multiple tasks concurrently, and thus also offer the following benefits:

1) Improved application responsiveness. This is especially relevant for graphical interface programs: when an operation takes very long, the whole system waits for it, and the program does not respond to keyboard, mouse, or menu operations. With multi-threading, long (time-consuming) operations can be placed in a separate thread, avoiding this embarrassing situation.

2) More effective use of multi-CPU systems. The operating system ensures that, when the number of threads is not greater than the number of CPUs, different threads run on different CPUs.

3) Improved program structure. A long and complex process can be divided into multiple threads, split into several independent or semi-independent parts; such a program is easier to understand and modify.

Data Processing in Threads

One of the biggest advantages of threads over processes is data sharing: whereas separate processes each get a copy of the parent's data segment, the threads of one process can easily access and modify common data. But this also brings many problems to multi-threaded programming. We must be careful when several threads access the same variable. Many functions are not reentrant, that is, multiple copies of such a function cannot run at the same time (unless different data segments are used). Static variables declared inside a function often cause problems, and return values can be problematic too.
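The shared-address-space point above can be seen in a few lines. The section discusses C threads (pthreads); this is only a Python sketch using the standard threading module to illustrate the same idea: all threads of one process mutate a single object directly, with no copying or inter-process communication, while a lock serializes the concurrent writes:

```python
import threading

# Threads in one process share the address space: a list mutated by
# worker threads is immediately visible to the main thread, no IPC needed.
shared = []
lock = threading.Lock()

def worker(n):
    for i in range(n):
        with lock:              # serialize access to the shared list
            shared.append(i)

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 400: all four threads wrote into the same object
```

Without the lock, `list.append` happens to be safe in CPython, but the general point stands: shared data needs the synchronization primitives introduced below.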
If a function returns the address of statically allocated space inside it, then after one thread calls the function and obtains the address, another thread may call the same function and modify the data that address points to. Variables shared between threads should be defined with the keyword volatile, to prevent the compiler from changing the way they are used when optimizing (e.g. with gcc's -OX options). To protect a variable, we must use semaphores, mutexes, and similar mechanisms to ensure it is used correctly. Below, we introduce the relevant facilities for thread data step by step.

1. Thread-specific data

In a single-threaded program there are two basic kinds of data: global variables and local variables. In a multi-threaded program there is a third kind: thread-specific data (TSD: Thread-Specific Data). It behaves much like a global variable: inside a thread, every function can access it, but it is invisible outside that thread. The need for such data is obvious. Take the common variable errno, which carries the standard error code. It clearly cannot be a local variable, since almost every function should be able to access it; but it cannot be a global variable either, or thread A would be liable to output the error message of thread B. To implement such a variable, we create a key for the per-thread data. The key is associated with each thread, and the thread data is referenced through this key; but in different threads the key refers to different data, while within one thread it always refers to the same data.

2. Mutexes

A mutex (mutual exclusion lock) is used to ensure that only one thread executes a section of code at any one time.
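The per-thread "key" mechanism described above corresponds to pthread_key_create() and friends in C; Python exposes the same idea as threading.local. A minimal sketch, using only the standard library, showing that each thread's copy of the data is invisible to the other:

```python
import threading

# threading.local() plays the role of the per-thread "key" described
# above: each thread sees its own independent copy of .value.
tsd = threading.local()
results = {}

def worker(name):
    tsd.value = name            # private to this thread
    # any function called from this thread would see the same tsd.value
    results[name] = tsd.value

a = threading.Thread(target=worker, args=("A",))
b = threading.Thread(target=worker, args=("B",))
a.start(); b.start(); a.join(); b.join()

print(sorted(results.items()))  # [('A', 'A'), ('B', 'B')]: no clobbering
```

This is exactly the errno situation from the text: one name, but a separate value per thread.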
The necessity is obvious: if every thread wrote data to a file in its own order, the end result would be disastrous.

3. Condition variables

The previous section showed how to use a mutex for data sharing and communication between threads. An obvious drawback of a mutex is that it has only two states: locked and unlocked. Condition variables make up for this by allowing a thread to block and wait for another thread to send a signal; they are often used together with mutexes. In use, a condition variable blocks a thread: while the condition is not satisfied, the thread typically unlocks the mutex and waits for the condition to change. Once some thread has changed the condition, it notifies the associated condition variable to wake one or more of the threads blocked on it. These threads relock the mutex and retest whether the condition is satisfied. In general, condition variables are used for synchronization between threads.

4. Semaphores

A semaphore is essentially a non-negative integer counter used to control access to a public resource. When the resource increases, the function sem_post() is called to increment the semaphore. Only when the semaphore value is greater than 0 can the public resource be used; after use, the function sem_wait() decrements the semaphore. The function sem_trywait() has the same effect as pthread_mutex_trylock(): it is the non-blocking version of sem_wait(). The semaphore-related functions are declared in the header file /usr/include/semaphore.h.
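The sem_wait()/sem_post()/sem_trywait() trio described above maps directly onto Python's threading.Semaphore (acquire(), release(), and acquire(blocking=False) respectively); a sketch of the counting behavior, not of the C API itself:

```python
import threading

# A counting semaphore guarding two "resource slots":
#   acquire()               ~ sem_wait()    (blocks, decrements)
#   release()               ~ sem_post()    (increments)
#   acquire(blocking=False) ~ sem_trywait() (fails instead of blocking)
sem = threading.Semaphore(2)

assert sem.acquire()                          # slot 1 taken
assert sem.acquire()                          # slot 2 taken, counter now 0
assert sem.acquire(blocking=False) is False   # "try" fails: would block
sem.release()                                 # one slot returned
assert sem.acquire(blocking=False) is True    # now the try succeeds
```

As with sem_wait(), a plain acquire() on an exhausted semaphore would simply block until some other thread releases.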

Chinese-English Translation for Graduation Projects


Introduction: This article analyzes English translation work in graduation projects.

English translation has become a very important part of graduation projects, especially for students who need to communicate with international partners or consult international literature.

This article therefore elaborates on five main topics: preparation, translation requirements, techniques and points of attention, common problems and their solutions, and the evaluation and summary of the translation.

Main content:

1. Preparation
- Gain a thorough understanding of the background and content of the text to be translated
- Search for and collect reference materials in the relevant field
- Define the goals and requirements of the translation
- Decide on the software and tools to be used
- Draw up a translation plan and schedule

2. Translation requirements
- Precision and accuracy
- Professional and technical correctness
- Language style and cultural differences
- Timeliness and speed
- File format and layout

3. Techniques and points of attention
- Analyze sentence structure and grammar
- Mind the polysemy of words and their contextual meaning
- Use dictionaries and online translation tools flexibly
- Keep the wording fluent and natural
- Proofread and review the translated result

4. Common problems and solutions
- Handling unfamiliar words and technical terms
- Correcting grammatical and expression errors
- Converting cultural differences and modes of statement
- Improving contextual understanding and logical structure
- Restructuring and simplifying long, complex sentences

5. Evaluation and summary
- Assess the quality and accuracy of the translation
- Summarize the experience and lessons learned during translation
- Reflect on and improve translation methods and techniques
- Learn from other people's translations and experience
- Keep improving and refining translation skills

Summary: English translation in a graduation project is a challenging task, but through thorough preparation, clearly stated requirements, appropriate techniques and care, experience in solving common problems, and reflective evaluation and summary, we can improve the quality and accuracy of the translation.

同时,不断学习和提升翻译技能也是非常重要的。

通过本文提出的方法和建议,希望能够帮助读者在毕业设计中的英文翻译中取得好的成果。


A Wavelet-Based Medical Image Enhancement Algorithm

Low contrast and poor image quality are the main problems with medical images. A novel image enhancement method based on the wavelet transform and the Haar transform is proposed. First, the medical image is decomposed by the wavelet transform. Second, all high-frequency sub-images are decomposed by the Haar transform. Third, noise is reduced by soft thresholding. Fourth, the high-frequency coefficients are enhanced with different weights in different sub-images. The enhanced image is then obtained by the inverse wavelet transform and inverse Haar transform. Finally, the image histogram is stretched by nonlinear histogram equalization. Experiments show that this method not only enhances image detail but also effectively preserves edge features.

Introduction: With advanced medical devices in clinical use, medical image enhancement has attracted a great deal of attention. Enhanced medical images help surgeons diagnose and interpret disease, because the quality of medical images is usually degraded by noise, the data acquisition equipment, illumination conditions, and so on. The goal of medical image enhancement is mainly to solve the problems of low contrast and high noise in medical images.

Medical image enhancement has attracted much research, concentrated on gray-level transforms and frequency-domain transforms. Research on frequency-domain transforms focuses mainly on the wavelet transform; histogram equalization is a fairly typical spatial-domain image enhancement method. The wavelet transform is a time-frequency analysis tool developed in the 1980s, and it has been applied successfully in image processing since Mallat [1] proposed the fast decomposition algorithm. Many wavelet-based image enhancement methods have been proposed, e.g., Lu et al. [2], Yang and Hansell [3], Fang and Qi [4], Zhou et al. [5], and Wu and Shi [6].

However, multi-scale wavelet transforms alone cannot extract all the high-frequency information. The wavelet transform yields detail information at different scales, but some high-frequency information remains hidden in the high-frequency sub-images. If we decompose these high-frequency sub-images further, we can obtain more high-frequency information, which helps enhance medical images more effectively. Moreover, if we enhance the image in both the spatial and the transform domains, we can obtain a better enhanced image. In addition, because the high-frequency sub-images contain a great deal of noise, that noise should be eliminated or reduced.

This letter presents a novel medical image enhancement method based on the wavelet transform, the Haar transform, and nonlinear histogram equalization.

Method: The idea is as follows. First, the medical image is decomposed by the wavelet transform; then the high-frequency sub-images are decomposed by the Haar transform. Nonlinear soft-threshold filtering is used to remove noise, different weight coefficients in different sub-images are used to enhance the image, and nonlinear histogram equalization is used to stretch the intensity range of the decomposed medical image. The details are as follows.

An image can be regarded as a two-dimensional signal, so its wavelet transform can be computed with Mallat's algorithm [1]. In the wavelet domain, the edge features and detail information of the image are distributed over the high-frequency sub-images. After decomposing the image with a k-scale wavelet transform, we obtain 3k+1 sub-images: {LLk; HLj; LHj; HHj}, where j = 1, 2, ..., k and k is the wavelet decomposition level of the image. LLk is the low-frequency sub-image at level k, and HLj, LHj, HHj are the high-frequency sub-images at level j.

But there is still more detailed information in these sub-images. To obtain more detail, the high-frequency sub-images are decomposed with the Haar transform. This method is simpler than the wavelet-packet transform and general multi-scale wavelet transforms; the Haar transform is the simplest directionally symmetric orthogonal transform, and it is used only to decompose the high-frequency sub-images. It helps us obtain more detail in the sub-images at every level except the low-frequency sub-image. Applying it to each wavelet high-frequency sub-image yields four new high-frequency sub-images: {HLj00; HLj01; HLj10; HLj11}, {LHj00; LHj01; LHj10; LHj11}, {HHj00; HHj01; HHj10; HHj11}, where j = 1, 2, ..., k and j00, j01, j10, j11 denote the positions of the four sub-images produced by the Haar transform. Fig. 1 shows the Haar transform of the high-frequency sub-images.

Fig. 1 Wavelet decomposition of the image and Haar transform of the high-frequency sub-images

The high-frequency sub-images are rich in image detail.
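The single-level 2D Haar split applied to each high-frequency sub-image can be sketched in plain Python. This is a minimal illustration under the assumption of an even-sized grayscale array and the standard averaging normalization; the letter's own filter conventions may differ.

```python
def haar2d(img):
    """One-level 2D Haar transform of a 2h x 2w image (list of lists).

    Each 2x2 block (a b / c d) maps to one pixel in each of four
    quarter-size sub-images: LL (average), HL (horizontal detail),
    LH (vertical detail), HH (diagonal detail).
    """
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a, b = img[2*y][2*x],     img[2*y][2*x + 1]
            c, d = img[2*y + 1][2*x], img[2*y + 1][2*x + 1]
            LL[y][x] = (a + b + c + d) / 4.0   # low-frequency average
            HL[y][x] = (a - b + c - d) / 4.0   # horizontal detail
            LH[y][x] = (a + b - c - d) / 4.0   # vertical detail
            HH[y][x] = (a - b - c + d) / 4.0   # diagonal detail
    return LL, HL, LH, HH

# A constant image has no detail: all high-frequency bands come out zero.
flat = [[8.0] * 4 for _ in range(4)]
LL, HL, LH, HH = haar2d(flat)
print(LL[0][0], HL[0][0], LH[0][0], HH[0][0])  # 8.0 0.0 0.0 0.0
```

Applied once to each of HLj, LHj, HHj, this split produces exactly the four j00, j01, j10, j11 sub-images named in the text.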

But these sub-images also contain a great deal of noise. The smoothing property of the wavelet transform helps reduce image noise, but not enough for our purposes; the Haar transform also removes some noise, but much still remains in the high-frequency sub-images. If we simply amplified the high-frequency coefficients, both the detail information and the noise would be enhanced. We therefore reduce the noise in the high-frequency sub-images with a nonlinear method. Because the noise properties differ among the high-frequency sub-images, a different soft threshold is used in each sub-image.

For the soft threshold, let j denote the scale level, let i (i = 1, 2, 3) denote the HL, LH, HH high-frequency sub-bands respectively, and let l (l = 00, 01, 10, 11) denote the Haar-transform sub-image of band i. Njil is the signal length, xjil are the coefficients, and x̄jil is the mean value of sub-image (j, i, l). In the noise-reduction formula, Tjil is the soft threshold of sub-image (j, i, l); H(x, y) denotes the high-frequency coefficient at position (x, y) in sub-image (j, i, l), and G(x, y) denotes the coefficient at position (x, y) after filtering.
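The soft-threshold rule itself has the standard shrink-toward-zero form. A minimal sketch follows, with the threshold T taken as given, since the letter's per-sub-image formula for Tjil (computed from Njil, xjil, and x̄jil) is not reproduced in this translation.

```python
def soft_threshold(coeffs, T):
    """Soft-threshold a list of high-frequency coefficients.

    Coefficients with |x| <= T are treated as noise and set to 0;
    larger coefficients are shrunk toward zero by T, keeping their sign.
    """
    out = []
    for x in coeffs:
        if x > T:
            out.append(x - T)
        elif x < -T:
            out.append(x + T)
        else:
            out.append(0.0)
    return out

print(soft_threshold([5.0, 0.3, -2.0, -0.1], 0.5))  # [4.5, 0.0, -1.5, 0.0]
```

Unlike hard thresholding, this continuous shrinkage avoids introducing jumps at the threshold boundary, which is why it is preferred for denoising detail coefficients.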

After soft-threshold filtering, the high-frequency sub-images are enhanced by weighting their coefficients. Different high-frequency sub-images carry different image information, so different sub-images are enhanced with different weight values. Let Wjil be the weight coefficient; the high-frequency coefficients are then enhanced by

M(G(x, y), Wjil) = Wjil · G(x, y)

where G(x, y) is the filtered high-frequency coefficient of sub-image (j, i, l) from Eq. (5) and M(G(x, y), Wjil) is the enhanced coefficient.

The enhanced image is generated by the inverse Haar transform and the inverse wavelet transform. However, its gray-level range is narrower than that of a normal image, which makes it look unclear. Nonlinear histogram equalization is therefore used to stretch the gray range: f(x, y) denotes the pixel at position (x, y), T(f(x, y)) the corresponding transformed pixel, and fmax the maximum intensity of the image, with M ∈ (0, 255] and N ∈ (0, fmax]. Eq. (7) is a nonlinear method; by changing the parameters M and N we can obtain the intensity range required in practice.
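Since Eq. (7) itself is not reproduced in this translation, the sketch below uses a hypothetical logarithmic stretch T(f) = M·ln(1 + f/N) as a stand-in consistent with the stated parameter ranges M ∈ (0, 255] and N ∈ (0, fmax]; it shows only how the two parameters shape the mapping, not the letter's actual curve.

```python
import math

def nonlinear_stretch(f, M, N):
    """Hypothetical nonlinear gray-stretch T(f) = M * ln(1 + f / N).

    NOTE: an assumed stand-in for Eq. (7), whose exact form is not
    given in the translation. M scales the output range; N controls
    how strongly low intensities are boosted.
    """
    return M * math.log(1.0 + f / N)

fmax = 200.0
M, N = 255.0 / 3.0, fmax / 4.0   # the parameter choices quoted in the Results
low = nonlinear_stretch(10.0, M, N)
high = nonlinear_stretch(fmax, M, N)
print(low < high)   # the mapping is monotone: brighter input stays brighter
```

Any curve used here must be monotone so that the ordering of gray levels is preserved while the dynamic range is expanded.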

Results: The method described in this letter was used to enhance medical images. A two-level wavelet transform was used. In the nonlinear histogram equalization step we set M = 255/3 and N = fmax/4, and the enhancement weight coefficient was 1.5. Fig. 2 shows the experimental results. Figs. 2a and 2d are the original images, Figs. 2b and 2e are the images enhanced by the proposed algorithm, and Figs. 2c and 2f are the results of histogram equalization. The PSNR of Fig. 2b is 39.64 versus 30.26 for Fig. 2c, and the PSNR of Fig. 2e is 70.53 versus 45.53 for Fig. 2f. Judging from these results, the proposed enhancement method outperforms histogram equalization. In Figs. 2b and 2e, not only are blur and low contrast improved, but the image texture is also clear.

Fig. 2 Medical image enhancement experiments: a and d, original images; b and e, images obtained by the proposed method; c and f, images enhanced by histogram equalization

Conclusion: An important question in wavelet-based medical image enhancement is how to extract high-frequency information. In this letter the Haar transform is used to decompose the wavelet high-frequency sub-images, which helps us extract high-frequency information effectively. Different enhancement weight coefficients in different sub-images and nonlinear histogram equalization are applied in the enhancement process, and they also help enhance medical images effectively. Experimental results show that the algorithm not only enhances image contrast but also effectively preserves the edge features of the original image. In this letter a two-level wavelet transform is used to decompose the image; with soft-threshold filtering, the wavelet decomposition level had better not exceed four, otherwise useful detail information may be lost.

An Improved Multi-Scale Retinex Medical Image Enhancement Algorithm

1. Introduction

Medical image processing and analysis have a great impact on modern image-based disease diagnosis, which depends on image quality. Common problems of medical color images are overall dimness, low contrast, a narrow dynamic range, and an asymmetric intensity distribution, all of which affect the accuracy of the diagnostic process. Enhancement algorithms have therefore become an indispensable part of image analysis and are widely used in medical digital systems.

To solve the above problems, many image enhancement algorithms have been proposed [1-5]. Histogram equalization enhances an input image by reshaping its histogram into a desired form. The advantage of this technique is that it works well on grayscale images; when used on color images, however, it may shift colors, producing artifacts and color imbalance. Homomorphic filtering is a typical image enhancement algorithm based on the imaging model that represents an input image as the product of illumination and reflectance [1]. These methods usually enhance an input image by reducing its dynamic range or increasing its contrast. In recent years, with the development of color-constancy theory, enhancement algorithms based on multi-scale theory and mathematical morphology [2-5] have achieved better results on medical images such as photographic and CT images. However, most of these algorithms cannot meet the demands of enhancing immunohistochemistry images and lack a reasonable analysis of image features. Some schemes have deficiencies that could be remedied by incorporating more useful information to obtain better results.

Focusing on these problems, an improved multi-scale Retinex algorithm is proposed. This paper is organized as follows. In Section 2 we analyze the features of immunohistochemistry images and describe multi-scale Retinex theory. In Section 3 we point out the remaining problems and present some improvements to the multi-scale Retinex algorithm; the improved medical image enhancement algorithm is then described. Experimental results and conclusions are given in Sections 4 and 5, respectively.

2. Image Features and Retinex Theory

2.1 Analysis of immunohistochemistry images

Immunohistochemistry (IHC) is a method of revealing the chemical composition of tissue with specific antibodies. It is widely used in basic medical research and clinical examination and plays an important role in neuroanatomy, pathology, and related fields. The experimental results appear as images on slides, so it is necessary to analyze the image patches.

(1) In IHC images based on color templates, the positive targets of interest are marked with different colors depending on the substrate used. Targets appear brown, sometimes accompanied by some nonspecific staining and background regions. In general, the colors are classified as light yellow (weak antigen), brown (moderate antigen), and dark brown to black (strong antigen).

(2) IHC images are captured by digital microscopes or camera devices. Owing to various sources of noise, such as accessory organs, the limited precision of the imaging equipment, and uneven illumination, the basic features of the whole scene are visible in the captured digital medical image, but the objects in the image are dim, with low contrast and a lack of edge features and detail information.

2.2 Multi-scale Retinex theory

Retinex theory assumes that color perception depends strictly on the neural structure of the human visual system. Land introduced a center/surround spatial form based on the human visual system, and a Retinex model for lightness computation was introduced by Frankle and McCann [6-7]. Building on the work of Land and others, Jobson defined the single-scale Retinex algorithm (SSR) [8] and the multi-scale Retinex algorithm (MSR) [9]. MSR achieves better performance than SSR in dynamic-range compression, color reproduction, and edge enhancement. Based on the imaging model, the basic form of single-scale Retinex (SSR) is

Ri(x, y) = log Ii(x, y) − log(Ii(x, y) ∗ F(x, y))

where Ri(x, y) is the Retinex output and the subscript i denotes one of the R, G, B color channels. Ii(x, y) is the i-th spectral band of the image, ∗ denotes convolution, and F(x, y) is a normalized Gaussian surround function:

F(x, y) = K exp(−(x² + y²)/c²)

where c is the Gaussian surround space constant and K is a normalization factor chosen so that ∫∫ F(x, y) dx dy = 1. MSR is an extension of SSR: it is a weighted sum of SSR outputs obtained with Gaussian functions of different scales,

R_MSR,i = Σ (n = 1 to N) ωn · Rn,i

where N is the number of scales, Rn,i is the n-th scale SSR output for the i-th spectral band, R_MSR,i is the MSR result for the i-th band, and ωn is the weight coefficient of the n-th scale. Before the Retinex output is displayed or printed, Ri(x, y) must be stretched; to preserve the most information, an automatic stretching method is used.
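The SSR and MSR formulas above can be written out directly. This is a hedged sketch, not Jobson's reference implementation: it works on a single channel, uses a tiny clamped-edge convolution, and the scale and weight values are arbitrary examples rather than published settings.

```python
import math

def gaussian_kernel(c, radius):
    """Normalized surround F(x, y) = K * exp(-(x^2 + y^2) / c^2)."""
    k = [[math.exp(-(x * x + y * y) / (c * c))
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]  # division supplies the K factor

def convolve(img, kern):
    """Same-size convolution with edge clamping (a simplification)."""
    h, w, r = len(img), len(img[0]), len(kern) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx] * kern[dy + r][dx + r]
            out[y][x] = acc
    return out

def ssr(img, c, radius=2):
    """Single-scale Retinex: R = log I - log(I convolved with F)."""
    blur = convolve(img, gaussian_kernel(c, radius))
    return [[math.log(img[y][x]) - math.log(blur[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]

def msr(img, scales, weights):
    """Multi-scale Retinex: weighted sum of SSR outputs over the scales."""
    outs = [ssr(img, c) for c in scales]
    h, w = len(img), len(img[0])
    return [[sum(wn * outs[n][y][x] for n, wn in enumerate(weights))
             for x in range(w)] for y in range(h)]

# A uniform image equals its own blur, so R = log I - log I = 0 everywhere.
flat = [[100.0] * 5 for _ in range(5)]
r = msr(flat, scales=[1.0, 3.0], weights=[0.5, 0.5])
print(abs(r[2][2]) < 1e-9)  # True
```

The uniform-image check illustrates the dynamic-range compression property: Retinex responds to local contrast relative to the surround, not to absolute intensity.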
