Microcontroller English Literature and Translations
A Brief Introduction to the Single-Chip Microcomputer
(Source: library electronic resources)

The single-chip microcomputer, also called the single-chip microcontroller, is not a chip that performs one particular logic function; it is a complete computer system integrated onto a single chip. In short, one chip becomes a computer. Its small size, light weight, and low price make it convenient for study, application, and development, and studying and using the single-chip microcomputer is the best way to understand the principles and structure of a computer.

Internally, the single-chip microcomputer uses modules similar to those of a computer, such as a CPU, memory, a parallel bus, and storage elements that play the same role as a hard disk. The difference is that the performance of these parts is much weaker than in a home computer, but the price is also low (generally no more than about ten yuan), so it is well suited to control tasks that are not very complex. Its presence can be seen inside appliances such as fully automatic washing machines, range hoods, and VCD players, where it serves as the core of the control section.

The single-chip microcomputer is an on-line, real-time control computer. "On-line" means control in the field, which requires strong anti-interference capability and low cost; this is also its main difference from off-line computers such as the home PC.

The single-chip microcomputer works by running a program, and that program can be modified. Different programs realize different functions, including special, unique functions that other components would require great effort to achieve, or could not achieve at all. A function of only modest complexity, if implemented purely in hardware with 74-series or CD4000-series logic, would require a large printed-circuit board; implementing it with one of the single-chip microcomputers marketed in the United States in the 1970s gives an entirely different result, because the program written for the chip can provide high intelligence, high efficiency, and high reliability.

The CPU is the key component of a digital computer. Its purpose is to decode instructions received from memory and perform transfers, arithmetic, logic, and control operations with data stored in internal registers, memory, or I/O interface units. Externally, the CPU provides one or more buses for transferring instructions, data, and control information to and from components connected to it. A microcontroller is present in the keyboard and in the monitor in the generic computer; thus these components are also shaded. In such microcontrollers, the CPU may be quite different from those discussed in this chapter. The word lengths may be short, the number of registers small, and the instruction sets limited. Performance, relatively speaking, is poor, but adequate for the task.
Most important, the cost of these microcontrollers is very low, making their use cost effective.

Because the single-chip microcomputer is cost-sensitive, the software that still dominates today is assembly language, the most primitive language above binary machine code. Why use something so primitive when high-level languages have long since reached the level of visual programming? The reason is simple: the single-chip microcomputer does not have a CPU like a home computer's, nor does it have mass storage like a hard disk. A program compiled from a visual high-level language may reach several tens of kilobytes for nothing more than a single button; that is negligible for a PC's hard disk, but unacceptable for a single-chip microcomputer. The microcontroller must use its hardware resources with very high efficiency, which is why assembly language, primitive as it is, is still used on a large scale. By the same reasoning, a home PC could not withstand running the operating system and application software of a supercomputer.

It can be said that the twentieth century spanned three "electric" eras: the electrical age, the electronic age, and now the computer age. The computer referred to here, however, is usually the personal computer, or PC, composed of a host unit, a keyboard, a monitor, and so on. There is another kind of computer with which most people are far less familiar: the single-chip microcomputer (also called the microcontroller) that gives all kinds of machinery its intelligence. As the name suggests, the smallest system of this kind of computer uses only a single integrated circuit to perform simple computation and control. Because of its small size, it is usually hidden inside the "belly" of the controlled machinery. Within the whole installation it plays a role like that of the human brain: if it goes wrong, the entire installation is paralyzed. The application domain of the single-chip microcomputer is already very broad, covering intelligent instruments, real-time industrial control, communication equipment, navigation and guidance systems, household appliances, and so on. Once a product adopts a single-chip microcomputer, it has the effect of upgrading the product to a new generation, and the product name is often crowned with the adjective "intelligent," as in the intelligent washing machine. Today, when technical personnel in some factories or amateur electronics developers build certain products, either the circuit is too complex, or the function is too simple and the product is extremely easy to imitate. The reason, in many cases, is that the product did not use a single-chip microcomputer or another programmable logic device.
Validation and Testing of Design Hardening for Single Event Effects Using the 8051 Microcontroller

Abstract
With the dearth of dedicated radiation hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we will discuss the implications of validating these methods for single event effects (SEE) in the space environment. Topics include the types of tests that are required and the design coverage (i.e., design libraries: do they need validating for each application?). Finally, an 8051 microcontroller core from the NASA Institute of Advanced Microelectronics (IAμE) CMOS Ultra Low Power Radiation Tolerant (CULPRiT) design is evaluated for SEE mitigative techniques against two commercial 8051 devices.

Index Terms: Single Event Effects, Hardened-By-Design, microcontroller, radiation effects.

I. INTRODUCTION

NASA constantly strives to provide the best capture of science while operating in a space radiation environment using a minimum of resources [1,2]. With a relatively limited selection of radiation-hardened microelectronic devices that are often two or more generations of performance behind commercial state-of-the-art technologies, NASA's performance of this task is quite challenging. One method of alleviating this is the use of commercial foundry alternatives with no or minimally invasive design techniques for hardening. This is often called hardened-by-design (HBD).

Building custom-type HBD devices using design libraries and automated design tools may provide NASA the solution it needs to meet stringent science performance specifications in a timely, cost-effective, and reliable manner.

However, one question still exists: traditional radiation-hardened devices have lot and/or wafer radiation qualification tests performed; what types of tests are required for HBD validation?

II. TESTING HBD DEVICES CONSIDERATIONS

Test methodologies in the United States exist to qualify individual devices through standards and organizations such as ASTM, JEDEC, and MIL-STD-883. Typically, TID (Co-60) and SEE (heavy ion and/or proton) tests are required for device validation. So what is unique to HBD devices?

As opposed to a "regular" commercial-off-the-shelf (COTS) device or application specific integrated circuit (ASIC) where no hardening has been performed, one needs to determine how validated the design library is, as opposed to determining the device hardness. That is, by using test chips, can we "qualify" a future device using the same library?

Consider if Vendor A has designed a new HBD library portable to foundries B and C. A test chip is designed, tested, and deemed acceptable. Nine months later a NASA flight project enters the mix by designing a new device using Vendor A's library. Does this device require complete radiation qualification testing? To answer this, other questions must be asked.

How complete was the test chip? Was there sufficient statistical coverage of all library elements to validate each cell? If the new NASA design uses a partially or insufficiently characterized portion of the design library, full testing might be required. Of course, if part of the HBD was relying on inherent radiation hardness of a process, some of the tests (like SEL in the earlier example) may be waived.

Other considerations include speed of operation and operating voltage. For example, if the test chip was tested statically for SEE at a power supply voltage of 3.3V, is the data applicable to a 100 MHz operating frequency at 2.5V?
Dynamic considerations (i.e., nonstatic operation) include the propagated effects of Single Event Transients (SETs). These can be a greater concern at higher frequencies.

The point of these considerations is that the design library must be known, the coverage used during testing must be known, the test application must be thoroughly understood, and the characteristics of the foundry must be known. If all of these are applicable or have been validated by the test chip, then no further testing may be necessary. A task within NASA's Electronic Parts and Packaging (NEPP) Program was performed to explore these types of considerations.

III. HBD TECHNOLOGY EVALUATION USING THE 8051 MICROCONTROLLER

With their increasing capabilities and lower power consumption, microcontrollers are increasingly being used in NASA and DOD system designs. There are existing NASA and DoD programs that are doing technology development to provide HBD. Microcontrollers are one such vehicle that is being investigated to quantify the radiation hardness improvement. Examples of these programs are the 8051 microcontroller being developed by Mission Research Corporation (MRC) and the IAμE device (the focus of this study). As these HBD technologies become available, validation of the technology, in the natural space radiation environment, for NASA's use in spaceflight systems is required.

The 8051 microcontroller is an industry standard architecture that has broad acceptance, wide-ranging applications, and development tools available. There are numerous commercial vendors that supply this controller or have it integrated into some type of system-on-a-chip structure. Both MRC and IAμE chose this device to demonstrate two distinctly different technologies for hardening. The MRC approach is to use temporal latches that require specific timing to ensure that single event effects are minimized. The IAμE technology uses ultra low power, together with layout and architecture HBD design rules, to achieve its results. Both are fundamentally different from the approach taken by Aeroflex-United Technologies Microelectronics Center (UTMC), the commercial vendor of a radiation-hardened 8051, which built its 8051 microcontroller using radiation-hardened processes. This broad range of technology within one device structure makes the 8051 an ideal vehicle for performing this technology evaluation.

The objective of this work is the technology evaluation of the CULPRiT process [3] from IAμE. The process has been baselined against two other processes, the standard 8051 commercial device from Intel and a version using state-of-the-art processing from Dallas Semiconductor. By performing this side-by-side comparison, the cost benefit, performance, and reliability trade study can be done.

In the performance of the technology evaluation, this task developed hardware and software for testing microcontrollers. A thorough process was followed to optimize the test process and obtain as complete an evaluation as possible. This included taking advantage of the available hardware and writing software that exercised the microcontroller such that all substructures of the processor were evaluated. This process is also leading to a more complete understanding of how to test complex structures, such as microcontrollers, and how to test these structures more efficiently in the future.

IV. TEST DEVICES

Three devices were used in this test evaluation. The first is the NASA CULPRiT device, which is the primary device to be evaluated.
The other two devices are two versions of a commercial 8051, manufactured by Intel and Dallas Semiconductor, respectively.

The Intel devices are the ROMless, CMOS version of the classic 8052 MCS-51 microcontroller. They are rated for operation at +5V, over a temperature range of 0 to 70 °C, and at clock speeds of 3.5 MHz to 24 MHz. They are manufactured in Intel's P629.0 CHMOS III-E process.

The Dallas Semiconductor devices are similar in that they are ROMless 8052 microcontrollers, but they are enhanced in various ways. They are rated for operation from 4.25 to 5.5 Volts over 0 to 70 °C at clock speeds up to 25 MHz. They have a second full serial port built in, seven additional interrupts, a watchdog timer, a power fail reset, dual data pointers, and variable speed peripheral access. In addition, the core is redesigned so that the machine cycle is shortened for most instructions, resulting in an effective processing ability that is roughly 2.5 times greater (faster) than the standard 8052 device. None of these features, other than those inherent in the device operation, were utilized, in order to maximize the similarity between the Dallas and Intel test codes.

The CULPRiT technology device is a version of the MCS-51 family compatible C8051 HDL core licensed from the Ultra Low Power (ULP) process foundry. The CULPRiT technology C8051 device is designed to operate at a supply voltage of 500 mV and includes an on-chip input/output signal level-shifting interface to conventional higher voltage parts. The CULPRiT C8051 device requires two separate supply voltages: the 500 mV core supply and the desired interface voltage. The CULPRiT C8051 is ROMless and is intended to be instruction set compatible with the MCS-51 family.

V. TEST HARDWARE

The 8051 Device Under Test (DUT) was tested as a component of a functional computer. Aside from the DUT itself, the other components of the DUT computer were removed from the immediate area of the irradiation beam.

A small card (one per DUT package type) with a unique hard-wired identifier byte contained the DUT, its crystal, and bypass capacitors (and voltage level shifters for the CULPRiT DUTs). This "DUT Board" was connected to the "Main Board" by a short 60-conductor ribbon cable. The Main Board had all other components required to complete the DUT Computer, including some which nominally are not necessary in some designs (such as external RAM, external ROM, and an address latch). The DUT Computer and the Test Control Computer were connected via a serial cable, and communications were established between the two by the Controller (which runs custom-designed serial interface software). This Controller software allowed for commanding of the DUT, downloading DUT Code to the DUT, and real-time error collection from the DUT during and after irradiation. A 1 Hz signal source provided an external watchdog timing signal to the DUT, whose watchdog output was monitored via an oscilloscope. The power supply was monitored to provide an indication of latchup.

VI. TEST SOFTWARE

The 8051 test software concept is straightforward. It was designed to be a modular series of small test programs, each exercising a specific part of the DUT. Since each test was stand-alone, they were loaded independently of each other for execution on the DUT. This ensured that only the desired portion of the 8051 DUT was exercised during the test and helped pinpoint the location of errors that occurred during testing. All test programs resided on the controller PC until loaded via the serial interface to the DUT computer.
In this way, individual tests could be modified at any time without the necessity of burning PROMs. Additional tests could also be developed and added without impacting the overall test design. The only permanent code resident on the DUT was the boot code and serial code loader routines that established communications between the controller PC and the DUT.

All test programs implemented:

• An external Universal Asynchronous Receiver and Transmitter (UART) for transmission of error information and communication to the controller computer.
• An external real-time clock for data error tagging.
• A watchdog routine designed to provide visual verification of 8051 health and to restart the test code if necessary.
• A "foul-up" routine to reset the program counter if it wanders out of code space.
• An external telemetry data storage memory to provide backup of data in the event of an interruption in data transmission.

A brief description of each of the software tests used is given below. It should be noted that for each test, the returned telemetry (including time tag) was sent to both the test controller and the telemetry memory, giving the highest reliability that all data is captured.

Interrupt – This test used 4 of the 6 available interrupt vectors (Serial, External, Timer0 Overflow, and Timer1 Overflow) to trigger routines that sequentially modified a value in the accumulator, which was periodically compared to a known value. Unexpected values were transmitted with register information.

Logic – This test performed a series of logic and math computations and provided three types of error identification: 1) addition/subtraction, 2) logic, and 3) multiplication/division. All miscompares between computations and expected results were transmitted with other relevant register information.

Memory – This test loaded internal data memory at locations D:0x20 through D:0xFF (or D:0x20 through D:0x80 for the CULPRiT DUT), indirectly, with an 0x55 pattern. Compares were performed continuously and miscompares were corrected while error information and register values were transmitted.

Program Counter – The program counter was used to continuously fetch constants at various offsets in the code. Constants were compared with known values, and miscompares were transmitted along with relevant register information.

Registers – This test loaded each of the four banks (0, 1, 2, 3) of general-purpose registers with either 0xAA (for banks 0 and 2) or 0x55 (for banks 1 and 3). The pattern was alternated in order to test the Program Status Word (PSW) special function register, which controls general-purpose register bank selection. The general-purpose register banks were then compared with their expected values. All miscompares were corrected and error information was transmitted.

Special Function Registers (SFR) – This test used learned static values of 12 out of 21 available SFRs and then constantly compared the learned value with the current one. Miscompares were reloaded with the learned value and error information was transmitted.

Stack – This test performed arithmetic by pushing and popping operands on the stack. Unexpected results were attributed to errors on the stack or in the stack pointer itself and were transmitted with relevant register information.
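As an illustration of the kind of stand-alone test described above, the following is a minimal sketch in 8051-style C (using SDCC's __idata storage class) of the internal-RAM pattern test: fill the indirectly addressed internal data memory with 0x55, then continuously compare, correct, and report. It is not the flight test code; report_error() and the address-range values stand in for the real telemetry path and DUT-specific limits.

```c
/* Sketch of an internal-RAM 0x55 pattern test, in the spirit of the Memory
 * test described above.  Not the actual test code; report_error() is a
 * hypothetical stand-in for the UART/telemetry reporting routine. */
#define PATTERN    0x55
#define RAM_START  0x20
#define RAM_END    0xFF              /* 0x80 for the smaller CULPRiT internal RAM */

extern void report_error(unsigned char addr,
                         unsigned char bad,
                         unsigned char expected);

void memory_test(void)
{
    unsigned char addr = RAM_START;

    /* Load the pattern once, using indirect (idata) addressing. */
    do {
        *(__idata unsigned char *)addr = PATTERN;
    } while (addr++ != RAM_END);

    /* Scrub forever: compare, report, and correct any miscompare. */
    for (;;) {
        addr = RAM_START;
        do {
            unsigned char v = *(__idata unsigned char *)addr;
            if (v != PATTERN) {
                report_error(addr, v, PATTERN);             /* telemetry */
                *(__idata unsigned char *)addr = PATTERN;   /* correct it */
            }
        } while (addr++ != RAM_END);
    }
}
```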
VII. TEST METHODOLOGY

The DUT Computer booted by executing the instruction code located at address 0x0000. Initially, the device at this location was an EPROM previously loaded with "Boot/Serial Loader" code. This code initialized the DUT Computer and its interface through a serial connection to the controlling computer, the "Test Controller". The DUT Computer downloaded Test Code and put it into Program Code RAM (located on the Main Board of the DUT Computer). It then activated a circuit which simultaneously performed two functions: it held the DUT reset line active for some time (~10 ms), and it remapped the Test Code residing in the Program Code RAM to address 0x0000 (the EPROM was no longer accessible in the DUT Computer's memory space). Upon waking from the reset, the DUT Computer again booted by executing the instruction code at address 0x0000, except this time that code was not the Boot/Serial Loader code but the Test Code.

The Test Control Computer always retained the ability to force the reset/remap function, regardless of the DUT Computer's functionality. Thus, if the test ran without a Single Event Functional Interrupt (SEFI), either the DUT Computer itself or the Test Controller could terminate the test and allow the post-test functions to be executed. If a SEFI occurred, the Test Controller forced a reboot into the Boot/Serial Loader code and then executed the post-test functions. During any test of the DUT, the DUT exercised a portion of its functionality (e.g., register operations, internal RAM check, or timer operations) at the highest utilization possible, while making a minimal periodic report to the Test Control Computer to convey that the DUT Computer was still functional. If this report ceased, the Test Controller knew that a SEFI had occurred. This periodic data was called "telemetry". If the DUT encountered an error that was not interrupting its functionality (e.g., a data register miscompare), it sent a more lengthy report through the serial port describing that error, and continued with the test.

VIII. DISCUSSION

A. Single Event Latchup

The main argument for why latchup is not an issue for the CULPRiT devices is that the operating voltage of 0.5 volts should be below the holding voltage required for latchup to occur. In addition to this, the cell library used also incorporates the heavy dual guard-barring scheme [4]. This scheme has been demonstrated multiple times to be very effective in rendering CMOS circuits completely immune to SEL up to test limits of 120 MeV-cm2/mg. This is true in circuits operating at 5, 3.3, and 2.5 Volts, as well as the 0.5 Volt CULPRiT circuits. In one case, a 5 Volt circuit fabricated on non-epitaxial wafers even exhibited such SEL immunity.

B. Single Event Upset

The primary structure of the storage unit used in the CULPRiT devices is the Single Event Resistant Topology (SERT) [5]. Given the SERT cell topology and a single upset node assumption, it is expected that the SERT cell will be completely immune to SEUs occurring internal to the memory cell itself. Obviously there are other things going on. The CULPRiT 8051 results reported here are quite similar to some results obtained with a CULPRiT CCSDS lossless compression chip (USES) [6]. The CULPRiT USES was synthesized using exactly the same tools and library as the CULPRiT 8051.

With the CULPRiT USES, the SEU cross section data [7] was taken as a function of frequency at two LET values, 37.6 and 58.5 MeV-cm2/mg. In both cases the data fit well to a linear model in which cross section is proportional to clock frequency. In the LET 37.6 case, the zero frequency intercept occurred essentially at the zero cross section point, indicating that virtually all of these SEUs are captured SETs from the combinational logic. The LET 58.5 data indicated that the SET (frequency dependent) component is sitting on top of a "dc-bias" component – presumably a second upset mechanism is occurring internal to the SERT cells only at a second, higher LET threshold.
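The linear model just described can be written σ(f) = σ_static + k_SET · f, where the slope captures the frequency-dependent SET contribution and the zero-frequency intercept captures any static (in-cell) component. The sketch below, which is illustrative only and carries no data from the paper, shows an ordinary least-squares fit that recovers those two components from measured (frequency, cross-section) pairs.

```c
/* Illustrative decomposition of SEU cross-section data into a static
 * component (intercept) and an SET component (slope), per the linear model
 * sigma(f) = sigma_static + k_set * f discussed above.  Generic ordinary
 * least squares; no data from the paper is embedded here. */
#include <stddef.h>

void fit_cross_section(const double *freq, const double *sigma, size_t n,
                       double *k_set, double *sigma_static)
{
    double sf = 0.0, ss = 0.0, sff = 0.0, sfs = 0.0;

    for (size_t i = 0; i < n; i++) {
        sf  += freq[i];
        ss  += sigma[i];
        sff += freq[i] * freq[i];
        sfs += freq[i] * sigma[i];
    }
    *k_set        = (n * sfs - sf * ss) / (n * sff - sf * sf);  /* slope     */
    *sigma_static = (ss - *k_set * sf) / n;                     /* intercept */
}
```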
The SET mitigation scheme used in the CULPRiT devices is based on the SERT cell's fault tolerant input property when redundant input data is provided to separate storage nodes. The idea is that the redundant input data is provided through a total duplication of the combinational logic (referred to as "dual rail design") such that a simple SET on one rail cannot produce an upset. Therefore, some other upset mechanism must be happening. It is possible that a single particle strike is placing an SET on both halves of the logic streams, allowing an SET to produce an upset. Care was taken to separate the dual sensitive nodes in the SERT cell layouts, but the automated place-and-route of the combinatorial logic paths may have placed dual sensitive nodes close enough together.

At this point, the theory for the CULPRiT SEU response is that at about an LET of 20, the energy deposition is sufficiently wide (and in the right locations) to produce an SET in both halves of the combinatorial logic streams. Increasing LET allows more regions to be sensitive to this effect, yielding a larger cross section. Further, the second SEU mechanism that starts at an LET of about 40-60 has to do with the point at which the charge collection disturbance cloud becomes large enough to effectively upset multiples of the redundant storage nodes within the SERT cell itself. In this 0.35 μm library, the node separation is several microns. However, since it takes less charge to upset a node operating at 0.5 Volts, with transistors having effective thresholds around 70 mV, this is likely the effect being observed. Also, the fact that the per-bit memory upset cross sections for the CULPRiT devices and the commercial technologies are approximately equal, as shown in Figure 9, indicates that the cell itself has become sensitive to upset.

IX. SUMMARY

A detailed comparison of the SEE sensitivity of an HBD technology (CULPRiT), utilizing the 8051 microcontroller as a test vehicle, has been completed. This paper discusses the test methodology used and presents a comparison of the commercial versus CULPRiT technologies based on the data taken. The CULPRiT devices consistently show significantly higher threshold LETs and an immunity to latchup. In all but the memory test at the highest LETs, the cross section curves for all upset events are one to two orders of magnitude lower than those of the commercial devices. Additionally, a theory is presented, based on the CULPRiT technology, that explains these results.

This paper also demonstrates the test methodology for quantifying the level of hardness designed into an HBD technology. By using the HBD technology in a real-world device structure (i.e., not just a test chip), and comparing results to equivalent commercial devices, one can have confidence in the level of hardness that would be available from that HBD technology in any circuit application.

ACKNOWLEDGEMENTS

The authors of this paper would like to acknowledge the sponsors of this work: the NASA Electronic Parts and Packaging Program (NEPP), NASA Flight Programs, and the Defense Threat Reduction Agency (DTRA).
Appendix A: Original English Text

Temperature Control Using a Microcontroller: An Interdisciplinary Undergraduate Engineering Design Project

James S. McDonald
Department of Engineering Science
Trinity University
San Antonio, TX 78212

Abstract

This paper describes an interdisciplinary design project which was done under the author's supervision by a group of four senior students in the Department of Engineering Science at Trinity University. The objective of the project was to develop a temperature control system for an air-filled chamber. The system was to allow entry of a desired chamber temperature in a prescribed range and to exhibit overshoot and steady-state temperature error of less than 1 degree Kelvin in the actual chamber temperature step response. The details of the design developed by this group of students, based on a Motorola MC68HC05 family microcontroller, are described. The pedagogical value of the problem is also discussed through a description of some of the key steps in the design process. It is shown that the solution requires broad knowledge drawn from several engineering disciplines including electrical, mechanical, and control systems engineering.

1 Introduction

The design project which is the subject of this paper originated from a real-world application. A prototype of a microscope slide dryer had been developed around an Omega™ model CN-390 temperature controller, and the objective was to develop a custom temperature control system to replace the Omega system. The motivation was that a custom controller targeted specifically for the application should be able to achieve the same functionality at a much lower cost, as the Omega system is unnecessarily versatile and equipped to handle a wide variety of applications.

The mechanical layout of the slide dryer prototype is shown in Figure 1. The main element of the dryer is a large, insulated, air-filled chamber in which microscope slides, each with a tissue sample encased in paraffin, can be set on caddies. In order that the paraffin maintain the proper consistency, the temperature in the slide chamber must be maintained at a desired (constant) temperature. A second chamber (the electronics enclosure) houses a resistive heater and the temperature controller, and a fan mounted on the end of the dryer blows air across the heater, carrying heat into the slide chamber.

This design project was carried out during academic year 1996–97 by four students under the author's supervision as a Senior Design project in the Department of Engineering Science at Trinity University. The purpose of this paper is to describe the problem and the students' solution in some detail, and to discuss some of the pedagogical opportunities offered by an interdisciplinary design project of this type. The students' own report was presented at the 1997 National Conference on Undergraduate Research [1]. Section 2 gives a more detailed statement of the problem, including performance specifications, and Section 3 describes the students' design. Section 4 makes up the bulk of the paper and discusses in some detail several aspects of the design process which offer unique pedagogical opportunities. Finally, Section 5 offers some conclusions.

2 Problem Statement

The basic idea of the project is to replace the relevant parts of the functionality of an Omega CN-390 temperature controller using a custom-designed system.
The application dictates that temperature settings are usually kept constant for long periods of time, but it's nonetheless important that step changes be tracked in a "reasonable" manner. Thus the main requirements boil down to:

· allowing a chamber temperature set-point to be entered,
· displaying both set-point and actual temperatures, and
· tracking step changes in set-point temperature with acceptable rise time, steady-state error, and overshoot.

Although not explicitly a part of the specifications in Table 1, it was clear that the customer desired digital displays of set-point and actual temperatures, and that set-point temperature entry should be digital as well (as opposed to, say, through a potentiometer setting).

3 System Design

The requirements for digital temperature displays and set-point entry alone are enough to dictate that a microcontroller-based design is likely the most appropriate. Figure 2 shows a block diagram of the students' design.

The microcontroller, a Motorola MC68HC705B16 (6805 for short), is the heart of the system. It accepts inputs from a simple four-key keypad which allow specification of the set-point temperature, and it displays both set-point and measured chamber temperatures using two-digit seven-segment LED displays controlled by a display driver. All these inputs and outputs are accommodated by parallel ports on the 6805. Chamber temperature is sensed using a pre-calibrated thermistor and input via one of the 6805's analog-to-digital inputs. Finally, a pulse-width modulation (PWM) output on the 6805 is used to drive a relay which switches line power to the resistive heater off and on.

Figure 3 shows a more detailed schematic of the electronics and their interfacing to the 6805. The keypad, a Storm 3K041103, has four keys which are interfaced to pins PA0–PA3 of Port A, configured as inputs. One key functions as a mode switch. Two modes are supported: set mode and run mode. In set mode two of the other keys are used to specify the set-point temperature: one increments it and one decrements it. The fourth key is unused at present. The LED displays are driven by a Harris Semiconductor ICM7212 display driver interfaced to pins PB0–PB6 of Port B, configured as outputs. The temperature-sensing thermistor drives, through a voltage divider, pin AN0 (one of eight analog inputs). Finally, pin PLMA (one of two PWM outputs) drives the heater relay.

Software on the 6805 implements the temperature control algorithm, maintains the temperature displays, and alters the set-point in response to keypad inputs. Because it is not complete at this writing, the software will not be discussed in detail in this paper. The control algorithm in particular has not been determined, but it is likely to be a simple proportional controller and certainly not more complex than a PID. Some control design issues will be discussed in Section 4, however.
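To make the software's role concrete, the following is a minimal sketch in C of the kind of control loop the block diagram implies: read the thermistor channel, apply a simple proportional law, drive the heater through the PWM output, and service the keypad and displays. It is not the students' firmware (which was incomplete at the time of writing); adc_read_counts(), pwm_set_duty(), keys_pressed(), display_write(), counts_to_degC(), and the gain value are hypothetical stand-ins for the real 68HC705B16 register accesses and calibration.

```c
/* Minimal sketch of the control loop implied by the design above.  This is
 * not the students' firmware; adc_read_counts(), pwm_set_duty(),
 * keys_pressed(), display_write(), counts_to_degC() and the gain value are
 * hypothetical stand-ins for the 68HC705B16 register accesses (A/D input AN0,
 * PWM output PLMA, Port A keypad, Port B display driver) and calibration. */
#include <stdint.h>

#define KP        8          /* proportional gain, PWM counts per degree (placeholder) */
#define PWM_MAX   255        /* full-scale PWM duty */
#define KEY_UP    0x01       /* "increment set-point" key (set-mode handling omitted)  */
#define KEY_DOWN  0x02       /* "decrement set-point" key                              */

extern uint8_t adc_read_counts(void);           /* thermistor channel AN0, 8-bit result */
extern void    pwm_set_duty(uint8_t duty);      /* PLMA output driving the heater relay */
extern uint8_t keys_pressed(void);              /* debounced keypad state from Port A   */
extern void    display_write(uint8_t setpt, uint8_t actual);   /* ICM7212 driver        */
extern uint8_t counts_to_degC(uint8_t counts);  /* thermistor calibration lookup        */

void control_loop(void)
{
    uint8_t setpoint = 60;                      /* arbitrary power-up set-point, deg C */

    for (;;) {
        uint8_t keys = keys_pressed();
        if (keys & KEY_UP)   setpoint++;
        if (keys & KEY_DOWN) setpoint--;

        uint8_t actual = counts_to_degC(adc_read_counts());
        int16_t drive  = (int16_t)KP * ((int16_t)setpoint - (int16_t)actual);

        if (drive < 0)       drive = 0;         /* the heater cannot cool the chamber */
        if (drive > PWM_MAX) drive = PWM_MAX;
        pwm_set_duty((uint8_t)drive);

        display_write(setpoint, actual);
    }
}
```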
4 The Design Process

Although essentially the project is just to build a thermostat, it presents many nice pedagogical opportunities. The knowledge and experience base of a senior engineering undergraduate are just enough to bring him or her to the brink of a solution to various aspects of the problem. Yet, in each case, real-world considerations complicate the situation significantly. Fortunately these complications are not insurmountable, and the result is a very beneficial design experience. The remainder of this section looks at a few aspects of the problem which present the type of learning opportunity just described. Section 4.1 discusses some of the features of a simplified mathematical model of the thermal properties of the system and how it can be easily validated experimentally. Section 4.2 describes how realistic control algorithm designs can be arrived at using introductory concepts in control design. Section 4.3 points out some important deficiencies of such a simplified modeling/control design process and how they can be overcome through simulation. Finally, Section 4.4 gives an overview of some of the microcontroller-related design issues which arise and the learning opportunities they offer.

4.1 Mathematical Model

Lumped-element thermal systems are described in almost any introductory linear control systems text, and just this sort of model is applicable to the slide dryer problem. Figure 4 shows a second-order lumped-element thermal model of the slide dryer. The state variables are the temperatures Ta of the air in the box and Tb of the box itself. The inputs to the system are the power output q(t) of the heater and the ambient temperature T∞. ma and mb are the masses of the air and the box, respectively, and Ca and Cb their specific heats. μ1 and μ2 are heat transfer coefficients from the air to the box and from the box to the external world, respectively.

It's not hard to show that the (linearized) state equations corresponding to Figure 4 are

    ma Ca (dTa/dt) = q(t) − μ1 (Ta − Tb)                         (1)
    mb Cb (dTb/dt) = μ1 (Ta − Tb) − μ2 (Tb − T∞)                 (2)

Taking Laplace transforms of (1) and (2) and solving for Ta(s), which is the output of interest, gives the following open-loop model of the thermal system:

    Ta(s) = [K (τz s + 1) Q(s) + K∞ T∞(s)] / D(s)

where K and K∞ are constants and D(s) is a second-order polynomial. K, τz, and the coefficients of D(s) are functions of the various parameters appearing in (1) and (2).

Of course the various parameters in (1) and (2) are completely unknown, but it's not hard to show that, regardless of their values, D(s) has two real zeros. Therefore the main transfer function of interest (which is the one from Q(s), since we'll assume constant ambient temperature) can be written

    Gaq(s) = Ta(s)/Q(s) = K (τz s + 1) / [(τp1 s + 1)(τp2 s + 1)]       (3)

Moreover, it's not too hard to show that 1/τp1 < 1/τz < 1/τp2, i.e., that the zero lies between the two poles. Both of these are excellent exercises for the student, and the result is the open-loop pole-zero diagram of Figure 5.

Obtaining a complete thermal model, then, is reduced to identifying the constant K and the three unknown time constants in (3). Four unknown parameters is quite a few, but simple experiments show that 1/τp1 ≪ 1/τz, 1/τp2, so that τz ≈ 0 and τp2 ≈ 0 are good approximations. Thus the open-loop system is essentially first-order and can therefore be written

    Gaq(s) = K / (τ s + 1)                                        (4)

(where the subscript p1 has been dropped).

Simple open-loop step response experiments show that, for a wide range of initial temperatures and heat inputs, K ≈ 0.14 °/W and τ ≈ 295 s.

4.2 Control System Design

Using the first-order model of (4) for the open-loop transfer function Gaq(s), and assuming for the moment that linear control of the heater power output q(t) is possible, the block diagram of Figure 6 represents the closed-loop system. Td(s) is the desired, or set-point, temperature, C(s) is the compensator transfer function, and Q(s) is the heater output in watts.

Given this simple situation, introductory linear control design tools such as the root locus method can be used to arrive at a C(s) which meets the step response requirements on rise time, steady-state error, and overshoot specified in Table 1. The upshot, of course, is that a proportional controller with sufficient gain can meet all specifications. Overshoot is impossible, and increasing the gain decreases both steady-state error and rise time.

Unfortunately, sufficient gain to meet the specifications may require larger heat outputs than the heater is capable of producing. This was indeed the case for this system, and the result is that the rise time specification cannot be met. It is quite revealing to the student how useful such an oversimplified model, carefully arrived at, can be in determining overall performance limitations.
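A short worked example makes the gain trade-off above concrete. With the plant of (4) and a proportional gain Kc, the closed-loop transfer function is Kc·K / (τs + 1 + Kc·K), so the steady-state error to a set-point step of size Δ is Δ / (1 + Kc·K), the closed-loop time constant is τ / (1 + Kc·K), and the heater command immediately after the step is Kc·Δ. The sketch below evaluates these quantities using the identified K and τ; the step size and heater power limit are illustrative assumptions, not values from the original design.

```c
/* Worked example of the proportional-control reasoning above, using the
 * identified first-order model G(s) = K/(tau*s + 1) with K = 0.14 deg/W and
 * tau = 295 s from Section 4.1.  The step size and heater power limit are
 * illustrative assumptions, not values from the original design. */
#include <stdio.h>

int main(void)
{
    const double K    = 0.14;    /* degrees per watt (identified)            */
    const double tau  = 295.0;   /* seconds (identified)                     */
    const double step = 10.0;    /* set-point step in degrees (assumed)      */
    const double qmax = 300.0;   /* heater capability in watts (assumed)     */
    double kc;

    for (kc = 10.0; kc <= 160.0; kc *= 2.0) {        /* candidate gains, W/deg */
        double loop_gain = kc * K;
        double ess  = step / (1.0 + loop_gain);      /* steady-state error     */
        double tcl  = tau / (1.0 + loop_gain);       /* closed-loop time const */
        double q0   = kc * step;                     /* heater demand at t = 0 */

        printf("Kc=%6.1f W/deg: ess=%.2f deg, tau_cl=%.0f s, q(0)=%.0f W%s\n",
               kc, ess, tcl, q0, q0 > qmax ? "  (exceeds heater!)" : "");
    }
    return 0;
}
```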
4.3 Simulation Model

Gross performance and its limitations can be determined using the simplified model of Figure 6, but there are a number of other aspects of the closed-loop system whose effects on performance are not so simply modeled. Chief among these are

· quantization error in analog-to-digital conversion of the measured temperature, and
· the use of PWM to control the heater.

Both of these are nonlinear and time-varying effects, and the only practical way to study them is through simulation (or experiment, of course).

Figure 7 shows a Simulink™ block diagram of the closed-loop system which incorporates these effects. A/D converter quantization and saturation are modeled using standard Simulink quantizer and saturation blocks. Modeling PWM is more complicated and requires a custom S-function to represent it.

This simulation model has proven particularly useful in gauging the effects of varying the basic PWM parameters and hence selecting them appropriately. (I.e., the longer the period, the larger the temperature error PWM introduces. On the other hand, a long period is desirable to avoid excessive relay "chatter," among other things.) PWM is often difficult for students to grasp, and the simulation model allows an exploration of its operation and effects which is quite revealing.
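For readers unfamiliar with the technique, the sketch below shows the basic PWM idea in C: over each fixed period the relay is on for a fraction of the ticks proportional to the commanded power, so the average heater power approximates the linear control command, while the period length trades temperature ripple against relay chatter. The names and the tick source are illustrative, not the students' implementation or the Simulink S-function.

```c
/* Minimal sketch of the PWM idea discussed above: over each fixed period the
 * heater relay is on for a fraction of the time proportional to the commanded
 * power, so the average power approximates the linear control command.
 * relay_set() is a hypothetical relay driver. */
#include <stdint.h>

#define PWM_PERIOD_TICKS  100u            /* e.g. 100 timer ticks per PWM period */

extern void relay_set(uint8_t on);        /* hypothetical relay driver           */

/* Call once per timer tick with the commanded duty in [0, PWM_PERIOD_TICKS]. */
void pwm_tick(uint8_t duty_ticks)
{
    static uint8_t phase = 0;

    relay_set(phase < duty_ticks);        /* on for the first duty_ticks ticks   */
    if (++phase >= PWM_PERIOD_TICKS)
        phase = 0;
}
```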
4.4 The Microcontroller

Simple closed-loop control, keypad reading, and display control are some of the classic applications of microcontrollers, and this project incorporates all three. It is therefore an excellent all-around exercise in microcontroller applications. In addition, because the project is to produce an actual packaged prototype, it won't do to use a simple evaluation board with the I/O pins jumpered to the target system. Instead, it's necessary to develop a complete embedded application. This entails the choice of an appropriate part from the broad range offered in a typical microcontroller family and learning to use a fairly sophisticated development environment. Finally, a custom printed-circuit board for the microcontroller and peripherals must be designed and fabricated.

Microcontroller Selection. In view of existing local expertise, the Motorola line of microcontrollers was chosen for this project. Still, this does not narrow the choice down much. A fairly disciplined study of system requirements is necessary to specify which microcontroller, out of scores of variants, is required for the job. This is difficult for students, as they generally lack the experience and intuition needed, as well as the perseverance to wade through manufacturers' selection guides.

Part of the problem is in choosing methods for interfacing the various peripherals (e.g., what kind of display driver should be used?). A study of relevant Motorola application notes [2, 3, 4] proved very helpful in understanding what basic approaches are available and what microcontroller/peripheral combinations should be considered. The MC68HC705B16 was finally chosen on the basis of its available A/D inputs and PWM outputs as well as 24 digital I/O lines. In retrospect this is probably overkill, as only one A/D channel, one PWM channel, and 11 I/O pins are actually required (see Figure 3). The decision was made to err on the safe side because a complete development system specific to the chosen part was necessary, and the project budget did not permit a second such system to be purchased should the first prove inadequate.

Microcontroller Application Development. Breadboarding of the peripheral hardware, development of microcontroller software, and final debugging and testing of a custom printed-circuit board for the microcontroller and peripherals all require a development environment of some kind. The choice of a development environment, like that of the microcontroller itself, can be bewildering and requires some faculty expertise. Motorola makes three grades of development environment, ranging from simple evaluation boards (at around $100) to full-blown real-time in-circuit emulators (at more like $7500). The middle option was chosen for this project: the MMEVS, which consists of

· a platform board (which supports all 6805-family parts),
· an emulator module (specific to B-series parts), and
· a cable and target head adapter (package-specific).

Overall, the system costs about $900 and provides, with some limitations, in-circuit emulation capability. It also comes with the simple but sufficient software development environment RAPID [5]. Students find learning to use this type of system challenging, but the experience they gain in real-world microcontroller application development greatly exceeds the typical first-course experience using simple evaluation boards.

Printed-Circuit Board. The layout of a simple (though definitely not trivial) printed-circuit board is another practical learning opportunity presented by this project. The final board layout, with package outlines, is shown (at 50% of actual size) in Figure 8. The relative simplicity of the circuit makes manual placement and routing practical – in fact, it likely gives better results than automatic routing in an application like this – and the student is therefore exposed to fundamental issues of printed-circuit layout and basic design rules. The layout software used was the very nice package pcb, and the board was fabricated in-house with the aid of our staff electronics technician.

5 Conclusion

The aim of this paper has been to describe an interdisciplinary, undergraduate engineering design project: a microcontroller-based temperature control system with digital set-point entry and set-point/actual temperature display. A particular design of such a system has been described, and a number of design issues which arise, from a variety of engineering disciplines, have been discussed. Resolution of these issues generally requires knowledge beyond that acquired in introductory courses, but realistically accessible to advanced undergraduate students, especially with the advice and supervision of faculty.

Desirable features of the problem, from a pedagogical viewpoint, include the use of a microcontroller with simple peripherals, the opportunity to usefully apply introductory-level modeling of physical systems and design of closed-loop controls, and the need for relatively simple experimentation (for model validation) and simulation (for detailed performance prediction).
Also desirable are some of the technology-related aspects of the problem, including practical use of resistive heaters and temperature sensors (requiring knowledge of PWM and calibration techniques, respectively), microcontroller selection and use of development systems, and printed-circuit design.

Acknowledgements

The author would like to acknowledge the hard work, dedication, and ability shown by the students involved in this project: Mark Langsdorf, Matt Rall, Pam Rinehart, and David Schuchmann. It is their project, and credit for its success belongs to them.

References

[1] M. Langsdorf, M. Rall, D. Schuchmann, and P. Rinehart, "Temperature control of a microscope slide dryer," in 1997 National Conference on Undergraduate Research, (Austin, TX), April 1997. Poster presentation.
[2] Motorola, Inc., Phoenix, AZ, Temperature Measurement and Display Using the MC68HC05B4 and the MC14489, 1990. Motorola Semiconductor Application Note AN431.
[3] Motorola, Inc., Phoenix, AZ, HC05 MCU LED Drive Techniques Using the MC68HC705J1A, 1995. Motorola Semiconductor Application Note AN1238.
[4] Motorola, Inc., Phoenix, AZ, HC05 MCU Keypad Decoding Techniques Using the MC68HC705J1A, 1995. Motorola Semiconductor Application Note AN1239.
[5] Motorola, Inc., Phoenix, AZ, RAPID Integrated Development Environment User's Manual, 1993. (RAPID was developed by P & E Microcomputer Systems, Inc.)

Appendix B: Chinese Translation of the English Literature

Temperature Control Using a Microcontroller: An Interdisciplinary Undergraduate Engineering Design Project
James S. McDonald, Department of Engineering Science, Trinity University, San Antonio, TX 78212

Abstract: This paper describes the design of an interdisciplinary engineering project carried out by a team of four Trinity University senior students under the author's supervision.
PLC / Microcontroller Graduation Thesis Literature Translation (Chinese-English)
Foreign Literature Translation: The Microcontroller

To prevent unauthorized access to or copying of a microcontroller's internal program, most microcontrollers provide encryption lock bits or an encryption byte to protect the on-chip program. If the lock bit is enabled (locked) during programming, the program inside the microcontroller cannot be read out directly with an ordinary programmer; this is so-called copy protection, or locking. In fact, such protective measures are very fragile and easily defeated. With the aid of special-purpose or home-made equipment, and by exploiting loopholes in the chip's design or flaws in its software, an attacker can use a variety of techniques to extract key information from the chip and obtain the program inside the microcontroller. Therefore, as an electronic-product design engineer, it is essential to understand current microcontroller attack technology, to know both yourself and your adversary, so that you can effectively prevent a product on which you have spent large amounts of money and time from being counterfeited overnight.

Microcontroller attack technology

At present there are four main techniques for attacking a microcontroller:

(1) Software attack. This technique usually works through the processor's communication interfaces and exploits the protocols, the encryption algorithms, or security holes in those algorithms. A typical example of a successful software attack is the attack on the early ATMEL AT89C series microcontrollers. The attacker exploited a loophole in the sequencing of the erase operation in this series: after erasing the encryption lock bit, the attacker stopped the next step, the erasure of the on-chip program memory data, so that the encrypted microcontroller became an unencrypted one, and then used a programmer to read out the on-chip program.

(2) Electronic detection attack. This technique usually monitors, with high temporal resolution, the analog characteristics of all power supply and interface connections of the processor during normal operation, and implements the attack by monitoring its electromagnetic radiation characteristics. Because the microcontroller is an active electronic device, its power consumption changes correspondingly as it executes different instructions. By analyzing and detecting these changes with special electronic measuring instruments and mathematical-statistical methods, the specific key information inside the microcontroller can be obtained.

(3) Fault generation attack. This technique uses abnormal working conditions to make the processor malfunction and then provides additional access for the attack. The most widely used fault generation attacks include voltage glitches and clock glitches. Low-voltage and high-voltage attacks can be used to disable the protection circuit or to force the processor to perform erroneous operations. A clock transient may reset the protection circuit without destroying the protected information. Power supply and clock transients can also affect the decoding and execution of individual instructions in some processors.
(4) Probe technology. This technique directly exposes the chip's internal wiring and then observes, manipulates, and interferes with the microcontroller to achieve the attack.

For convenience, the four attack techniques above are divided into two categories. The first is the intrusive attack (physical attack); this kind of attack requires destroying the package and then, with the aid of semiconductor test equipment, a microscope, and a micro-positioner, can take anywhere from several hours to several weeks to complete in a specialized laboratory. All microprobing techniques belong to the intrusive category. The other three methods are non-intrusive attacks: the attacked microcontroller is not physically damaged. Non-intrusive attacks are particularly dangerous in some situations, because the equipment they require can usually be home-built and upgraded, and is therefore very inexpensive.

Most non-intrusive attacks require the attacker to have good knowledge of the processor and its software. In contrast, intrusive probing attacks do not require much initial knowledge, and a single set of similar techniques can usually be applied against a wide range of products. Attacks on a microcontroller therefore often start from intrusive reverse engineering, and the accumulated experience helps to develop cheaper and faster non-intrusive attack techniques.
The general process of an intrusive attack

The first step of an intrusive attack is to remove the chip package. There are two ways to achieve this. The first is to dissolve the package completely, exposing the metal interconnect. The second is to remove only the plastic above the silicon die. The first method requires mounting the chip on a test fixture and operating it with the aid of a bonding station. The second method requires, besides a certain amount of knowledge and skill on the attacker's part, personal ingenuity and patience, but it is relatively convenient to carry out.

The plastic above the chip can be opened with a knife, and the epoxy resin around the chip can be etched away with fuming nitric acid. Hot fuming nitric acid dissolves the package without affecting the chip or its interconnect. This process is generally carried out under very dry conditions, because the presence of water could corrode the exposed aluminum wire bonds.

The chip is then cleaned in an ultrasonic bath with acetone to remove the residual nitric acid, rinsed with clean water to remove the salts, and dried. Without an ultrasonic bath, this step is generally skipped; in that case the chip surface will be a little dirty, but this does not much affect the ultraviolet-light operations on the chip.

The last step is to find the position of the protection fuse and expose it to ultraviolet light. The protection fuse is generally located by tracking the interconnect from the programming-voltage input pin, using a microscope with a magnification of at least 100x. If no microscope is available, a simple search can be carried out by exposing different parts of the chip to ultraviolet light and observing the results. During this operation an opaque slip of paper is placed over the chip to protect the program memory from being erased by the ultraviolet light. Exposing the protection fuse to ultraviolet light for 5 to 10 minutes breaks the protective function of the protection bit; afterwards the contents of the program memory can be read out directly with a simple programmer.

For microcontrollers whose EEPROM cells are covered by a protective layer, resetting the protection circuit with ultraviolet light is not feasible. For this type of microcontroller, microprobing is generally used to read the memory contents. After the package has been opened, the chip is placed under a microscope, where one can very easily find …
Foreign Literature on the Microcontroller, with Chinese Translation
The single-chip microcomputer (SCM) is an integrated circuit chip. It uses large-scale integration technology to put a CPU with data processing capability, random-access memory (RAM), read-only memory (ROM), a variety of I/O ports, an interrupt system, and timers/counters (possibly also including display driver circuitry, pulse-width modulation circuitry, analog multiplexers, and A/D converter circuits) onto a single piece of silicon, forming a small but complete computer system. The SCM is also known as a microcontroller, because it was first used in the field of industrial control.
The single-chip microcomputer evolved from dedicated processors built around a single CPU chip. The earliest design idea was to put a large number of peripherals and the CPU on one chip, making the computer system smaller and easier to integrate into complex control devices with strict volume constraints. The Z80 was the first processor designed according to this idea; from then on, the development of microcontrollers and of dedicated processors parted ways.
Foreign-literature translation for MCU design (Chinese and English)
Appendix A  Foreign-literature translation: AT89S52/AT89S51 technical manual (AT89S52 translation)

Main features:
· Compatible with MCS-51 microcontroller products
· 8K bytes of in-system programmable Flash memory
· 1,000 write/erase cycles
· Fully static operation: 0 Hz to 33 MHz
· Three-level program memory lock
· 32 programmable I/O lines
· Three 16-bit timer/counters
· Eight interrupt sources
· Full-duplex UART serial channel
· Low-power idle and power-down modes
· Interrupt recovery from power-down
· Watchdog timer
· Dual data pointer
· Power-off flag

Functional description
The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of in-system programmable Flash memory. It is manufactured using Atmel's high-density non-volatile memory technology and is fully compatible with the instruction set and pinout of the industry-standard 80C51. The on-chip Flash allows the program memory to be reprogrammed in-system, or by a conventional programmer. By combining a versatile 8-bit CPU with in-system programmable Flash on a single chip, the AT89S52 provides a highly flexible and cost-effective solution for a great many embedded control applications.

The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, a watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full-duplex serial port, and an on-chip oscillator and clock circuitry. In addition, the AT89S52 is designed with static logic for operation down to 0 Hz and supports two software-selectable power-saving modes. In idle mode the CPU stops working while the RAM, timer/counters, serial port and interrupt system continue to function. In power-down mode the RAM contents are preserved, the oscillator is frozen and all other chip functions stop until the next interrupt or a hardware reset.
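A minimal sketch of how the two power-saving modes just described are entered from C, assuming the Keil C51 toolchain and its <reg52.h> register definitions (on the AT89S52, PCON bit 0 is IDL and bit 1 is PD; other compilers spell the SFR declarations differently).

```c
/* Minimal sketch: entering the AT89S52 power-saving modes from C.
 * Assumes the Keil C51 toolchain and <reg52.h>; PCON is not
 * bit-addressable, so the mode bits are set with a byte-wide OR. */
#include <reg52.h>

void enter_idle(void)
{
    EX0 = 1;          /* enable at least one interrupt source ...          */
    EA  = 1;          /* ... so something can wake the CPU up again        */
    PCON |= 0x01;     /* IDL: CPU halts; RAM, timers, UART, interrupts run */
    /* execution continues here after the waking interrupt is serviced    */
}

void enter_power_down(void)
{
    PCON |= 0x02;     /* PD: oscillator frozen, RAM preserved;
                         exit via an enabled external interrupt or reset   */
}
```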
Pin configuration and block diagram

VCC: supply voltage.
GND: ground.
Port 0 (P0): Port 0 is an 8-bit open-drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to Port 0, the pins can be used as high-impedance inputs. When external program and data memory are accessed, Port 0 is also multiplexed as the low-order address/data bus; in this mode it uses internal pull-ups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification; external pull-ups are required during program verification.
Port 1 (P1): Port 1 is an 8-bit bidirectional I/O port with internal pull-ups. The Port 1 output buffers can sink or source four TTL inputs.
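A short sketch of the port behaviour described above, again assuming Keil C51 and <reg52.h>; the pin assignments (a key on P1.0, an LED on P1.1) are hypothetical. Writing 1s to the open-drain P0 releases its pins so they can be read as inputs, provided external pull-ups are fitted, whereas P1 relies on its internal pull-ups.

```c
/* Port-usage sketch for the AT89S52 (Keil C51, <reg52.h>).
 * Pin choices below are illustrative only. */
#include <reg52.h>

sbit KEY = P1^0;          /* quasi-bidirectional pin with internal pull-up */
sbit LED = P1^1;

unsigned char port_demo(void)
{
    unsigned char value;

    P0 = 0xFF;            /* write 1s: open-drain P0 floats, usable as input
                             (external pull-ups needed for it to read high) */
    value = P0;           /* read the external levels on Port 0             */

    KEY = 1;              /* release the P1.0 driver before reading the key */
    LED = (KEY == 0);     /* light the LED while the active-low key is held */

    return value;
}
```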
Foreign-literature translation: The Development and Application of the Microcontroller Unit (MCU)
The single-chip microcomputer (microcontroller unit, MCU) is a computer on a chip. Using large-scale integration, it places a microprocessor (CPU) with data-processing capability, program memory (ROM), data memory (RAM) and input/output interface circuits (I/O) on one chip, so that the chip by itself constitutes a complete small computer hardware system. Under the control of a program stored in advance, such a system completes its assigned task accurately, quickly and efficiently; in other words, a microcontroller has all the functions of a computer on a single chip.

A bare microprocessor (CPU) chip, by contrast, can do nothing useful on its own, whereas a microcontroller can independently perform the intelligent control functions required in modern industrial control. This is the microcontroller's most important characteristic. The microcontroller also differs from a single-board microcomputer, in which a microprocessor chip, memory chips and input/output interface chips are assembled on one printed circuit board. Before application development a microcontroller is only a powerful VLSI chip; after application development it becomes a small microcomputer control system, yet it remains essentially different from a single-board computer or a personal computer (PC).

Microcontrollers are applied at the chip level. The user (whether a learner or a practising engineer) must understand the structure and instruction system of the chip, and apply the theory and techniques of integrated electronics and system design to that particular chip, so as to give it a particular application function.

Different microcontrollers have different hardware and software characteristics, that is, different technical features. The hardware characteristics depend on the internal structure of the chip; to use a particular microcontroller, the user must know whether that product provides the facilities and meets the performance indicators that the application requires. The technical features include functional characteristics, control characteristics and electrical characteristics, and this information is given in the manufacturer's technical manuals. The software characteristics refer to the instruction system and the development-support environment: the quality of the instruction set for data processing and logical processing, the output characteristics, the requirements placed on the power supply, and so on. The development-support environment includes the compatibility and portability of the instruction set and the supporting resources (the software and hardware resources that support development and application).

To develop an application system around a particular microcontroller model, one must therefore learn its structural features and technical characteristics.

Microcontrollers replace the complicated analogue or digital circuits of traditional control systems, implementing control in software and making the controlled system intelligent. Communications products, household appliances, instruments and meters, process control and dedicated control devices are the sectors in which microcontrollers are most widely applied, as the sketch after this section illustrates. The significance of the microcontroller is not limited to the breadth of its applications or the economic benefits it brings; more important is that it has fundamentally changed the traditional methods and ideas of control-system design. This change in control technique is a revolution and an important milestone.

It can be said that the microcontroller world is now in a period in which "a hundred schools of thought contend". Chip companies all over the world have brought out their own microcontrollers, from 8-bit through 16-bit to 32-bit, with the C51-compatible series as the mainstream. Many of them are not compatible with one another, but they complement each other and together provide a broad field for microcontroller applications.

Looking over the development of the microcontroller, the main trends are the following.

1. Low-power CMOS. The 8031 of the MCS-51 series had a power consumption of about 630 mW, whereas a present-day microcontroller generally consumes around 100 mW. To meet the demand for ever lower power consumption, virtually all microcontrollers now use CMOS (complementary metal-oxide-semiconductor) processes; the 80C51, for example, uses HMOS (high-density metal-oxide-semiconductor) and CHMOS (complementary high-density metal-oxide-semiconductor) technology. Plain CMOS has low power consumption, but because of its physical characteristics its operating speed is not high enough, whereas CHMOS combines high speed with low power consumption. These features suit it well to battery-powered applications where low power consumption is essential, so this process route will remain the main direction of microcontroller development for some time to come.
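To make the point about replacing hard-wired logic with software concrete, here is a minimal 8051 C sketch in which a periodic control action that would otherwise need a chain of counters and gates is produced by a single timer interrupt. It assumes the Keil C51 toolchain with its <reg52.h> header and a 12 MHz crystal (one machine cycle = 1 us); the output pin P1.0 is a hypothetical choice.

```c
/* Illustrative sketch of "logic in software": a 1 s periodic control output
 * generated from a 50 ms Timer 0 interrupt. Assumes Keil C51, <reg52.h>
 * and a 12 MHz crystal; the controlled pin is hypothetical. */
#include <reg52.h>

sbit CTRL_OUT = P1^0;
static unsigned int ticks = 0;

void timer0_isr(void) interrupt 1         /* Timer 0 overflow vector        */
{
    TH0 = (65536U - 50000U) >> 8;         /* reload for the next 50 ms      */
    TL0 = (65536U - 50000U) & 0xFF;
    if (++ticks >= 20) {                  /* 20 x 50 ms = 1 s               */
        ticks = 0;
        CTRL_OUT = !CTRL_OUT;             /* toggle the controlled output   */
    }
}

void main(void)
{
    TMOD = (TMOD & 0xF0) | 0x01;          /* Timer 0, mode 1 (16-bit)       */
    TH0 = (65536U - 50000U) >> 8;
    TL0 = (65536U - 50000U) & 0xFF;
    ET0 = 1;                              /* enable Timer 0 interrupt       */
    EA  = 1;                              /* global interrupt enable        */
    TR0 = 1;                              /* start the timer                */
    while (1) { /* main loop free for other control work */ }
}
```

Changing the behaviour then means changing the program rather than the printed circuit board, which is exactly the design shift described above.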
MCU: English reference literature and translation
Tianjin University of Science and Technology, foreign-literature translation for graduates. Name:   College: College of Electronic Information and Automation   Major: Measurement and Control Technology and Instruments

I. English original

Progress in Computers
Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2009
Maurice Wilkes

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful.
The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry ‘going the other way’. As time goes on people get more for their money, not less.Research in Computer Hardware.The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discreet transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.The RISC Movement and Its AftermathEarly computers had simple instruction sets. As time went on designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer’s intuition.In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instructions Set Computer.Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately born out in practice.Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of, computers available to do their simulations, not just one. 
They refer to such a roomful by the attractive name of computer farm.The x86 Instruction SetLittle is now heard of pre-RISC instruction sets with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.This dominance of x86 disappoints people like myself who come from the research wings.both academic and industrial.of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research oriented people would like to think otherwise. high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel’s major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious, on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson’s books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.The IA-64 instruction set.Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.Moreover, built into the design of IA-64 is a feature known as predicationwhich makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.Chips for the latest IA-64 processor, namely, the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. 
Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's Law (or Sod's Law, as it is usually called in the UK). Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949.
However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem: a cyclist sets out on a circular cycling tour; derive an equation giving the direction of the wind at any time.

The single-chip computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor except for the caches could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time the high density CMOS silicon chip was cock of the roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution, and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91 (the giant computer at the top of the System 360 range) are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However, about 1980 significant and far-reaching changes took place in the laws. The concept of pre-competitive research was introduced.
Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.The agent by which the pre-competitive research in the semi-conductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.Every two years SIA produces a new version of a document known as the International Technological Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title ‘Roadmap’ was issued in 1994 but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail.over a 15 year horizon. the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months.that is, if Moore’s law is to be maintained.-and if the cost per chip is to fall.In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that seriousproblems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004 and in the event the industry got there a little sooner.I presented the above information from the 1994 Roadmap, along with such other information that I could obtain, in a lecture to the IEE in London, entitled The CMOS end-point and related topics in Computing and delivered on 8 February 1996.The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster. 
In fact the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that "the continuation of progress at the present rate will be at risk as we approach 2005, when the roadmap projects that progress will stall without research breakthroughs in most technical areas". This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming above, it radiates immense confidence. Prevailing opinion reflects that confidence and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.
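A back-of-the-envelope illustration (not part of the lecture) of the wafer-size and unit-cost argument made earlier, using the standard first-order dies-per-wafer approximation; the wafer and die sizes below are purely illustrative and ignore yield and edge exclusion.

```c
/* First-order dies-per-wafer estimate:
 *   dies ~= pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
 * where d is the wafer diameter and A the die area. Plain host-side C. */
#include <math.h>
#include <stdio.h>

static double dies_per_wafer(double diameter_mm, double die_area_mm2)
{
    const double pi = 3.14159265358979;
    double r = diameter_mm / 2.0;
    return pi * r * r / die_area_mm2
         - pi * diameter_mm / sqrt(2.0 * die_area_mm2);
}

int main(void)
{
    /* shrinking a 100 mm^2 die to 50 mm^2 and moving from 200 mm to
       300 mm wafers multiplies the gross die count several times over */
    printf("200 mm wafer, 100 mm2 die: about %.0f gross dies\n",
           dies_per_wafer(200.0, 100.0));
    printf("300 mm wafer,  50 mm2 die: about %.0f gross dies\n",
           dies_per_wafer(300.0, 50.0));
    return 0;
}
```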
English reference literature and translation for an MCU graduation project
Structure and Function of the MCS-51 Series

MCS-51 is the name of a series of single-chip microcomputers produced by Intel. The company introduced its 8-bit MCS-48 single-chip microcomputers in 1976 and followed them with the higher-grade 8-bit MCS-51 series in 1980. The series includes many chips, such as the 8051, 8031, 8751, 80C51BH and 80C31BH; their basic composition, basic performance and instruction system are all the same, and the 8051 is usually taken as the representative of the 51-series single-chip microcomputers.

A single-chip microcomputer system is made up of the following parts:
(1) An 8-bit microprocessor (CPU).
(2) On-chip data memory, RAM (128 B/256 B), used to store read/write data that need not be kept permanently, such as intermediate results of operations, final results, and data to be displayed.
(3) Program memory, ROM/EPROM (4 KB/8 KB), used to hold the program and some initial data and tables on chip. Some single-chip microcomputers, however, contain no internal ROM/EPROM at all (for example the 8031, 8032, 80C…).
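A minimal sketch of how these on-chip memory areas are typically addressed from C. The data/idata/code memory-type qualifiers are Keil C51 extensions (SDCC spells them __data/__idata/__code), the variable names are purely illustrative, and idata covers the full 256-byte internal RAM of the 8052-class parts.

```c
/* Sketch of the on-chip memory areas seen from C (Keil C51, <reg52.h>). */
#include <reg52.h>

unsigned char data  last_result;                /* lower 128 bytes of RAM  */
unsigned char idata buffer[32];                 /* full 256-byte RAM space */
unsigned char code  sine_table[4] = {0, 90, 180, 255};  /* ROM constants   */

unsigned char lookup(unsigned char i)
{
    last_result = sine_table[i & 3];            /* ROM constant into RAM   */
    buffer[0]   = last_result;
    return last_result;
}
```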
MCU English literature and translation
Principle of the MCU

A single-chip microcomputer integrates a complete computer system onto one chip. Even though all of its features are squeezed into a small chip, it contains most of the components a computer needs: a CPU, memory, internal and external bus systems, and usually the core peripherals as well, such as integrated communication interfaces, timers and a real-time clock. The most powerful single-chip systems today can even integrate speech and image processing, networking, and complex input/output subsystems on a single chip.

The single-chip microcomputer is also known as a microcontroller (MCU), because it was first used in the field of industrial control. It developed from dedicated processors that contained only a CPU on the chip. The earliest design idea was to integrate a large number of peripherals and the CPU on one chip, making the computer system smaller and easier to build into control equipment that is complex and has strict volume requirements. The Z80 was one of the first processors designed along these lines; from then on, the development of microcontrollers and that of dedicated processors parted ways.

Early single-chip microcomputers were all 8-bit or 4-bit devices. One of the most successful was Intel's 8031, which won wide praise for its simplicity, reliability and good performance. The MCS-51 single-chip family was then developed on the basis of the 8031, and systems based on it are still in wide use today. As the requirements of industrial control increased, 16-bit single-chip microcomputers appeared, but because their price was not attractive they never came into very wide use. In the 1990s, with the great development of consumer electronics, single-chip technology improved enormously. With the Intel i960 series and, in particular, the later and widely applied ARM architecture, 32-bit devices quickly displaced 16-bit ones from the high end and entered the mainstream market. The performance of traditional 8-bit devices has also risen rapidly, with processing power several hundred times that of the 1980s. At present, high-end 32-bit microcontrollers run at clock rates above 300 MHz, close on the heels of mid-1990s dedicated processors, while ordinary models have dropped to about one US dollar and even the most high-end models cost only about ten US dollars. Contemporary single-chip systems are no longer developed and used only in a bare-metal environment; dedicated embedded operating systems are widely used across the whole range of microcontrollers, and the high-end parts that serve as the core processors of PDAs and mobile phones can even run dedicated Windows and Linux operating systems directly.

Microcontrollers are better suited to embedded systems than dedicated processors, so that is where they have found the most application. In fact, microcontrollers are the most numerous computers in the world. Almost every electronic or mechanical product used in modern life has a microcontroller integrated in it. Mobile phones, telephones, calculators, home appliances, electronic toys, handheld computers and computer accessories such as the mouse all contain one or two microcontrollers, and a personal computer also has a good number of microcontrollers at work inside it. An ordinary car is equipped with more than forty microcontrollers, and in complex industrial control systems hundreds of microcontrollers may even be working at the same time!
The number of microcontrollers in use not only far exceeds that of PCs and all other computers combined; it even exceeds the number of human beings.

Hardware introduction

The 8051 family of microcontrollers is based on an architecture which is highly optimized for embedded control systems. It is used in a wide variety of applications from military equipment to automobiles to the keyboard on your PC. Second only to the Motorola 68HC11 in eight-bit processor sales, the 8051 family of microcontrollers is available in a wide array of variations from manufacturers such as Intel, Philips, and Siemens. These manufacturers have added numerous features and peripherals to the 8051, such as I2C interfaces, analog to digital converters, watchdog timers, and pulse width modulated outputs. Variations of the 8051 with clock speeds up to 40 MHz and voltage requirements down to 1.5 volts are available. This wide range of parts based on one core makes the 8051 family an excellent choice as the base architecture for a company's entire line of products, since it can perform many functions and developers will only have to learn this one platform.

The basic architecture consists of the following features:
· an eight-bit ALU
· 32 discrete I/O pins (4 groups of 8) which can be individually accessed
· two 16-bit timer/counters
· a full duplex UART
· 6 interrupt sources with 2 priority levels
· 128 bytes of on-board RAM
· separate 64K byte address spaces for DATA and CODE memory

One 8051 processor cycle consists of twelve oscillator periods. Each of the twelve oscillator periods is used for a special function by the 8051 core, such as op-code fetches and samples of the interrupt daisy chain for pending interrupts. The time required for any 8051 instruction can be computed by dividing the clock frequency by 12, inverting that result and multiplying it by the number of processor cycles required by the instruction in question. Therefore, if you have a system which is using an 11.059 MHz clock, you can compute the number of instructions per second by dividing this value by 12. This gives an instruction frequency of 921583 instructions per second. Inverting this will provide the amount of time taken by each instruction cycle (1.085 microseconds).

Chinese translation (excerpt): The single-chip microcomputer is a complete computer system integrated onto a single chip.
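A small host-side C program, included here as an illustration, reproduces the instruction-timing arithmetic given above (12 oscillator periods per machine cycle at 11.059 MHz); the 4-cycle MUL AB line is an added example and is not taken from the text itself.

```c
/* Host-side check of the 8051 timing arithmetic: one machine cycle is
 * 12 oscillator periods, so an 11.059 MHz crystal gives roughly 921,583
 * machine cycles per second, i.e. about 1.085 us per single-cycle
 * instruction. Plain portable C. */
#include <stdio.h>

int main(void)
{
    const double fosc = 11.059e6;                 /* crystal frequency, Hz  */
    const double cycles_per_second = fosc / 12.0; /* machine cycles/second  */
    const double cycle_time_us = 1e6 / cycles_per_second;

    printf("machine cycles per second : %.0f\n", cycles_per_second);
    printf("one machine cycle         : %.3f us\n", cycle_time_us);
    printf("a 4-cycle MUL AB therefore: %.3f us\n", 4.0 * cycle_time_us);
    return 0;
}
```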
Foreign literature on the MCU with Chinese translation
A single-chip microcomputer (SCM) is an integrated circuit chip. Using large-scale integration technology, it integrates a CPU with data-processing capability, random-access memory (RAM), read-only memory (ROM), a variety of I/O ports, an interrupt system, and timer/counter functions (possibly also display driver circuits, pulse-width-modulation circuits, analog multiplexers and A/D converter circuits) onto a single piece of silicon, forming a small but complete computer system.

The SCM is also known as a microcontroller, because it was first used in industrial control. It developed from dedicated processors that contained only a CPU on the chip. The original design idea was to put a large number of peripherals and the CPU on one chip, so that the computer system would be smaller and easier to integrate into control equipment that is complex and has strict volume constraints. The Z80 was among the first processors designed along these lines, and from then on the development of microcontrollers and that of dedicated processors went separate ways.

Early microcontrollers were 8-bit or 4-bit devices. One of the most successful was Intel's 8031, which was widely praised for being simple, reliable and capable; the MCS-51 microcontroller family was developed from the 8031, and systems based on it are still widely used today. As the demands of industrial control grew, 16-bit microcontrollers appeared, but they never became very popular because their cost was not attractive. After the 1990s, with the rapid development of consumer electronics, microcontroller technology advanced enormously. With the Intel i960 series and especially the later, widely used ARM architecture, 32-bit microcontrollers quickly took over the high-end position of 16-bit parts and entered the mainstream market. The performance of traditional 8-bit microcontrollers has also improved rapidly, with processing power hundreds of times higher than in the 1980s. At present, high-end 32-bit microcontrollers are clocked above 300 MHz, approaching the performance of mid-1990s dedicated processors, while ordinary models have fallen to about one US dollar and even the most high-end models cost only about ten dollars. Modern microcontroller systems are no longer developed and used only on bare metal: proprietary embedded operating systems are widely used across the whole range of microcontrollers, and the high-end parts at the heart of handheld computers and mobile phones can even run dedicated versions of Windows and Linux.

Microcontrollers are better suited to embedded systems than dedicated processors, and that is where they are most widely applied. In fact, microcontrollers are the most numerous computers in the world: almost every electronic or mechanical product in modern life has one integrated into it. Mobile phones, telephones, calculators, household appliances, electronic toys, handheld computers and computer accessories such as the mouse each contain one or two microcontrollers, and a personal computer has a good number of them at work inside it. An ordinary car carries more than forty microcontrollers, and a complex industrial control system may have hundreds working at the same time. The number of microcontrollers not only far exceeds the number of PCs and all other computers combined; it even exceeds the number of human beings.

The single-chip microcomputer, also called the single-chip microcontroller, is not a chip that performs a single logic function; it is a complete computer system integrated onto one chip, equivalent to a microcomputer that lacks only the I/O devices. Put simply, a chip has become a computer. Its small size, light weight and low price make it convenient for study, application and development, and learning to use a microcontroller is also the best way to understand the principles and structure of a computer.

Internally, the microcontroller has modules with functions similar to those of a computer, such as a CPU, memory, a parallel bus, and storage elements that play the same role as a hard disk. The difference is that the performance of these parts is much weaker than in a home computer, but the price is also low, generally no more than about ten yuan, which is quite enough for a class of control jobs that are not very complicated. The fully automatic washing machine, the range hood, the VCD player and similar appliances we use today all contain one; it mainly serves as the core of the control section.

It is an on-line, real-time control computer. "On-line" means control in the field, which requires strong immunity to interference and low cost; this is the main difference from off-line computers such as the home PC.

A microcontroller works by running a program, and the program can be modified. Different programs realise different functions, including some special functions that other devices could achieve only with great effort, and some that they could hardly achieve at all. A function of modest complexity, if implemented purely in hardware with 74-series or CD4000-series logic, would require a large printed circuit board; but if it is implemented with one of the microcontrollers that came onto the market in the 1970s, the result is completely different, because a program written for the microcontroller can deliver high intelligence, high efficiency and high reliability.

Because the microcontroller is cost-sensitive, the dominant software is still the lowest-level assembly language, the lowest level above binary machine code. Why use such a primitive language when visual high-level programming environments exist? The reason is simple: the microcontroller has neither a CPU like that of a home computer nor a mass-storage device like a hard disk. A visual high-level-language program containing nothing more than a single button can reach tens of kilobytes in size, which is nothing for a PC hard disk but unacceptable for a microcontroller. Because the microcontroller must use its hardware resources with very high efficiency, assembly language, primitive as it is, is still used on a large scale. For the same reason, if the operating system and applications of a mainframe were moved onto a home PC, the PC could not bear them either.

It can be said that the twentieth century spanned three "electric" eras: the electrical age, the electronic age, and now the computer age. However, "computer" here usually refers to the personal computer, the PC, which is composed of a host unit, a keyboard, a monitor and so on. There is another kind of computer that most people are much less familiar with: the single-chip microcontroller that endows all kinds of machinery with intelligence. As its name suggests, this smallest of computer systems uses only a single integrated circuit to perform simple calculation and control. Because it is small, it usually hides inside the "belly" of the machine it controls. Within the whole installation it plays a role like that of the human brain: if it goes wrong, the whole installation is paralysed. Such microcontrollers are now used in a very wide range of fields, such as intelligent instruments, real-time industrial control, communications equipment, navigation systems and household appliances. Once a product adopts a microcontroller, the effect is to raise the product to a new generation, and the adjective "intelligent" is often added in front of the product's name, as in the intelligent washing machine. Today, when technicians in some factories or amateur electronics developers build a product, either the circuit is too complicated, or the function is too simple and easily copied. The reason is probably that the product is stuck at the stage of not using a microcontroller or another programmable logic device.

Translation of the foreign literature (excerpt): A single-chip microcomputer is an integrated circuit chip that uses very-large-scale integration technology to integrate a CPU with data-processing capability, random-access memory (RAM), read-only memory (ROM), a variety of I/O ports, an interrupt system, timers/counters and other functions (possibly also including display driver circuits, pulse-width-modulation circuits, analog multiplexers and A/D converter circuits) onto one piece of silicon, forming a small but complete computer system.
MCU English literature and translation
Validation and Testing of Design Hardening for Single Event Effects Using the 8051 MicrocontrollerAbstractWith the dearth of dedicated radiation hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we will discuss the implications of validating these methods for the single event effects (SEE) in the space environment. Topics include the types of tests that are required and the design coverage (i.e., design libraries: do they need validating for each application?). Finally, an 8051 microcontroller core from NASA Institute of Advanced Microelectronics (IAμE) CMOS Ultra Low Power Radiation Tolerant (CULPRiT) design is evaluated for SEE mitigative techniques against two commercial 8051 devices.Index TermsSingle Event Effects, Hardened-By-Design, microcontroller, radiation effects.I. INTRODUCTIONNASA constantly strives to provide the best capture of science while operating in a space radiation environment using a minimum of resources [1,2]. With a relatively limited selection of radiation-hardened microelectronic devices that are often two or more generations of performance behind commercialstate-ofthe-art technologies, NASA’s performance of this task is quite challenging. One method of alleviating this is by the use of commercial foundry alternatives with no or minimally invasive design techniques for hardening. This is often called hardened-by-design (HBD).Building custom-type HBD devices using design libraries and automated design tools may provide NASA the solution it needs to meet stringent science performance specifications in a timely,cost-effective, and reliable manner.However, one question still exists: traditional radiation-hardened devices have lot and/or wafer radiation qualification tests performed; what types of tests are required for HBD validation?II. TESTING HBD DEVICES CONSIDERATIONSTest methodologies in the United States exist to qualify individual devices through standards and organizations such as ASTM, JEDEC, and MIL-STD- 883. Typically, TID (Co-60) and SEE (heavy ion and/or proton) are required for device validation. So what is unique to HBD devices?As opposed to a “regular” commercial-off-the-shelf (COTS) device or application specific integrated circuit (ASIC) where no hardening has been performed, one needs to determine how validated is the design library as opposed to determining the device hardness. That is, by using test chips, can we “qualify” a future device using the same library?Consider if Vendor A has designed a new HBD library portable to foundries B and C. A test chip is designed, tested, and deemed acceptable. Nine months later a NASA flight project enters the mix by designing a new device using Vendor A’s library. Does this device require complete radiation qualification testing? To answer this, other questions must be asked.How complete was the test chip? Was there sufficient statistical coverage of all library elements to validate each cell? If the new NASA design uses a partially or insufficiently characterized portion of the design library, full testing might be required. Of course, if part of the HBD was relying on inherent radiation hardness of a process, some of the tests (like SEL in the earlier example) may be waived.Other considerations include speed of operation and operating voltage. For example, if the test chip was tested statically for SEE at a power supply voltage of 3.3V, is the data applicable to a 100 MHz operating frequency at 2.5V? 
Dynamic considerations (i.e., nonstatic operation) include the propagated effects of Single Event Transients (SETs). These can be a greater concern at higher frequencies.The point of the considerations is that the design library must be known, the coverage used during testing is known, the test application must be thoroughly understood and the characteristics of the foundry must be known. If all these are applicable or have been validated by the test chip, then no testing may be necessary. A task within NASA’s Electronic Parts and Packaging (NEPP) Program was performed to explore these types of considerations.III. HBD TECHNOLOGY EVALUATION USING THE 8051 MICROCONTROLLERWith their increasing capabilities and lower power consumption, microcontrollers are increasingly being used in NASA and DOD system designs. There are existing NASA and DoD programs that are doing technology development to provide HBD. Microcontrollers are one such vehicle that is being investigated to quantify the radiation hardness improvement. Examples of these programs are the 8051 microcontroller being developed by Mission Research Corporation (MRC) and the IAμE (the focus of this study). As these HBD technologies become available, validation of the technology, in the natural space radiation environment, for NASA’s use in spaceflight systems is required.The 8051 microcontroller is an industry standard architecture that has broad acceptance, wide-ranging applications and development tools available. There are numerous commercial vendors that supply this controller or have it integrated into some type of system-on-a-chip structure. Both MRC and IAμE chose this device to demonstrate two distinctly different technologies for hardening. The MRC example of this is to use temporal latches that require specific timing to ensure that single event effects are minimized. The IAμE technology uses ultra low power, and layout and architecture HBD design rules to achieve their results. These are fundamentally different than the approach by Aeroflex-United Technologies Microelectronics Center (UTMC), the commercial vendor of a radiation–hardened 8051, that built their 8051 microcontroller using radiationhardened processes. This broad range of technology within one device structure makes the 8051an ideal vehicle for performing this technology evaluation.The objective of this work is the technology evaluation of the CULPRiT process [3] from IAμE. The process has been baselined against two other processes, the standard 8051 commercial device from Intel and a version using state-of-the-art processing from Dallas Semiconductor. By performing this side-by-side comparison, the cost benefit, performance, and reliability trade study can be done.In the performance of the technology evaluation, this task developed hardware and software for testing microcontrollers. A thorough process was done to optimize the test process to obtain as complete an evaluation as possible. This included taking advantage of the available hardware and writing software that exercised the microcontroller such that all substructures of the processor were evaluated. This process is also leading to a more complete understanding of how to test complex structures, such as microcontrollers, and how to more efficiently test these structures in the future.IV. TEST DEVICESThree devices were used in this test evaluation. The first is the NASA CULPRiT device, which is the primary device to be evaluated. 
The other two devices are two versions of a commercial 8051, manufactured by Intel and Dallas Semiconductor, respectively.The Intel devices are the ROMless, CMOS version of the classic 8052 MCS-51 microcontroller. They are rated for operation at +5V, over a temperature range of 0 to 70 °C and at a clock speeds of 3.5 MHz to 24 MHz. They are manufactured in Intel’s P629.0 CHMOS III-E process.The Dallas Semiconductor devices are similar in that they are ROMless 8052 microcontrollers, but they are enhanced in various ways. They are rated for operation from 4.25 to 5.5 Volts over 0 to 70 °C at clock speeds up to 25 MHz. They have a second full serial port built in, seven additional interrupts, a watchdog timer, a power fail reset, dual data pointers and variable speed peripheral access. In addition, the core is redesigned so that the machine cycle is shortened for most instructions, resulting in an effective processing ability that is roughly 2.5 times greater (faster) than the standard 8052 device. None of these features, other than those inherent in the device operation, were utilized in order to maximize the similarity between the Dallas and Intel test codes.The CULPRiT technology device is a version of the MSC-51 family compatible C8051 HDL core licensed from the Ultra Low Power (ULP) process foundry. The CULPRiT technology C8051 device is designed to operate at a supply voltage of 500 mV and includes an on-chip input/output signal level-shifting interface with conventional higher voltage parts. The CULPRiT C8051 device requires two separate supply voltages; the 500 mV and the desired interface voltage. The CULPRiT C8051 is ROMless and is intended to be instruction set compatible with the MSC-51 family.V. TEST HARDWAREThe 8051 Device Under Test (DUT) was tested as a component of a functional computer. Aside from DUT itself, the other componentsof the DUT computer were removed from the immediate area of the irradiation beam.A small card (one per DUT package type) with a unique hard-wired identifier byte contained the DUT, its crystal, and bypass capacitors (and voltage level shifters for the CULPRiT DUTs). This "DUT Board" was connected to the "Main Board" by a short 60-conductor ribbon cable. The Main Board had all other components required to complete the DUT Computer, including some which nominally are not necessary in some designs (such as external RAM, external ROM and address latch). The DUT Computer and the Test Control Computer were connected via a serial cable and communications were established between the two by the Controller (that runs custom designed serial interface software). This Controller software allowed for commanding of the DUT, downloading DUT Code to the DUT, and real-time error collection from the DUT during and post irradiation. A 1 Hz signal source provided an external watchdog timing signal to the DUT, whose watchdog output was monitored via an oscilloscope. The power supply was monitored to provide indication of latchup.VI. TEST SOFTWAREThe 8051 test software concept is straightforward. It was designed to be a modular series of small test programs each exercising a specific part of the DUT. Since each test was stand alone, they were loaded independently of each other for execution on the DUT. This ensured that only the desired portion of the 8051 DUT was exercised during the test and helped pinpoint location of errors that occur during testing. All test programs resided on the controller PC until loaded via the serial interface to the DUT computer. 
In this way, individual tests could have been modified at any time without the necessity of burning PROMs. Additional tests could have also been developed and added without impacting the overall test design. The only permanent code, which was resident on the DUT, was the boot code and serial code loader routines that established communications between the controller PC and the DUT.All test programs implemented:• An external Universal Asynchronous Receive and Transmit device (UART) for transmission of error information and communication to controller computer.• An external real-time clock for data error tag.•A watchdog routine designed to provide visual verification of 8051 health and restart test code if necessary.• A "foul-up" routine to reset program counter if it wanders out of code space.• An external telemetry data storage memory to provide backup of data in the event of an interruption in data transmission.The brief description of each of the software tests used is given below. It should be noted that for each test, the returned telemetry (including time tag) was sent to both the test controller and the telemetry memory, giving the highest reliability that all data is captured.Interrupt –This test used 4 of 6 available interrupt vectors (Serial, External, Timer0 Overflow, and Timer1 Overflow) to trigger routines that sequentially modified a value in the accumulator which was periodically compared to a known value. Unexpected values were transmitted with register information.Logic –This test performed a series of logic and math computations and provided three types of error identifications: 1) addition/subtraction, 2) logic and 3) multiplication/division. All miscompares of computations and expected results were transmitted with other relevant register information.Memory – This test loaded internal data memory at locations D:0x20 through D:0xff (or D:0x20 through D:0x080 for the CULPRiT DUT), indirectly, with an 0x55 pattern. Compares were performed continuously and miscompares were corrected while error information and register values were transmitted.Program Counter -The program counter was used to continuously fetch constants at various offsets in the code. Constants were compared with known values and miscompares were transmitted along with relevant register information. Registers – This test loaded each of four (0,1,2,3) banks of general-purpose registers with either 0xAA (for banks 0 and 2) or 0x55 (for banks 1 and 3). The pattern was alternated in order to test the Program Status Word (PSW) special function register, which controls general-purpose register bank selection. General-purpose register banks were then compared with their expected values. All miscompares were corrected and error information was transmitted.Special Function Registers (SFR) – This test used learned static values of 12 out 21 available SFRs and then constantly compared the learned value with the current one. Miscompares were reloaded with learned value and error information was transmitted.Stack – This test performed arithmetic by pushing and popping operands on the stack. Unexpected results were attributed to errors on the stack or to the stack pointer itself and were transmitted with relevant register information.VII. TEST METHODOLOGYThe DUT Computer booted by executing the instruction code located at address 0x0000. Initially, the device at this location was an EPROM previously loaded with "Boot/Serial Loader" code. 
This code initialized the DUT Computer and interface through a serial connection to the controlling computer, the "Test Controller". The DUT Computer downloaded Test Code and put it into Program Code RAM (located on the Main Board of the DUT Computer). It then activated a circuit which simultaneously performed two functions: held the DUT reset line active for some time (~10 ms); and, remapped the Test Code residing in the Program Code RAM to locate it to address 0x0000 (the EPROM will no longer be accessible in the DUT Computer's memory space). Upon awaking from the reset, the DUT computer again booted by executing the instruction code at address 0x0000, except this time that code was not be the Boot/Serial Loader code but the Test Code.The Test Control Computer always retained the ability to force the reset/remap function, regardless of the DUT Computer's functionality. Thus, if the test ran without a Single Event Functional Interrupt (SEFI) either the DUT Computer itselfor the Test Controller could have terminated the test and allowed the post-test functions to be executed. If a SEFI occurred, the Test Controller forced a reboot into Boot/Serial Loader code and then executed the post-test functions. During any test of the DUT, the DUT exercised a portion of its functionality (e.g., Register operations or Internal RAM check, or Timer operations) at the highest utilization possible, while making a minimal periodic report to the Test Control Computer to convey that the DUT Computer was still functional. If this reportceased, the Test Controller knew that a SEFI had occurred. This periodic data was called "telemetry". If the DUT encountered an error that was not interrupting the functionality (e.g., a data register miscompare) it sent a more lengthy report through the serial port describing that error, and continued with the test.VIII.DISCUSSIONA. Single Event LatchupThe main argument for why latchup is not an issue for the CULPRiT devices is that the operating voltage of 0.5 volts should be below the holding voltage required for latchup to occur. In addition to this, the cell library used also incorporates the heavy dual guard-barring scheme [4]. This scheme has been demonstrated multiple times to be very effective in rendering CMOS circuits completely immune to SEL up to test limits of 120 MeV-cm2/mg. This is true in circuits operating at 5, 3.3, and 2.5 Volts, as well as the 0.5 Volt CULPRiT circuits. In one case, a 5 Volt circuit fabricated on noncircuits wafers even exhibited such SEL immunity.B. Single Event UpsetThe primary structure of the storage unit used in the CULPRiT devices is the Single Event Resistant Topology (SERT) [5]. Given the SERT cell topology and a single upset node assumption, it is expected that the SERT cell will be completely immune to SEUs occurring internal to the memory cell itself. Obviously there are other things going on. The CULPRiT 8051 results reported here are quite similar to some resultsobtained with a CULPRiT CCSDS lossless compression chip (USES) [6]. The CULPRiT USES was synthesized using exactly the same tools and library as the CULPRiT 8051.With the CULPRiT USES, the SEU cross section data [7] was taken as a function of frequency at two LET values, 37.6 and 58.5 MeV-cm2/mg. In both cases the data fit well to a linear model where cross section is proportional to clock. 
In the LET 37.6 case, the zero-frequency intercept occurred essentially at the zero-cross-section point, indicating that virtually all of these SEUs are captured SETs from the combinational logic. The LET 58.5 data indicated that the SET (frequency-dependent) component is sitting on top of a "dc-bias" component – presumably a second upset mechanism occurring internal to the SERT cells only at a second, higher LET threshold.

The SET mitigation scheme used in the CULPRiT devices is based on the SERT cell's fault-tolerant input property when redundant input data is provided to separate storage nodes. The idea is that the redundant input data is provided through a total duplication of the combinational logic (referred to as "dual-rail design"), such that a simple SET on one rail cannot produce an upset. Therefore, some other upset mechanism must be occurring. It is possible that a single particle strike is placing an SET on both halves of the logic streams, allowing an SET to produce an upset. Care was taken to separate the dual sensitive nodes in the SERT cell layouts, but the automated place-and-route of the combinational logic paths may have placed dual sensitive nodes close enough together.

At this point, the theory for the CULPRiT SEU response is that at an LET of about 20, the energy deposition is sufficiently wide (and in the right locations) to produce an SET in both halves of the combinational logic streams. Increasing LET allows more regions to be sensitive to this effect, yielding a larger cross section. Further, the second SEU mechanism that starts at an LET of about 40-60 occurs when the charge-collection disturbance cloud becomes large enough to upset multiple redundant storage nodes within the SERT cell itself. In this 0.35 μm library, the node separation is several microns. However, since it takes less charge to upset a node operating at 0.5 volts, with transistors having effective thresholds around 70 mV, this is likely the effect being observed. The fact that the per-bit memory upset cross sections for the CULPRiT devices and the commercial technologies are approximately equal, as shown in Figure 9, also indicates that the cell itself has become sensitive to upset.

IX. SUMMARY

A detailed comparison of the SEE sensitivity of an HBD technology (CULPRiT), utilizing the 8051 microcontroller as a test vehicle, has been completed. This paper discusses the test methodology used and presents a comparison of the commercial versus CULPRiT technologies based on the data taken. The CULPRiT devices consistently show significantly higher threshold LETs and an immunity to latchup. In all but the memory test at the highest LETs, the cross-section curves for all upset events are one to two orders of magnitude lower than those of the commercial devices. Additionally, a theory based on the CULPRiT technology is presented that explains these results.

This paper also demonstrates a test methodology for quantifying the level of hardness designed into an HBD technology. By using the HBD technology in a real-world device structure (i.e., not just a test chip), and comparing results to equivalent commercial devices, one can have confidence in the level of hardness that would be available from that HBD technology in any circuit application.

ACKNOWLEDGEMENTS

The authors would like to acknowledge the sponsors of this work: the NASA Electronic Parts and Packaging Program (NEPP), NASA Flight Programs, and the Defense Threat Reduction Agency (DTRA).
The General Situation of the AT89C51

Chapter 1: The Application of the AT89C51

Microcontrollers are used in a multitude of commercial applications such as modems, motor-control systems, air-conditioner control systems, and automotive engine control, among others. These domains also require that the quality of the microcontrollers be ensured by a robust testing process and a proper tools environment for validation at both the component level and the system level. The Intel Platform Engineering department developed an object-oriented, multi-threaded test environment for the validation of its AT89C51 automotive microcontrollers. The goal of this environment was not only to provide a robust testing environment for the AT89C51 automotive microcontrollers, but also to develop an environment that can be easily extended and reused for the validation of several future microcontrollers. The environment was developed in conjunction with the Microsoft Foundation Classes (MFC). The paper describes the design and mechanism of this test environment and its interactions with the various components under validation.

The 8-bit AT89C51 CHMOS microcontrollers are designed for use in engine-control systems, airbags, suspension systems, and antilock braking systems (ABS). The AT89C51 is especially well suited to applications that benefit from its processing speed and enhanced on-chip peripheral function set, such as automotive power-train control, vehicle dynamic suspension, antilock braking, and stability-control applications. Because of these critical applications, the market requires a reliable, cost-effective controller with low interrupt latency, the ability to service the integrated peripherals needed in real-time applications, and a CPU with above-average processing power in a single package. The financial and legal risks of this market are high: in mission-critical applications such as an autopilot or an anti-lock braking system, mistakes are financially prohibitive, and redesign costs alone can be substantial if a design flaw is found. In addition, field replacement of components is extremely expensive, as the devices are typically sealed in modules with a total value several times that of the component. To mitigate these problems, it is essential that comprehensive testing of the controllers be carried out at both the component level and the system level, under worst-case environmental and voltage conditions. This complete and thorough validation requires not only a well-defined process but also a proper environment and tools to facilitate and execute the mission successfully. The Intel Chandler Platform Engineering group provides post-silicon system validation (SV) of various microcontrollers and processors. The system validation process can be broken into three major parts; the type of device and its application requirements determine which types of testing are performed on it.

1.2 The AT89C51 provides the following standard features: 4K bytes of Flash, 128 bytes of RAM, 32 I/O lines, two 16-bit timer/counters, a five-vector two-level interrupt architecture, a full-duplex serial port, and an on-chip oscillator and clock circuitry. In addition, the AT89C51 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue functioning.
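As a hedged illustration of the two power-saving modes just mentioned, the Keil C51-style sketch below sets the IDL or PD bit of the 8051 PCON register. The bit positions (PCON.0 = IDL, PCON.1 = PD) are standard for the 8051 family; everything else here is only an example, not application code.

    #include <reg51.h>            /* declares PCON among the 8051 SFRs */

    #define PCON_IDL 0x01         /* idle: CPU stops, peripherals keep running      */
    #define PCON_PD  0x02         /* power-down: oscillator stops, RAM is retained  */

    void enter_idle(void)
    {
        /* Any enabled interrupt (or a reset) terminates idle mode;   */
        /* execution then continues with the following instruction.   */
        PCON |= PCON_IDL;
    }

    void enter_power_down(void)
    {
        /* On the classic AT89C51 only a hardware reset exits         */
        /* power-down; RAM contents are preserved until then.         */
        PCON |= PCON_PD;
    }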
The Power-down Mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next hardware reset.

Pin Description

VCC: Supply voltage.
GND: Ground.

Port 0: Port 0 is an 8-bit open-drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to Port 0 pins, the pins can be used as high-impedance inputs. Port 0 can also be configured as the multiplexed low-order address/data bus during accesses to external program and data memory; in this mode P0 uses internal pullups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification. External pullups are required during program verification.

Port 1: Port 1 is an 8-bit bidirectional I/O port with internal pullups. The Port 1 output buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pullups. Port 1 also receives the low-order address bytes during Flash programming and verification.

Port 2: Port 2 is an 8-bit bidirectional I/O port with internal pullups. The Port 2 output buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pullups. Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses; in this application it uses strong internal pullups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @Ri), Port 2 emits the contents of the P2 Special Function Register. Port 2 also receives the high-order address bits during Flash programming and verification.

Port 3: Port 3 is an 8-bit bidirectional I/O port with internal pullups. The Port 3 output buffers can sink/source four TTL inputs. When 1s are written to Port 3 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 3 pins that are externally being pulled low will source current (IIL) because of the pullups. Port 3 also serves the functions of various special features of the AT89C51 (serial port, external interrupts, timer inputs, and external memory read/write strobes).

RST: Reset input. A high on this pin for two machine cycles while the oscillator is running resets the device.

ALE/PROG: Address Latch Enable, an output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming. In normal operation, ALE is emitted at a constant rate of 1/6 of the oscillator frequency and may be used for external timing or clocking purposes. Note, however, that ALE can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction; otherwise, the pin is weakly pulled high.

PSEN: Program Store Enable, the read strobe to external program memory. When the AT89C51 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.

EA/VPP: External Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming, for parts that require 12-volt VPP.

XTAL1: Input to the inverting oscillator amplifier and input to the internal clock operating circuit.

XTAL2: Output from the inverting oscillator amplifier.

Oscillator Characteristics

XTAL1 and XTAL2 are the input and output, respectively, of an inverting amplifier which can be configured for use as an on-chip oscillator, as shown in Figure 1. Either a quartz crystal or a ceramic resonator may be used.
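To make the oscillator numbers above concrete, the small C program below works through the standard 8051 timing relationships. The assumption that one machine cycle equals 12 oscillator periods holds for the classic AT89C51; the 11.0592 MHz crystal value is only a conventional example.

    #include <stdio.h>

    int main(void)
    {
        const double f_osc     = 11059200.0;       /* example crystal frequency, Hz     */
        const double f_machine = f_osc / 12.0;     /* classic 8051: 12 clocks per cycle */
        const double t_cycle_us = 1e6 / f_machine; /* machine-cycle time in microseconds */
        const double f_ale     = f_osc / 6.0;      /* ALE rate quoted in the text above */

        printf("machine cycle : %.3f us\n", t_cycle_us);  /* ~1.085 us  */
        printf("ALE frequency : %.0f Hz\n", f_ale);       /* ~1.84 MHz  */
        return 0;
    }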
To drive the device from an external clock source, XTAL2 should be left unconnected while XTAL1 is driven, as shown in Figure 2. There are no requirements on the duty cycle of the external clock signal, since the input to the internal clocking circuitry is through a divide-by-two flip-flop, but the minimum and maximum voltage high and low times must be observed.

Idle Mode

In idle mode, the CPU puts itself to sleep while all the on-chip peripherals remain active. The mode is invoked by software. The contents of the on-chip RAM and all the special function registers remain unchanged during this mode. Idle mode can be terminated by any enabled interrupt or by a hardware reset. When idle is terminated by a hardware reset, the device normally resumes program execution from where it left off, up to two machine cycles before the internal reset algorithm takes control. On-chip hardware inhibits access to internal RAM in this event, but access to the port pins is not inhibited. To eliminate the possibility of an unexpected write to a port pin when idle is terminated by reset, the instruction following the one that invokes idle should not be one that writes to a port pin or to external memory.

Power-down Mode

In power-down mode, the oscillator is stopped, and the instruction that invokes power-down is the last instruction executed. The on-chip RAM and Special Function Registers retain their values until power-down mode is terminated. The only exit from power-down is a hardware reset; reset redefines the SFRs but does not change the on-chip RAM. The reset should not be activated before VCC is restored to its normal operating level and must be held active long enough to allow the oscillator to restart and stabilize.

Flash programming uses either a high-voltage (12-volt) or a low-voltage (VCC) program enable signal, depending on the programming mode. To program any nonblank byte in the on-chip Flash memory, the entire memory must first be erased using the Chip Erase mode.

2 Programming Algorithm

Before programming the AT89C51, the address, data, and control signals should be set up according to the Flash programming mode table and Figures 3 and 4. To program the AT89C51, take the following steps:
1. Input the desired memory location on the address lines.
2. Input the appropriate data byte on the data lines.
3. Activate the correct combination of control signals.
4. Raise EA/VPP to 12 V (for the high-voltage programming mode).
5. Pulse ALE/PROG once to program a byte in the Flash array or the lock bits.
The byte-write cycle is self-timed and typically takes no more than 1.5 ms. Repeat steps 1 through 5, changing the address and data, for the entire array or until the end of the object file is reached.

Data Polling: The AT89C51 features Data Polling to indicate the end of a write cycle. During a write cycle, an attempted read of the last byte written will return the complement of the written datum on P0.7. Once the write cycle has been completed, true data are valid on all outputs, and the next cycle may begin. Data Polling may begin at any time after a write cycle has been initiated.

2.1 Ready/Busy: The progress of byte programming can also be monitored by the RDY/BSY output signal. P3.4 is pulled low after ALE goes high during programming to indicate BUSY, and is pulled high again when programming is done to indicate READY.

Program Verify: If lock bits LB1 and LB2 have not been programmed, the programmed code data can be read back via the address and data lines for verification. The lock bits cannot be verified directly; verification of the lock bits is achieved by observing that their features are enabled.

Figure 2-1-1: Programming the Flash. Figure 2-2-2: Verifying the Flash.

2.2 Chip Erase: The entire Flash array is erased electrically by using the proper combination of control signals and by holding ALE/PROG low for the specified erase time; the code array is then written with all "1"s. The chip erase operation must be executed before the code memory can be re-programmed.
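The byte-program and Data-Polling flow described in Section 2 runs on the external programmer, not on the AT89C51 itself. The sketch below is a hedged, host-side illustration in C; set_address(), set_data(), pulse_prog(), and read_data() are hypothetical helpers standing in for whatever drives the address, data, and control pins of the programming fixture, and the timeout value is an arbitrary assumption.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical fixture drivers for the programming hardware. */
    extern void    set_address(uint16_t addr);
    extern void    set_data(uint8_t value);
    extern void    pulse_prog(void);          /* steps 1-5 of the programming algorithm */
    extern uint8_t read_data(void);

    /* Program one byte, then wait for completion using Data Polling:    */
    /* while the write cycle is in progress, reading back the location   */
    /* returns the complement of the written datum on bit 7 (P0.7).      */
    bool program_byte(uint16_t addr, uint8_t value)
    {
        unsigned timeout = 2000;              /* a few ms worth of polls (assumption) */

        set_address(addr);
        set_data(value);
        pulse_prog();

        while (timeout--) {
            uint8_t rb = read_data();
            if ((rb & 0x80) == (value & 0x80))   /* true data on P0.7 => cycle done */
                return rb == value;              /* optional full verify            */
        }
        return false;                            /* write cycle never completed     */
    }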
2.3 Reading the Signature Bytes: The signature bytes are read by the same procedure as a normal verification of locations 030H, 031H, and 032H, except that P3.6 and P3.7 must be pulled to a logic low. The values returned are as follows:
(030H) = 1EH indicates manufactured by Atmel
(031H) = 51H indicates 89C51
(032H) = FFH indicates 12V programming
(032H) = 05H indicates 5V programming

2.4 Programming Interface

Every code byte in the Flash array can be written, and the entire array can be erased, by using the appropriate combination of control signals. The write operation cycle is self-timed and, once initiated, will automatically time itself to completion.

A microcomputer interface converts information between two forms. Outside the microcomputer, the information exists in the electronic system as a physical signal; within the program, it is represented numerically. The function of any interface can be broken down into a number of operations which modify the data in some way, so that the process of conversion between the external and internal forms is carried out in a number of steps. An analog-to-digital converter (ADC) is used to convert a continuously variable signal to a corresponding digital form which can take any one of a fixed number of possible binary values. If the output of the transducer does not vary continuously, no ADC is necessary; in this case the signal-conditioning section must convert the incoming signal to a form which can be connected directly to the next part of the interface, the input/output section of the microcomputer itself.

Output interfaces take a similar form, the obvious difference being that the information flows in the opposite direction: it is passed from the program to the outside world. In this case the program may call an output subroutine which supervises the operation of the interface and performs any scaling of the numbers which may be needed for the digital-to-analog converter (DAC). This subroutine passes the information in turn to an output device which produces a corresponding electrical signal, which may be converted into analog form using a DAC. Finally, the signal is conditioned (usually amplified) to a form suitable for operating an actuator.

The signals used within microcomputer circuits are almost always too small to be connected directly to the "outside world", and some kind of interface must be used to translate them to a more appropriate form. The design of this section of interface circuits is one of the most important tasks facing the engineer wishing to apply microcomputers. We have seen that in microcomputers information is represented as discrete patterns of bits; this digital form is most useful when the microcomputer is to be connected to equipment which can only be switched on or off, where each bit might represent the state of a switch or actuator.

To solve real-world problems, a microcontroller must have more than just a CPU, a program, and a data memory. In addition, it must contain hardware allowing the CPU to gather information from the outside world. Once the CPU gathers information and processes the data, it must also be able to effect change on some portion of the outside world. The most common such hardware on these microcontrollers is the general-purpose I/O port. Each of the I/O pins can be used as either an input or an output. The function of each pin is determined by setting or clearing corresponding bits in a corresponding data direction register during the initialization stage of a program. Each output pin may be driven to either a logic one or a logic zero by using CPU instructions to set or clear the corresponding bit, and the logic level on each input pin may be viewed (read) by the CPU using program instructions.
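A minimal Keil C51-style sketch of the input/output idea just described is shown below. Note that the 8051 family itself has no data direction registers — its ports are quasi-bidirectional, so a pin is made readable simply by writing a 1 to it — and the pin assignments (a switch on P1.0, an LED on P1.7) are arbitrary choices for the example.

    #include <reg51.h>

    sbit SWITCH_IN = P1^0;      /* example input pin: written 1 first, then read */
    sbit LED_OUT   = P1^7;      /* example output pin driving an indicator       */

    void main(void)
    {
        SWITCH_IN = 1;          /* release the quasi-bidirectional pin so it can be read */

        while (1) {
            /* Read an input pin with a program instruction and drive an output pin. */
            LED_OUT = SWITCH_IN ? 1 : 0;
        }
    }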
Some type of serial unit is included on microcontrollers to allow the CPU to communicate bit-serially with external devices. Using a bit-serial format instead of a bit-parallel format requires fewer I/O pins to perform the communication function, which makes it less expensive but slower. Serial transmissions are performed either synchronously or asynchronously.

Translation: The General Situation of the AT89C51

1 Applications of the AT89C51

Microcontrollers are widely used in commercial applications such as modems, motor-control systems, air-conditioner control systems, automotive engines, and several other fields.
Appendix A: Original English Text

Design of a PWM Controller in an MCS-51 Compatible MCU

Authors: Yue-Li Hu, Wei Wang, Microelectronic Research & Development Center, Campus P.O.B. 221, 149 Yanchang Rd, Shanghai 200072, China

Introduction

PWM technology is a voltage-regulation method that controls the switching of a fixed-voltage DC supply in order to modify the voltage across the load. The technology can be used for a variety of applications, including motor control, temperature control, and pressure control. In the motor-control system shown in Fig. 1, the speed of the motor can be controlled by adjusting the duty cycle of the power switch. As shown in Fig. 2, under the control of the PWM signal the average voltage that determines the motor speed changes with the duty cycle D = t1/T; thus the motor speed rises while the power switch is on and falls while it is off.

Fig. 1: The relationship between the armature voltage and the PWM signal. Fig. 2: Architecture of the PWM module.

The motor speed can therefore be controlled by regularly adjusting the turn-on and turn-off times. There are three methods of adjusting the duty cycle: (1) adjust the frequency with a fixed pulse width; (2) adjust both the frequency and the pulse width; (3) adjust the pulse width with a fixed frequency.

Generally, there are four ways to generate PWM signals: (1) With a circuit composed of separate logic components. This was the original method and has now been discarded. (2) By software. This method requires the CPU to continuously execute instructions that toggle I/O pins to generate the PWM output, so the CPU can do nothing else; it, too, has gradually been discarded. (3) With an ASIC. An ASIC reduces the CPU burden, works stably, and generally provides functions such as over-current protection and dead-time adjustment, so this method is now widely used in many applications. (4) With a PWM function module inside the MCU. By embedding a PWM module in the MCU and initializing it, the PWM pins of the MCU can generate PWM output signals automatically, without CPU intervention except when the duty cycle needs to change. This is the method implemented in this paper.

In this paper we propose a PWM module embedded in an 8051 microcontroller. The PWM module produces PWM pulse signals after the control register and duty-cycle register are initialized; it supports the three duty-cycle adjustment methods mentioned above and several operation modes that add flexibility for the user. The following section explains the architecture of the PWM module and of its basic functional blocks. Section 3 describes the two operation modes; experimental and simulation results verifying proper system operation are also shown in that section. Depending on the mode of operation, the PWM module creates one or more pulse-width modulated signals whose duty ratios can be independently adjusted.

Implementation of the PWM Module in the MCU

Overview of the PWM module

A block diagram of the PWM module is shown in Fig. 3. It is clear from the diagram that the whole module is composed of two sections: the PWM signal generator, and the dead-time generator with channel-select logic. The PWM function is started by the user by executing a few instructions that initialize the PWM module.
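Before going into the module architecture, the duty-cycle relationship D = t1/T quoted above translates directly into a small calculation; the sketch below only illustrates that arithmetic (the 24 V supply and 16-bit period value are arbitrary example numbers, not values from the paper).

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const double   v_supply   = 24.0;    /* example DC supply voltage (V)   */
        const double   duty       = 0.30;    /* D = t1 / T                      */
        const uint16_t period_cnt = 40000;   /* example 16-bit counter period   */

        double   v_avg   = duty * v_supply;                /* average load voltage     */
        uint16_t on_time = (uint16_t)(duty * period_cnt);  /* on-time in counter ticks */

        printf("average voltage : %.1f V\n", v_avg);       /* 7.2 V  */
        printf("on-time         : %u ticks\n", on_time);   /* 12000  */
        return 0;
    }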
In particular, the following power and motion control applications are supported: DC motors and uninterruptible power supplies (UPS). The PWM module also has the following features: two PWM signal outputs with complementary or independent operation; hardware dead-time generators for the complementary mode; and duty-cycle updates that are configurable to be immediate or synchronized to the PWM time base.

Fig. 3: Architecture of the PWM module.

Details of the architecture

PWM generator

The architecture of the 2-output PWM generator, shown in Fig. 4, is based on a 16-bit resolution counter which creates a pulse-width modulated signal. The module is driven by a system clock whose frequency can be divided by 4 or by 12 by setting the value of T3M (for PWM0) or T4M (for PWM1) in the special register PWMCON, as shown in Fig. 4. For the PWM0 generator, the clock to the 16-bit counter is pre-divided by 4 by default, when T3M is set to zero, and is divided by 12 when T3M is set to 1. The same applies to PWM1. The other bits in PWMCON are explained in detail in Table 1.

Fig. 4: Bit mapping of PWMCON. Table 1: Bit definitions in PWMCON.

Channel-select logic

Fig. 5 shows the channel-select logic, which is used in Complementary Mode. From this diagram it is clear that the signals CP and CPWM control the source of PWMH and PWML. The details of these two control signals are discussed in Section 3, and the architecture of the dead-time generator is discussed together with Complementary Mode for continuity.

Fig. 5: Diagram of the channel-select logic.

Operation Modes and Simulation Results

The design has two operation modes: Independent Mode and Complementary Mode. By setting the corresponding bit CPWM in register PWMCON, shown in Fig. 6, the user selects one of the two operation modes. When CPWM is set to zero, the PWM module works in Independent Mode; otherwise it works in Complementary Mode. In the rest of this section the two operation modes are explained in detail, and simulation results for the PWM module from the Synopsys VCS EDA platform, which verify the design, are also shown.

Independent PWM Output Mode

The Independent PWM Output mode is useful for driving loads such as the one shown in Figure 6. A particular PWM output is in Independent Output mode when the corresponding CP bit in the PWMCON register is set to zero. In this case the two PWM output channels are independent of each other: the signal on pin PWM0/PWMH comes from the PWM0 generator, and the signal on pin PWM1/PWML comes from the PWM1 generator. This separation is achieved by the channel-select logic shown in Fig. 6. The PWM I/O pins are set to Independent mode by default upon device reset, and the dead-time generator is disabled in Independent mode. The simulation result is shown in Fig. 6; tr4 and tr3 are the run bits for PWM0 and PWM1, respectively. As the diagram shows, pins P1.5/P1.4 of the MCU are used either for PWMH/PWML or as normal I/O.

Fig. 6: Waveform of the PWM outputs in Independent Mode.

Complementary PWM Output Mode

The Complementary Output mode is used to drive inverter loads similar to the one shown in Figure 7. This inverter topology is typical for DC applications. In Complementary Output Mode, the pair of PWM outputs cannot be active simultaneously. The PWM channel and output pin pair are internally configured through the channel-select logic, as shown in Figure 7. A dead time may optionally be inserted during device switching, when both outputs are inactive for a short period.
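As a hedged illustration of how such a module might be set up from Keil C51-style C: the register name PWMCON and the bit names T3M, CPWM, and the run bit tr4 are taken from the text above, but the SFR addresses, the bit positions, and the duty-cycle register names are assumptions made purely for this sketch — the paper's Fig. 4 and Table 1 (not reproduced here) define the real layout.

    /* All SFR addresses and bit positions below are placeholders only. */
    sfr PWMCON = 0xD8;      /* PWM control register (address assumed)               */
    sfr PWM0DL = 0xD9;      /* duty-cycle register, low byte (name/address assumed)  */
    sfr PWM0DH = 0xDA;      /* duty-cycle register, high byte (name/address assumed) */

    #define T3M_DIV12  0x01 /* assumed: 0 selects clock/4, 1 selects clock/12        */
    #define CPWM_COMPL 0x02 /* assumed: 1 = Complementary Mode, 0 = Independent Mode */
    #define TR4_RUN    0x80 /* assumed position of the PWM0 run bit (tr4)            */

    /* Start PWM0 in Independent Mode with a 16-bit duty (compare) value. */
    void pwm0_start(unsigned int duty)
    {
        PWM0DL  = (unsigned char)(duty & 0xFF);
        PWM0DH  = (unsigned char)(duty >> 8);
        PWMCON &= (unsigned char)~(CPWM_COMPL | T3M_DIV12);  /* independent, clock/4 */
        PWMCON |= TR4_RUN;                                    /* start the generator */
    }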
Fig. 7: Typical load for complementary PWM outputs.

Complementary mode is selected for a PWM I/O pin pair by setting the appropriate CPWM bit in PWMCON. In this case PSEL takes effect: PWMH and PWML come from the PWM0 generator when PSEL is set to zero (the PWM1 generator's signals are then unused), whereas PWMH and PWML come from the PWM1 generator when PSEL is set to 1 (the PWM0 generator's signals are then unused). While the PWM outputs are produced in Complementary Mode, a dead time is inserted, as discussed in the following section.

Dead-time Control

Dead-time generation is automatically enabled when a PWM I/O pin pair is operating in the Complementary Output mode. Because the power output devices cannot switch instantaneously, some amount of time must be provided between the turn-off event of one PWM output in a complementary pair and the turn-on event of the other transistor. The 2-output PWM module has one programmable dead time held in an 8-bit register. The complementary output pair of the PWM module has an 8-bit down counter that is used to produce the dead-time insertion. As shown in Figure 8, the dead-time unit has a rising- and falling-edge detector connected to the PWM signal from one of the PWM generators. The dead time is loaded into the timer on the detected PWM edge event. Depending on whether the edge is rising or falling, one of the transitions on the complementary outputs is delayed until the timer counts down to zero. A timing diagram indicating the dead-time insertion for the pair of PWM outputs is shown in Figure 8a.

Fig. 8a: Dead-time unit block diagram. Fig. 8b: Waveforms of the PWM outputs in Complementary Mode.

Conclusions

In this paper we have designed a PWM module based on an 8-bit MCU compatible with the 8051 family. The design can generate 2-channel programmable periodic PWM signals with two operation modes, Independent Mode and Complementary Mode, in the latter of which a dead time is inserted. The simulation results on the EDA platform have proven its correctness and usefulness.

Appendix B: Translation

Design of a PWM Controller in an MCS-51 Compatible MCU. Yue-Li Hu, Wei Wang, Microcontroller Research and Development Center, Campus P.O.B. 221, 149 Yanchang Rd, Shanghai 200072, China.

Introduction: PWM technology is a voltage-regulation method that adjusts the voltage across the load by controlling the switching frequency of a fixed-voltage DC power supply.
Foreign-Language Literature on the Microcontroller and Its Chinese Translation

I. Foreign-Language Literature

Title: The Application and Development of Single-Chip Microcontrollers in Modern Electronics

Single-chip microcontrollers have become an indispensable part of modern electronic systems. They are small yet powerful integrated circuits that combine a microprocessor core, memory, and input/output peripherals on a single chip. These devices offer significant advantages in terms of cost, size, and power consumption, making them ideal for a wide range of applications.

The history of single-chip microcontrollers can be traced back to the 1970s, when the first microcontrollers were developed. Since then, they have undergone significant advancements in technology and performance. Today, single-chip microcontrollers are available in a wide variety of architectures and capabilities, ranging from simple 8-bit devices to complex 32-bit and 64-bit systems.

One of the key features of single-chip microcontrollers is their programmability. They can be programmed using various languages such as C, Assembly, and Python. This flexibility allows developers to customize the functionality of the microcontroller to meet the specific requirements of their applications. For example, in embedded systems for automotive, industrial control, and consumer electronics, single-chip microcontrollers can be programmed to control sensors, actuators, and communication interfaces.

Another important aspect of single-chip microcontrollers is their low power consumption. This is crucial in battery-powered devices and portable electronics, where energy efficiency is of paramount importance. Modern single-chip microcontrollers incorporate advanced power-management techniques to minimize power consumption while maintaining optimal performance.

In addition to their use in traditional electronics, single-chip microcontrollers are also playing a significant role in the emerging fields of the Internet of Things (IoT) and wearable technology. In IoT applications, they can be used to collect and process data from various sensors and communicate it wirelessly to a central server. Wearable devices such as smartwatches and fitness trackers rely on single-chip microcontrollers to monitor vital signs and perform other functions.

However, the design and development of systems using single-chip microcontrollers also present certain challenges. Issues such as real-time performance, memory management, and software reliability need to be carefully addressed to ensure the successful implementation of applications. Moreover, the rapid evolution of technology requires developers to constantly update their knowledge and skills to keep up with the latest advancements in single-chip microcontroller technology.

In conclusion, single-chip microcontrollers have revolutionized the field of electronics and continue to play a vital role in driving technological innovation. Their versatility, low cost, and small form factor make them an attractive choice for a wide range of applications, and their importance is expected to grow further in the years to come.

II. Chinese Translation

Title: The Application and Development of Single-Chip Microcontrollers in Modern Electronics. Single-chip microcontrollers have become an indispensable part of modern electronic systems.
Introduction to the AT89C52 Single-Chip Microcontroller

Selection of the single-chip microcontroller

1. Development of the single-chip microcontroller

The main components of a single-chip microcontroller are all integrated on one chip: the central processing unit (CPU), the data memory (RAM), the read-only program memory (ROM), the interrupt system, the timer/counters, and the I/O interface circuits — that is, the principal parts of a microcomputer are combined on a single chip. Although the single-chip microcontroller is only one chip, in terms of its composition and its functions it already has the attributes of a computer system; for this reason it is called a single-chip microcomputer (SCM), or simply a microcontroller.
Part 1: An Introduction to the AT89C51 (original source: http:///resource/)

Description

The AT89C51 is a low-voltage, high-performance CMOS 8-bit microcontroller with 4K bytes of reprogrammable Flash program memory (PEROM) and 128 bytes of data RAM. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the MCS-51 family of microcontrollers. By combining an 8-bit CPU with Flash memory on a single chip, the highly capable AT89C51 can be applied throughout the control field.

Features

The AT89C51 provides the following standard features: 4K bytes of Flash memory, 128 bytes of RAM, 32 I/O lines, two 16-bit timer/counters, a five-vector two-level interrupt architecture, a serial port, and on-chip oscillator and clock circuitry. In addition, the AT89C51 supports static logic operation down to 0 Hz and two software-selectable power-saving modes. Idle mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue working. Power-down mode preserves the contents of the RAM, but the oscillator stops and all other functions are disabled until the next reset.

Pin Description

VCC: supply voltage. GND: ground.

Port 0: Port 0 is an 8-bit open-drain bidirectional I/O port, i.e., the multiplexed address/data bus. As an output port, each pin can drive eight TTL loads. When 1s are written to Port 0, each pin can be used as a high-impedance input. Port 0 can also act as the multiplexed address/data bus during accesses to external data or program memory, with its internal pullups activated at that time. During Flash programming Port 0 receives the code bytes, and during program verification it outputs the code bytes; external pullup resistors are required.

Port 1: Port 1 is an 8-bit bidirectional I/O port with internal pullups; its output buffers can drive four TTL loads. Writing 1s to the port pulls the pins high through the internal pullups, and the port can then be used as an input. Because of the internal pullups, a pin that is pulled low by an external signal will source current. During Flash programming and program verification, Port 1 receives the low-order 8 bits of the address.
Original Chinese Text: The Single-Chip Microcomputer

The single-chip microcomputer, also known as the microcontroller unit (commonly abbreviated MCU), was first used in the field of industrial control. It evolved from dedicated processors that contained only a CPU on the chip. The earliest design idea was to integrate a large number of peripherals together with the CPU on one chip, so that the computer system would be smaller and easier to build into complex control equipment with strict size requirements. The Z80 was the earliest processor designed along these lines, and from then on microcontrollers and dedicated processors developed along separate paths.

Early microcontrollers were all 8-bit or 4-bit devices. The most successful among them was Intel's 8031, which was widely praised for being simple, reliable, and reasonably capable. The MCS-51 family of microcontrollers was later developed from the 8031, and systems based on this family are still in wide use today. As the requirements of industrial control rose, 16-bit microcontrollers appeared, but they were never widely adopted because their price/performance ratio was unattractive. After the 1990s, with the great expansion of consumer electronics, microcontroller technology improved enormously. With the wide adoption of Intel's i960 series and especially the later ARM series, 32-bit microcontrollers rapidly displaced 16-bit parts from the high end and entered the mainstream market. The performance of traditional 8-bit microcontrollers also advanced rapidly, with processing power hundreds of times greater than in the 1980s. Today, high-end 32-bit microcontrollers run at clock frequencies above 300 MHz, approaching the performance of mid-1990s dedicated processors, while ordinary parts leave the factory at prices as low as one US dollar and even the highest-end parts [1] cost only about ten dollars.

Modern microcontroller systems are no longer developed and used only on bare hardware; a large number of dedicated embedded operating systems are in wide use across the whole range of microcontrollers, and the high-end microcontrollers that serve as the core processors of handheld computers and mobile phones can even run dedicated Windows and Linux operating systems directly. Microcontrollers are better suited than dedicated processors to embedded systems, so they have found the widest application. In fact, the microcontroller is the most numerous kind of computer in the world: almost every electronic or mechanical product in modern life has a microcontroller built into it. Mobile phones, telephones, calculators, household appliances, electronic toys, handheld computers, and computer accessories such as mice each contain one or two microcontrollers, and quite a few microcontrollers are at work inside every personal computer as well.
The Microcontroller
A microcontroller is a small computer on a single integrated circuit that contains a processor core, memory, and programmable input/output peripherals. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications.
A microcontroller's processor core is typically a small, low-power CPU dedicated to controlling the operation of the device in which it is embedded. It is often designed to provide efficient and reliable control of simple and repetitive tasks, such as switching lights on and off or monitoring temperature or pressure sensors.
MEMORY
Microcontrollers typically have a limited amount of memory, divided into program memory and data memory. The program memory is where the software that controls the device is stored, and is often a type of Read-Only Memory (ROM). The data memory, on the other hand, is used to store data that is used by the program, and is often volatile, meaning that it loses its contents when power is removed.
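As a small illustration of the program-memory/data-memory split, the Keil C51-style sketch below places a constant lookup table in code (program) memory and a working buffer in on-chip data RAM. The `code` and `idata` memory-space keywords are Keil C51 extensions; on other toolchains the equivalent would be `const` data or linker-section attributes, and the table contents here are arbitrary.

    /* 'code' places data in program memory (Flash/ROM); 'idata' places it */
    /* in on-chip data RAM, which is volatile and lost when power is removed. */
    static unsigned char code  sine_table[8] = {   /* fixed table, lives in ROM */
        128, 218, 255, 218, 128, 38, 0, 38
    };
    static unsigned char idata sample_buf[8];      /* working buffer, lives in RAM */

    void copy_table(void)
    {
        unsigned char i;
        for (i = 0; i < 8; i++) {
            sample_buf[i] = sine_table[i];         /* ROM-to-RAM copy at run time */
        }
    }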
INPUT/OUTPUT
Microcontrollers typically have a number of programmable input/output (I/O) pins that can be used to interface with external sensors, switches, actuators, and other devices. These pins can be programmed to perform specific functions,
such as reading a sensor value, controlling a motor, or generating a signal. Many microcontrollers also support communication protocols like serial, parallel, and USB, allowing them to interface with other devices, including other microcontrollers, computers, and smartphones.
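Since serial ports are among the most common of these communication peripherals, the sketch below shows a classic polled UART setup for an 8051-family part in Keil C51-style C. The register names (SCON, TMOD, TH1, TL1, TR1, TI, SBUF) are standard 8051 SFRs; the 11.0592 MHz crystal and 9600-baud reload value are conventional example choices rather than requirements.

    #include <reg51.h>

    /* 9600 baud, 8N1, timer 1 in 8-bit auto-reload mode, 11.0592 MHz crystal. */
    void uart_init(void)
    {
        SCON  = 0x50;       /* serial mode 1 (8-bit UART), receiver enabled   */
        TMOD |= 0x20;       /* timer 1, mode 2 (8-bit auto-reload)            */
        TH1   = 0xFD;       /* reload value for 9600 baud at 11.0592 MHz      */
        TL1   = 0xFD;
        TR1   = 1;          /* start timer 1 as the baud-rate generator       */
    }

    void uart_putc(char c)
    {
        SBUF = c;           /* write the byte to the serial buffer            */
        while (!TI) ;       /* wait until the transmit flag is set            */
        TI = 0;             /* clear the flag for the next byte               */
    }

    void uart_puts(const char *s)
    {
        while (*s) {
            uart_putc(*s++);
        }
    }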
APPLICATIONS
Microcontrollers are widely used in a variety of applications, including:
- Home automation systems
- Automotive electronics
- Medical devices
- Industrial control systems
- Consumer electronics
- Robotics
CONCLUSION
In conclusion, microcontrollers are powerful and versatile devices that have become an essential component in many embedded systems. With their small size, low power consumption, and high level of integration, microcontrollers offer an effective and cost-efficient solution for controlling a wide range of devices and applications.