(Complete Word Version) Foreign-Language Microcontroller Literature with Translations

Microcontroller English Literature Materials and Translation

Microcontroller (单片机)

A microcontroller is a small computer on a single integrated circuit that contains a processor core, memory, and programmable input/output peripherals. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications. A microcontroller's processor core is typically a small, low-power computer dedicated to controlling the operation of the device in which it is embedded. It is often designed to provide efficient and reliable control of simple, repetitive tasks, such as switching lights on and off or monitoring temperature or pressure sensors.

MEMORY
Microcontrollers typically have a limited amount of memory, divided into program memory and data memory. The program memory, where the software that controls the device is stored, is often a type of read-only memory (ROM). The data memory is used to store data operated on by the program and is often volatile, meaning that it loses its contents when power is removed.

INPUT/OUTPUT
Microcontrollers typically have a number of programmable input/output (I/O) pins that can be used to interface with external sensors, switches, actuators, and other devices. These pins can be programmed to perform specific functions, such as reading a sensor value, controlling a motor, or generating a signal (a minimal sketch of such pin control appears at the end of this article). Many microcontrollers also support communication protocols such as serial, parallel, and USB, allowing them to interface with other devices, including other microcontrollers, computers, and smartphones.

APPLICATIONS
Microcontrollers are widely used in a variety of applications, including:
- Home automation systems
- Automotive electronics
- Medical devices
- Industrial control systems
- Consumer electronics
- Robotics

CONCLUSION
In conclusion, microcontrollers are powerful and versatile devices that have become an essential component of many embedded systems. With their small size, low power consumption, and high level of integration, microcontrollers offer an effective and cost-efficient solution for controlling a wide range of devices and applications.
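As the minimal illustration of pin control referenced in the INPUT/OUTPUT section above, here is a hedged sketch for an 8051-family part. The <reg51.h> header is the Keil/SDCC convention, and the LED-on-P1.0 wiring and the delay constant are illustrative assumptions, not anything specified in the text.

```c
#include <reg51.h>          /* Keil-style 8051 register definitions (assumption) */

sbit LED = P1^0;            /* hypothetical LED wired to port pin P1.0 */

/* Crude busy-wait delay; the constant is illustrative, not calibrated. */
static void delay(unsigned int n)
{
    unsigned int i;
    while (n--)
        for (i = 0; i < 120; i++)
            ;               /* burn cycles */
}

void main(void)
{
    while (1) {
        LED = !LED;         /* toggle the output pin */
        delay(500);
    }
}
```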

(Complete Word Version) The 51 Microcontroller in English with Translation

Original English Text: Introduction to the 51 Microcontroller

The basic components of a microcontroller are a central processing unit (the arithmetic/logic unit and controller that make up the CPU), read-only memory (usually denoted ROM), read/write memory (also known as random access memory, usually denoted RAM), and input/output ports (divided into parallel and serial ports, denoted I/O ports). A microcontroller also contains a clock circuit, so that its operation and control proceed in a rhythmic, step-by-step fashion. In addition, there is a so-called interrupt system, which acts as a "gatekeeper": when a parameter of the controlled object reaches a state that requires intervention, the "gatekeeper" notifies the CPU, so that the CPU can prioritize the external event and take appropriate countermeasures.

Microcontrollers are used in a multitude of commercial applications such as modems, motor-control systems, air conditioner control systems, and automotive engine control, among others. The high processing speed and enhanced peripheral set of these microcontrollers make them suitable for such high-speed, event-driven applications. However, these critical application domains also require that the microcontrollers be highly reliable. High reliability and low market risk can be ensured by a robust testing process and a proper tools environment for the validation of these microcontrollers at both the component and the system level. Intel's Platform Engineering department developed an object-oriented, multi-threaded test environment for the validation of its AT89C51 automotive microcontrollers. The goal of this environment was not only to provide a robust testing environment for the AT89C51 automotive microcontrollers, but also to develop an environment that can easily be extended and reused for the validation of several other future microcontrollers. The environment was developed in conjunction with the Microsoft Foundation Classes (MFC). The paper describes the design and mechanism of this test environment, its interactions with various hardware/software environmental components, and how to use the AT89C51.

Early microcontrollers were 4-bit or 8-bit parts. One of the most successful was the Intel 8031, widely praised for being simple, reliable, and well-performing. The MCS-51 microcontroller family was then developed from the 8031, and single-chip systems based on this family remain in wide use today. As the requirements of the industrial control field grew, 16-bit microcontrollers appeared, but they never saw very wide use because of their cost. After the 1990s, with the great development of consumer electronics, microcontroller technology advanced enormously. With Intel's i960 series, and especially the later, widely used ARM series, 32-bit microcontrollers quickly displaced 16-bit parts from the high end and entered the mainstream market. The performance of traditional 8-bit microcontrollers has also increased rapidly, with processing capability dozens of times higher than in the 1980s. Currently, high-end 32-bit microcontrollers are clocked above 300 MHz, with performance approaching that of mid-1990s dedicated processors, while average models have fallen to about one U.S. dollar and even the most high-end models cost only about ten dollars. Modern single-chip systems are no longer developed and used only in a bare-metal environment; a large number of proprietary embedded operating systems are widely used across the full range of microcontrollers.
High-end microcontrollers serving as the core processors of handheld computers and cell phones can even run dedicated Windows and Linux operating systems.

A microcontroller depends on its program, and the program can be modified. Different programs realize different functions, including some unique functions that would take enormous effort to implement with other devices, and some that could hardly be achieved at all. A function of even modest complexity, if implemented purely in hardware with the American 74-series logic of the 1950s or the CD4000 series of the 1960s, would require a very large PCB! But implement the same function with one of the microcontroller series that succeeded in the market in the 1970s, and the result is drastically different, because a program written for the microcomputer can achieve high intelligence, high efficiency, and high reliability.

Introduction

The 8-bit AT89C51 CHMOS microcontrollers are designed to handle high-speed calculations and fast input/output operations. MCS-51 microcontrollers are typically used for high-speed event control systems. Commercial applications include modems, motor-control systems, printers, photocopiers, air conditioner control systems, disk drives, and medical instruments. The automotive industry uses MCS-51 microcontrollers in engine-control systems, airbags, suspension systems, and antilock braking systems (ABS). The AT89C51 is especially well suited to applications that benefit from its processing speed and enhanced on-chip peripheral function set, such as automotive power-train control, vehicle dynamic suspension, antilock braking, and stability control applications. Because of these critical applications, the market requires a reliable, cost-effective controller with low interrupt latency, the ability to service the large number of time- and event-driven integrated peripherals needed in real-time applications, and a CPU with above-average processing power, all in a single package. The financial and legal risk of having devices that operate unpredictably is very high. Once in the market, particularly in mission-critical applications such as an autopilot or an antilock braking system, mistakes are financially prohibitive. Redesign costs can run as high as $500K, much more if the fix means back-annotating it across a product family that shares the same core and/or peripheral design flaw. In addition, field replacement of components is extremely expensive, as the devices are typically sealed in modules with a total value several times that of the component. To mitigate these problems, it is essential that comprehensive testing of the controllers be carried out at both the component level and the system level under worst-case environmental and voltage conditions. This complete and thorough validation necessitates not only a well-defined process but also a proper environment and tools to facilitate and execute the mission successfully. Intel's Chandler Platform Engineering group provides post-silicon system validation (SV) of various microcontrollers and processors. The system validation process can be broken into three major parts. The type of the device and its application requirements determine which types of testing are performed on the device.

The AT89C51 provides the following standard features: 4K bytes of flash, 128 bytes of RAM, 32 I/O lines, two 16-bit timer/counters, a five-vector two-level interrupt architecture, a full-duplex serial port, and on-chip oscillator and clock circuitry. In addition, the AT89C51 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue functioning. The Power-down Mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next hardware reset.
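Entering the two power-saving modes just described amounts to setting bits in the PCON special function register. The following is a minimal hedged sketch, assuming a Keil-style <reg51.h> header; the bit masks 0x01 (IDL) and 0x02 (PD) follow the standard 8051 PCON layout.

```c
#include <reg51.h>      /* Keil-style 8051 register definitions (assumption) */

#define PCON_IDL 0x01   /* Idle mode bit (IDL) in PCON */
#define PCON_PD  0x02   /* Power-down mode bit (PD) in PCON */

/* Enter Idle Mode: the CPU stops while RAM, timers, serial port
 * and interrupts keep running; any enabled interrupt (or a reset)
 * wakes the part. */
void enter_idle(void)
{
    PCON |= PCON_IDL;
}

/* Enter Power-down Mode: RAM is preserved, the oscillator is
 * frozen, and only a hardware reset brings the part back. */
void enter_power_down(void)
{
    PCON |= PCON_PD;
}
```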
Pin Description

VCC: Supply voltage.

GND: Ground.

Port 0: Port 0 is an 8-bit open-drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to Port 0 pins, the pins can be used as high-impedance inputs. Port 0 may also be configured to be the multiplexed low-order address/data bus during accesses to external program and data memory; in this mode, P0 has internal pull-ups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification. External pull-ups are required during program verification.

Port 1: Port 1 is an 8-bit bidirectional I/O port with internal pull-ups. The Port 1 output buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins, they are pulled high by the internal pull-ups and can be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pull-ups. Port 1 also receives the low-order address bytes during Flash programming and verification.

Port 2: Port 2 is an 8-bit bidirectional I/O port with internal pull-ups. The Port 2 output buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins, they are pulled high by the internal pull-ups and can be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pull-ups. Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses (MOVX @DPTR); in this application it uses strong internal pull-ups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @Ri), Port 2 emits the contents of the P2 Special Function Register. Port 2 also receives the high-order address bits and some control signals during Flash programming and verification.

Port 3: Port 3 is an 8-bit bidirectional I/O port with internal pull-ups. The Port 3 output buffers can sink/source four TTL inputs. When 1s are written to Port 3 pins, they are pulled high by the internal pull-ups and can be used as inputs. As inputs, Port 3 pins that are externally being pulled low will source current (IIL) because of the pull-ups.

RST: Reset input. A high on this pin for two machine cycles while the oscillator is running resets the device.

ALE/PROG: Address Latch Enable is an output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming. In normal operation, ALE is emitted at a constant rate of 1/6 of the oscillator frequency and may be used for external timing or clocking purposes. Note, however, that one ALE pulse is skipped during each access to external data memory. If desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction; otherwise, the pin is weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in external execution mode.
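A minimal sketch of disabling ALE as just described; the register name AUXR for SFR location 8EH follows Atmel's naming for its derivatives (an assumption here), and the sfr declaration is Keil-style syntax.

```c
#include <reg51.h>

sfr AUXR = 0x8E;        /* SFR at location 8EH (named AUXR on Atmel parts; assumption) */

/* Disable ALE toggling except during MOVX/MOVC instructions,
 * reducing switching noise when external memory is not in use. */
void disable_ale(void)
{
    AUXR |= 0x01;       /* set bit 0: ALE-disable */
}
```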
PSEN: Program Store Enable is the read strobe to external program memory. When the AT89C51 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.

EA/VPP: External Access Enable. EA must be strapped to GND to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset. EA should be strapped to VCC for internal program execution. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming, for parts that require 12-volt VPP.

The AT89C51 code memory array is programmed byte by byte in either programming mode. To program any non-blank byte in the on-chip Flash memory, the entire memory must be erased using the Chip Erase mode.

Data Polling: The AT89C51 features Data Polling to indicate the end of a write cycle. During a write cycle, an attempted read of the last byte written will result in the complement of the written datum on P0.7. Once the write cycle has been completed, true data are valid on all outputs, and the next cycle may begin. Data Polling may begin at any time after a write cycle has been initiated.

Ready/Busy: The progress of byte programming can also be monitored by the RDY/BSY output signal. P3.4 is pulled low after ALE goes high during programming to indicate BUSY. P3.4 is pulled high again when programming is done to indicate READY.

Program Verify: If lock bits LB1 and LB2 have not been programmed, the programmed code data can be read back via the address and data lines for verification. The lock bits cannot be verified directly; verification of the lock bits is achieved by observing that their features are enabled.
A microcomputer interface converts information between two forms. Outside the microcomputer, the information handled by an electronic system exists as a physical signal, but within the program it is represented numerically. The function of any interface can be broken down into a number of operations that modify the data in some way, so that the process of conversion between the external and internal forms is carried out in a number of steps.

An analog-to-digital converter (ADC) is used to convert a continuously variable signal to a corresponding digital form that can take any one of a fixed number of possible binary values. If the output of the transducer does not vary continuously, no ADC is necessary. In this case the signal-conditioning section must convert the incoming signal to a form that can be connected directly to the next part of the interface, the input/output section of the microcomputer itself.

Output interfaces take a similar form, the obvious difference being that here the flow of information is in the opposite direction: it is passed from the program to the outside world. In this case the program may call an output subroutine which supervises the operation of the interface and performs the scaling of numbers that may be needed for a digital-to-analog converter (DAC). This subroutine passes information in turn to an output device which produces a corresponding electrical signal, which could be converted into analog form using a DAC. Finally, the signal is conditioned (usually amplified) to a form suitable for operating an actuator.

The signals used within microcomputer circuits are almost always too small to be connected directly to the "outside world", and some kind of interface must be used to translate them to a more appropriate form. The design of interface circuits is one of the most important tasks facing the engineer wishing to apply microcomputers. We have seen that in microcomputers information is represented as discrete patterns of bits; this digital form is most useful when the microcomputer is to be connected to equipment which can only be switched on or off, where each bit might represent the state of a switch or actuator.

To solve real-world problems, a microcontroller must have more than just a CPU, a program, and data memory. In addition, it must contain hardware allowing the CPU to access information from the outside world. Once the CPU gathers information and processes the data, it must also be able to effect change on some portion of the outside world. These hardware devices, called peripherals, are the CPU's window to the outside.

The most basic form of peripheral available on microcontrollers is the general-purpose I/O port. Each of the I/O pins can be used as either an input or an output. The function of each pin is determined by setting or clearing corresponding bits in a corresponding data direction register during the initialization stage of a program. Each output pin may be driven to either a logic one or a logic zero using CPU instructions, and the logic state of each input pin may be viewed (or read) by the CPU using program instructions.

Some type of serial unit is included on microcontrollers to allow the CPU to communicate bit-serially with external devices. Using a bit-serial format instead of a bit-parallel format requires fewer I/O pins to perform the communication function, which makes it less expensive, but slower. Serial transmissions are performed either synchronously or asynchronously; a minimal configuration sketch follows.
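As a concrete example of such a bit-serial unit, here is a minimal hedged sketch of configuring the 8051's on-chip UART for polled transmission. The register usage follows the standard MCS-51 SFR layout; the reload value 0xFD for 9600 baud assumes an 11.0592 MHz crystal.

```c
#include <reg51.h>   /* Keil-style 8051 register definitions (assumption) */

/* Configure the on-chip UART: mode 1 (8-bit, variable baud rate),
 * receiver enabled, Timer 1 in 8-bit auto-reload mode as the
 * baud-rate generator. */
void uart_init(void)
{
    SCON = 0x50;     /* serial mode 1, REN = 1 */
    TMOD &= 0x0F;    /* clear Timer 1 configuration bits */
    TMOD |= 0x20;    /* Timer 1: mode 2, 8-bit auto-reload */
    TH1  = 0xFD;     /* reload value for 9600 baud @ 11.0592 MHz */
    TR1  = 1;        /* start Timer 1 */
}

/* Transmit one byte by polling the TI flag. */
void uart_putc(unsigned char c)
{
    SBUF = c;
    while (!TI)      /* wait for transmit-complete */
        ;
    TI = 0;          /* clear the flag for the next byte */
}
```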
Its Applications

Microcontrollers are widely used in instruments and meters, household appliances, medical equipment, aerospace, specialized equipment, intelligent management, and process control, roughly divided into the following areas. A microcontroller has the advantages of small size, low power consumption, strong control capability, flexible expansion, miniaturization, and ease of use, and is widely applied in instrumentation. Combined with different types of sensors, it can measure quantities such as voltage, power, frequency, humidity, temperature, flow, speed, thickness, angle, length, hardness, elemental composition, and pressure. Microcontrollers make instruments digital, intelligent, and miniaturized, with functionality more powerful than designs built from electronic or purely digital circuits, for example precision measuring equipment (power meters, oscilloscopes, and various analytical instruments).

Translation: Introduction to the 51 Microcontroller. The basic components of a microcontroller are the central processing unit (the arithmetic/logic unit and controller of the CPU), read-only memory (usually denoted ROM), read/write memory (also known as random access memory, usually denoted RAM), input/output ports (divided into parallel and serial ports, denoted I/O ports), and so on.

Microcontroller English Literature and Translation

Validation and Testing of Design Hardening for Single Event Effects Using the 8051 Microcontroller

Abstract

With the dearth of dedicated radiation-hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we will discuss the implications of validating these methods for single event effects (SEE) in the space environment. Topics include the types of tests that are required and the design coverage (i.e., design libraries: do they need validating for each application?). Finally, an 8051 microcontroller core from the NASA Institute of Advanced Microelectronics (IAμE) CMOS Ultra Low Power Radiation Tolerant (CULPRiT) design is evaluated for SEE mitigative techniques against two commercial 8051 devices.

Index Terms: Single Event Effects, Hardened-By-Design, microcontroller, radiation effects.

I. INTRODUCTION

NASA constantly strives to provide the best capture of science while operating in a space radiation environment using a minimum of resources [1,2]. With a relatively limited selection of radiation-hardened microelectronic devices that are often two or more generations of performance behind commercial state-of-the-art technologies, NASA's performance of this task is quite challenging. One method of alleviating this is the use of commercial foundry alternatives with no or minimally invasive design techniques for hardening. This is often called hardened-by-design (HBD). Building custom-type HBD devices using design libraries and automated design tools may provide NASA the solution it needs to meet stringent science performance specifications in a timely, cost-effective, and reliable manner. However, one question still exists: traditional radiation-hardened devices have lot and/or wafer radiation qualification tests performed; what types of tests are required for HBD validation?

II. TESTING HBD DEVICES CONSIDERATIONS

Test methodologies exist in the United States to qualify individual devices through standards and organizations such as ASTM, JEDEC, and MIL-STD-883. Typically, TID (Co-60) and SEE (heavy ion and/or proton) testing are required for device validation. So what is unique to HBD devices? As opposed to a "regular" commercial-off-the-shelf (COTS) device or application-specific integrated circuit (ASIC) where no hardening has been performed, one needs to determine how well validated the design library is, as opposed to determining the hardness of a single device. That is, by using test chips, can we "qualify" a future device using the same library?

Consider if Vendor A has designed a new HBD library portable to foundries B and C. A test chip is designed, tested, and deemed acceptable. Nine months later a NASA flight project enters the mix by designing a new device using Vendor A's library. Does this device require complete radiation qualification testing? To answer this, other questions must be asked. How complete was the test chip? Was there sufficient statistical coverage of all library elements to validate each cell? If the new NASA design uses a partially or insufficiently characterized portion of the design library, full testing might be required. Of course, if part of the HBD was relying on the inherent radiation hardness of a process, some of the tests (like SEL in the earlier example) may be waived. Other considerations include speed of operation and operating voltage. For example, if the test chip was tested statically for SEE at a power supply voltage of 3.3V, is the data applicable to a 100 MHz operating frequency at 2.5V?
Dynamic considerations (i.e., non-static operation) include the propagated effects of Single Event Transients (SETs). These can be a greater concern at higher frequencies. The point of these considerations is that the design library must be known, the coverage used during testing must be known, the test application must be thoroughly understood, and the characteristics of the foundry must be known. If all these are applicable or have been validated by the test chip, then no testing may be necessary. A task within NASA's Electronic Parts and Packaging (NEPP) Program was performed to explore these types of considerations.

III. HBD TECHNOLOGY EVALUATION USING THE 8051 MICROCONTROLLER

With their increasing capabilities and lower power consumption, microcontrollers are increasingly being used in NASA and DOD system designs. There are existing NASA and DoD programs that are doing technology development to provide HBD. Microcontrollers are one such vehicle that is being investigated to quantify the radiation hardness improvement. Examples of these programs are the 8051 microcontroller being developed by Mission Research Corporation (MRC) and the one from IAμE (the focus of this study). As these HBD technologies become available, validation of the technology in the natural space radiation environment is required before NASA can use it in spaceflight systems.

The 8051 microcontroller is an industry-standard architecture that has broad acceptance, wide-ranging applications, and development tools available. There are numerous commercial vendors that supply this controller or have it integrated into some type of system-on-a-chip structure. Both MRC and IAμE chose this device to demonstrate two distinctly different technologies for hardening. The MRC approach uses temporal latches that require specific timing to ensure that single event effects are minimized. The IAμE technology uses ultra-low power, plus layout and architecture HBD design rules, to achieve its results. These are fundamentally different from the approach of Aeroflex-United Technologies Microelectronics Center (UTMC), the commercial vendor of a radiation-hardened 8051, which built its 8051 microcontroller using radiation-hardened processes. This broad range of technology within one device structure makes the 8051 an ideal vehicle for performing this technology evaluation.

The objective of this work is the technology evaluation of the CULPRiT process [3] from IAμE. The process has been baselined against two other processes: the standard 8051 commercial device from Intel and a version using state-of-the-art processing from Dallas Semiconductor. By performing this side-by-side comparison, the cost-benefit, performance, and reliability trade study can be done. In the performance of the technology evaluation, this task developed hardware and software for testing microcontrollers. A thorough process was followed to optimize the test process and obtain as complete an evaluation as possible. This included taking advantage of the available hardware and writing software that exercised the microcontroller such that all substructures of the processor were evaluated. This process is also leading to a more complete understanding of how to test complex structures, such as microcontrollers, and how to test these structures more efficiently in the future.

IV. TEST DEVICES

Three devices were used in this test evaluation. The first is the NASA CULPRiT device, which is the primary device to be evaluated.
The other two devices are two versions of a commercial 8051, manufactured by Intel and Dallas Semiconductor, respectively. The Intel devices are the ROMless CMOS version of the classic 8052 MCS-51 microcontroller. They are rated for operation at +5V over a temperature range of 0 to 70 °C and at clock speeds of 3.5 MHz to 24 MHz. They are manufactured in Intel's P629.0 CHMOS III-E process.

The Dallas Semiconductor devices are similar in that they are ROMless 8052 microcontrollers, but they are enhanced in various ways. They are rated for operation from 4.25 to 5.5 Volts over 0 to 70 °C at clock speeds up to 25 MHz. They have a second full serial port built in, seven additional interrupts, a watchdog timer, a power-fail reset, dual data pointers, and variable-speed peripheral access. In addition, the core is redesigned so that the machine cycle is shortened for most instructions, resulting in an effective processing ability that is roughly 2.5 times greater (faster) than the standard 8052 device. None of these features, other than those inherent in the device operation, were utilized, in order to maximize the similarity between the Dallas and Intel test codes.

The CULPRiT technology device is a version of the MCS-51 family compatible C8051 HDL core licensed from the Ultra Low Power (ULP) process foundry. The CULPRiT technology C8051 device is designed to operate at a supply voltage of 500 mV and includes an on-chip input/output signal level-shifting interface to conventional higher-voltage parts. The CULPRiT C8051 device requires two separate supply voltages: the 500 mV supply and the desired interface voltage. The CULPRiT C8051 is ROMless and is intended to be instruction-set compatible with the MCS-51 family.

V. TEST HARDWARE

The 8051 Device Under Test (DUT) was tested as a component of a functional computer. Aside from the DUT itself, the other components of the DUT computer were removed from the immediate area of the irradiation beam. A small card (one per DUT package type) with a unique hard-wired identifier byte contained the DUT, its crystal, and bypass capacitors (and voltage level shifters for the CULPRiT DUTs). This "DUT Board" was connected to the "Main Board" by a short 60-conductor ribbon cable. The Main Board had all other components required to complete the DUT Computer, including some which nominally are not necessary in some designs (such as external RAM, external ROM, and an address latch). The DUT Computer and the Test Control Computer were connected via a serial cable, and communications were established between the two by the Controller (which runs custom-designed serial interface software). This Controller software allowed for commanding of the DUT, downloading DUT code to the DUT, and real-time error collection from the DUT during and after irradiation. A 1 Hz signal source provided an external watchdog timing signal to the DUT, whose watchdog output was monitored via an oscilloscope. The power supply was monitored to provide indication of latchup.

VI. TEST SOFTWARE

The 8051 test software concept is straightforward. It was designed as a modular series of small test programs, each exercising a specific part of the DUT. Since each test was stand-alone, the tests were loaded independently of each other for execution on the DUT. This ensured that only the desired portion of the 8051 DUT was exercised during the test and helped pinpoint the location of errors that occurred during testing. All test programs resided on the controller PC until loaded via the serial interface to the DUT computer.
In this way, individual tests could be modified at any time without the necessity of burning PROMs. Additional tests could also be developed and added without impacting the overall test design. The only permanent code resident on the DUT was the boot code and serial code loader routines that established communications between the controller PC and the DUT. All test programs implemented:

• an external Universal Asynchronous Receiver and Transmitter (UART) for transmission of error information and communication to the controller computer;
• an external real-time clock for tagging data errors;
• a watchdog routine designed to provide visual verification of 8051 health and to restart the test code if necessary;
• a "foul-up" routine to reset the program counter if it wanders out of code space;
• an external telemetry data storage memory to provide backup of data in the event of an interruption in data transmission.

A brief description of each of the software tests used is given below. It should be noted that for each test, the returned telemetry (including time tag) was sent to both the test controller and the telemetry memory, giving the highest confidence that all data were captured.

Interrupt – This test used 4 of the 6 available interrupt vectors (Serial, External, Timer0 Overflow, and Timer1 Overflow) to trigger routines that sequentially modified a value in the accumulator, which was periodically compared to a known value. Unexpected values were transmitted with register information.

Logic – This test performed a series of logic and math computations and provided three types of error identification: 1) addition/subtraction, 2) logic, and 3) multiplication/division. All miscompares between computations and expected results were transmitted with other relevant register information.

Memory – This test loaded internal data memory at locations D:0x20 through D:0xFF (or D:0x20 through D:0x80 for the CULPRiT DUT), indirectly, with an 0x55 pattern. Compares were performed continuously, and miscompares were corrected while error information and register values were transmitted. (A minimal code sketch appears after this list of tests.)

Program Counter – The program counter was used to continuously fetch constants at various offsets in the code. Constants were compared with known values, and miscompares were transmitted along with relevant register information.

Registers – This test loaded each of the four banks (0, 1, 2, 3) of general-purpose registers with either 0xAA (for banks 0 and 2) or 0x55 (for banks 1 and 3). The pattern was alternated in order to test the Program Status Word (PSW) special function register, which controls general-purpose register bank selection. The general-purpose register banks were then compared with their expected values. All miscompares were corrected, and error information was transmitted.

Special Function Registers (SFR) – This test used learned static values of 12 of the 21 available SFRs and then constantly compared the learned value with the current one. Miscompares were reloaded with the learned value, and error information was transmitted.

Stack – This test performed arithmetic by pushing and popping operands on the stack. Unexpected results were attributed to errors on the stack or in the stack pointer itself and were transmitted with relevant register information.
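The following is a minimal sketch of the kind of fill/compare/scrub loop the Memory test above describes, written in Keil-style 8051 C. The shortened address range, the reporting hook, and the assumption that the test's own variables and stack live outside the region under test are all illustrative simplifications, not the paper's actual code.

```c
#include <reg51.h>

#define PATTERN 0x55
#define START   0x20
#define COUNT   0x60   /* covers 0x20..0x7F; the real test reached 0xFF
                          with the stack relocated out of the region */

/* Hypothetical reporting hook: sends address/value telemetry over the UART. */
extern void report_error(unsigned char addr, unsigned char value);

void memory_test(void)
{
    unsigned char idata *p;   /* pointer into internal (idata) RAM */
    unsigned char n;

    /* Fill internal data memory with the test pattern. */
    p = (unsigned char idata *)START;
    for (n = 0; n < COUNT; n++)
        p[n] = PATTERN;

    /* Continuously compare, report, and scrub miscompares. */
    for (;;) {
        for (n = 0; n < COUNT; n++) {
            if (p[n] != PATTERN) {
                report_error(START + n, p[n]);  /* telemetry */
                p[n] = PATTERN;                 /* correct the upset */
            }
        }
    }
}
```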
VII. TEST METHODOLOGY

The DUT Computer booted by executing the instruction code located at address 0x0000. Initially, the device at this location was an EPROM previously loaded with "Boot/Serial Loader" code. This code initialized the DUT Computer and its interface, through a serial connection, to the controlling computer, the "Test Controller". The DUT Computer downloaded Test Code and put it into Program Code RAM (located on the Main Board of the DUT Computer). It then activated a circuit which simultaneously performed two functions: it held the DUT reset line active for some time (~10 ms), and it remapped the Test Code residing in the Program Code RAM to address 0x0000 (the EPROM being no longer accessible in the DUT Computer's memory space). Upon awaking from the reset, the DUT Computer again booted by executing the instruction code at address 0x0000, except this time that code was not the Boot/Serial Loader code but the Test Code.

The Test Control Computer always retained the ability to force the reset/remap function, regardless of the DUT Computer's functionality. Thus, if the test ran without a Single Event Functional Interrupt (SEFI), either the DUT Computer itself or the Test Controller could terminate the test and allow the post-test functions to be executed. If a SEFI occurred, the Test Controller forced a reboot into Boot/Serial Loader code and then executed the post-test functions. During any test of the DUT, the DUT exercised a portion of its functionality (e.g., register operations, internal RAM checks, or timer operations) at the highest utilization possible, while making a minimal periodic report to the Test Control Computer to convey that the DUT Computer was still functional. If this report ceased, the Test Controller knew that a SEFI had occurred. This periodic data was called "telemetry". If the DUT encountered an error that did not interrupt its functionality (e.g., a data register miscompare), it sent a lengthier report through the serial port describing that error and continued with the test.

VIII. DISCUSSION

A. Single Event Latchup

The main argument for why latchup is not an issue for the CULPRiT devices is that the operating voltage of 0.5 Volts should be below the holding voltage required for latchup to occur. In addition to this, the cell library used also incorporates the heavy dual guard-barring scheme [4]. This scheme has been demonstrated multiple times to be very effective in rendering CMOS circuits completely immune to SEL up to test limits of 120 MeV-cm2/mg. This is true in circuits operating at 5, 3.3, and 2.5 Volts, as well as the 0.5 Volt CULPRiT circuits. In one case, a 5 Volt circuit fabricated on noncircuits wafers even exhibited such SEL immunity.

B. Single Event Upset

The primary structure of the storage unit used in the CULPRiT devices is the Single Event Resistant Topology (SERT) [5]. Given the SERT cell topology and a single-upset-node assumption, it is expected that the SERT cell will be completely immune to SEUs occurring internal to the memory cell itself. Obviously, there are other things going on. The CULPRiT 8051 results reported here are quite similar to some results obtained with a CULPRiT CCSDS lossless compression chip (USES) [6]. The CULPRiT USES was synthesized using exactly the same tools and library as the CULPRiT 8051. With the CULPRiT USES, the SEU cross-section data [7] were taken as a function of frequency at two LET values, 37.6 and 58.5 MeV-cm2/mg. In both cases the data fit well to a linear model in which cross section is proportional to clock frequency.
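Expressed as a formula (a hedged reconstruction; the symbol names are ours, not the paper's), the linear model just described is

$$\sigma(f) = \sigma_0 + m\,f$$

where σ(f) is the SEU cross section at clock frequency f, the slope m captures the frequency-dependent contribution of SETs captured from the combinational logic, and σ0 is the zero-frequency ("dc-bias") intercept. The two LET cases discussed next differ precisely in whether σ0 is essentially zero.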
In the LET 37.6 case, the zero-frequency intercept occurred essentially at the zero cross-section point, indicating that virtually all of these SEUs are captured SETs from the combinational logic. The LET 58.5 data indicated that the SET (frequency-dependent) component is sitting on top of a "dc-bias" component; presumably a second upset mechanism occurs internal to the SERT cells only at a second, higher LET threshold.

The SET mitigation scheme used in the CULPRiT devices is based on the SERT cell's fault-tolerant input property when redundant input data is provided to separate storage nodes. The idea is that the redundant input data is provided through a total duplication of combinational logic (referred to as "dual rail design") such that a simple SET on one rail cannot produce an upset. Therefore, some other upset mechanism must be happening. It is possible that a single particle strike is placing an SET on both halves of the logic streams, allowing an SET to produce an upset. Care was taken to separate the dual sensitive nodes in the SERT cell layouts, but the automated place-and-route of the combinatorial logic paths may have placed dual sensitive nodes close enough together.

At this point, the theory for the CULPRiT SEU response is that at an LET of about 20, the energy deposition is sufficiently wide (and in the right locations) to produce an SET in both halves of the combinatorial logic streams. Increasing LET allows more regions to be sensitive to this effect, yielding a larger cross section. Further, the second SEU mechanism that starts at an LET of about 40-60 has to do with the charge-collection disturbance cloud becoming large enough to effectively upset multiple redundant storage nodes within the SERT cell itself. In this 0.35 μm library, the node separation is several microns. However, since it takes less charge to upset a node operating at 0.5 Volts, with transistors having effective thresholds around 70 mV, this is likely the effect being observed. Also, the fact that the per-bit memory upset cross section for the CULPRiT devices and the commercial technologies are approximately equal, as shown in Figure 9, indicates that the cell itself has become sensitive to upset.

IX. SUMMARY

A detailed comparison of the SEE sensitivity of an HBD technology (CULPRiT), utilizing the 8051 microcontroller as a test vehicle, has been completed. This paper discusses the test methodology used and presents a comparison of the commercial versus CULPRiT technologies based on the data taken. The CULPRiT devices consistently show significantly higher threshold LETs and an immunity to latchup. In all but the memory test at the highest LETs, the cross-section curves for all upset events are one to two orders of magnitude lower than those of the commercial devices. Additionally, a theory based on the CULPRiT technology is presented that explains these results. This paper also demonstrates the test methodology for quantifying the level of hardness designed into an HBD technology. By using the HBD technology in a real-world device structure (i.e., not just a test chip) and comparing results to equivalent commercial devices, one can have confidence in the level of hardness that would be available from that HBD technology in any circuit application.

ACKNOWLEDGEMENTS

The authors would like to acknowledge the sponsors of this work: the NASA Electronic Parts and Packaging Program (NEPP), NASA Flight Programs, and the Defense Threat Reduction Agency (DTRA).

Microcontroller English Literature and Translation

Appendix A: Original English Text

Temperature Control Using a Microcontroller: An Interdisciplinary Undergraduate Engineering Design Project

James S. McDonald
Department of Engineering Science
Trinity University
San Antonio, TX 78212

Abstract

This paper describes an interdisciplinary design project which was done under the author's supervision by a group of four senior students in the Department of Engineering Science at Trinity University. The objective of the project was to develop a temperature control system for an air-filled chamber. The system was to allow entry of a desired chamber temperature in a prescribed range and to exhibit overshoot and steady-state temperature error of less than 1 degree Kelvin in the actual chamber temperature step response. The details of the design developed by this group of students, based on a Motorola MC68HC05-family microcontroller, are described. The pedagogical value of the problem is also discussed through a description of some of the key steps in the design process. It is shown that the solution requires broad knowledge drawn from several engineering disciplines, including electrical, mechanical, and control systems engineering.

1 Introduction

The design project which is the subject of this paper originated from a real-world application. A prototype of a microscope slide dryer had been developed around an Omega model CN-390 temperature controller, and the objective was to develop a custom temperature control system to replace the Omega system. The motivation was that a custom controller targeted specifically at the application should be able to achieve the same functionality at a much lower cost, as the Omega system is unnecessarily versatile and equipped to handle a wide variety of applications.

The mechanical layout of the slide dryer prototype is shown in Figure 1. The main element of the dryer is a large, insulated, air-filled chamber in which microscope slides, each with a tissue sample encased in paraffin, can be set on caddies. In order that the paraffin maintain the proper consistency, the temperature in the slide chamber must be maintained at a desired (constant) temperature. A second chamber (the electronics enclosure) houses a resistive heater and the temperature controller, and a fan mounted on the end of the dryer blows air across the heater, carrying heat into the slide chamber.

This design project was carried out during academic year 1996-97 by four students under the author's supervision as a Senior Design project in the Department of Engineering Science at Trinity University. The purpose of this paper is to describe the problem and the students' solution in some detail, and to discuss some of the pedagogical opportunities offered by an interdisciplinary design project of this type. The students' own report was presented at the 1997 National Conference on Undergraduate Research [1]. Section 2 gives a more detailed statement of the problem, including performance specifications, and Section 3 describes the students' design. Section 4 makes up the bulk of the paper and discusses in some detail several aspects of the design process which offer unique pedagogical opportunities. Finally, Section 5 offers some conclusions.

2 Problem Statement

The basic idea of the project is to replace the relevant parts of the functionality of an Omega CN-390 temperature controller using a custom-designed system.
The application dictates that temperature settings are usually kept constant for long periods of time, but it is nonetheless important that step changes be tracked in a "reasonable" manner. Thus the main requirements boil down to:

· allowing a chamber temperature set-point to be entered,
· displaying both set-point and actual temperatures, and
· tracking step changes in set-point temperature with acceptable rise time, steady-state error, and overshoot.

Although not explicitly a part of the specifications in Table 1, it was clear that the customer desired digital displays of set-point and actual temperatures, and that set-point temperature entry should be digital as well (as opposed to, say, through a potentiometer setting).

3 System Design

The requirements for digital temperature displays and set-point entry alone are enough to dictate that a microcontroller-based design is likely the most appropriate. Figure 2 shows a block diagram of the students' design. The microcontroller, a Motorola MC68HC705B16 (6805 for short), is the heart of the system. It accepts inputs from a simple four-key keypad which allow specification of the set-point temperature, and it displays both set-point and measured chamber temperatures using two-digit seven-segment LED displays controlled by a display driver. All these inputs and outputs are accommodated by parallel ports on the 6805. Chamber temperature is sensed using a pre-calibrated thermistor and input via one of the 6805's analog-to-digital inputs. Finally, a pulse-width modulation (PWM) output on the 6805 is used to drive a relay which switches line power to the resistive heater off and on.

Figure 3 shows a more detailed schematic of the electronics and their interfacing to the 6805. The keypad, a Storm 3K041103, has four keys which are interfaced to pins PA0-PA3 of Port A, configured as inputs. One key functions as a mode switch. Two modes are supported: set mode and run mode. In set mode, two of the other keys are used to specify the set-point temperature: one increments it and one decrements it. The fourth key is unused at present. The LED displays are driven by a Harris Semiconductor ICM7212 display driver interfaced to pins PB0-PB6 of Port B, configured as outputs. The temperature-sensing thermistor drives, through a voltage divider, pin AN0 (one of eight analog inputs). Finally, pin PLMA (one of two PWM outputs) drives the heater relay.

Software on the 6805 implements the temperature control algorithm, maintains the temperature displays, and alters the set-point in response to keypad inputs. Because it is not complete at this writing, the software will not be discussed in detail in this paper. The control algorithm in particular has not been determined, but it is likely to be a simple proportional controller and certainly not more complex than a PID. Some control design issues will be discussed in Section 4, however.

4 The Design Process

Although essentially the project is just to build a thermostat, it presents many nice pedagogical opportunities. The knowledge and experience base of a senior engineering undergraduate are just enough to bring him or her to the brink of a solution to various aspects of the problem. Yet, in each case, real-world considerations complicate the situation significantly. Fortunately these complications are not insurmountable, and the result is a very beneficial design experience. The remainder of this section looks at a few aspects of the problem which present the type of learning opportunity just described.
Section 4.1 discusses some of the features of a simplified mathematical model of the thermal properties of the system and how it can be easily validated experimentally. Section 4.2 describes how realistic control algorithm designs can be arrived at using introductory concepts in control design. Section 4.3 points out some important deficiencies of such a simplified modeling/control design process and how they can be overcome through simulation. Finally, Section 4.4 gives an overview of some of the microcontroller-related design issues which arise and the learning opportunities they offer.

4.1 Mathematical Model

Lumped-element thermal systems are described in almost any introductory linear control systems text, and just this sort of model is applicable to the slide dryer problem. Figure 4 shows a second-order lumped-element thermal model of the slide dryer. The state variables are the temperatures Ta of the air in the box and Tb of the box itself. The inputs to the system are the power output q(t) of the heater and the ambient temperature T∞. ma and mb are the masses of the air and the box, respectively, and Ca and Cb their specific heats. μ1 and μ2 are heat transfer coefficients from the air to the box and from the box to the external world, respectively. It is not hard to show that the (linearized) state equations corresponding to Figure 4 are

$$m_a C_a \frac{dT_a}{dt} = q(t) - \mu_1 (T_a - T_b) \qquad (1)$$

$$m_b C_b \frac{dT_b}{dt} = \mu_1 (T_a - T_b) - \mu_2 (T_b - T_\infty) \qquad (2)$$

Taking Laplace transforms of (1) and (2) and solving for Ta(s), which is the output of interest, gives the following open-loop model of the thermal system:

$$T_a(s) = \frac{K(\tau_z s + 1)}{D(s)}\,Q(s) + \frac{K_\infty}{D(s)}\,T_\infty(s)$$

where K and K∞ are constants and D(s) is a second-order polynomial. K, τz, and the coefficients of D(s) are functions of the various parameters appearing in (1) and (2). Of course the various parameters in (1) and (2) are completely unknown, but it is not hard to show that, regardless of their values, D(s) has two real zeros. Therefore the main transfer function of interest (which is the one from Q(s), since we will assume constant ambient temperature) can be written

$$G_{aq}(s) = \frac{T_a(s)}{Q(s)} = \frac{K(\tau_z s + 1)}{(\tau_{p1} s + 1)(\tau_{p2} s + 1)} \qquad (3)$$

Moreover, it is not too hard to show that 1/τp1 < 1/τz < 1/τp2, i.e., that the zero lies between the two poles. Both of these are excellent exercises for the student, and the result is the open-loop pole-zero diagram of Figure 5.

Obtaining a complete thermal model, then, is reduced to identifying the constant K and the three unknown time constants in (3). Four unknown parameters is quite a few, but simple experiments show that 1/τp1 ≪ 1/τz, 1/τp2, so that τz ≈ 0 and τp2 ≈ 0 are good approximations. Thus the open-loop system is essentially first-order and can therefore be written

$$G_{aq}(s) = \frac{K}{\tau s + 1} \qquad (4)$$

(where the subscript p1 has been dropped). Simple open-loop step response experiments show that, for a wide range of initial temperatures and heat inputs, K ≈ 0.14 °/W and τ ≈ 295 s.

4.2 Control System Design

Using the first-order model of (4) for the open-loop transfer function Gaq(s), and assuming for the moment that linear control of the heater power output q(t) is possible, the block diagram of Figure 6 represents the closed-loop system. Td(s) is the desired, or set-point, temperature, C(s) is the compensator transfer function, and Q(s) is the heater output in watts. Given this simple situation, introductory linear control design tools such as the root locus method can be used to arrive at a C(s) which meets the step response requirements on rise time, steady-state error, and overshoot specified in Table 1. The upshot, of course, is that a proportional controller with sufficient gain can meet all specifications. Overshoot is impossible, and increasing gain decreases both steady-state error and rise time. Unfortunately, sufficient gain to meet the specifications may require larger heat outputs than the heater is capable of producing. This was indeed the case for this system, and the result is that the rise-time specification cannot be met. It is quite revealing to the student how useful such an oversimplified model, carefully arrived at, can be in determining overall performance limitations.
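A minimal sketch of the proportional control law discussed above, mapped onto a PWM duty cycle, is given below. All names, the gain value, and the fixed-point scaling are illustrative assumptions rather than the students' actual (unfinished) 6805 software.

```c
/* Proportional temperature control mapped to a PWM duty cycle.
 * Temperatures are in tenths of a degree; all constants illustrative. */

#define KP_NUM  8          /* proportional gain, numerator   */
#define KP_DEN  1          /* proportional gain, denominator */

/* Returns a duty cycle in percent (0..100) from the set-point
 * and measured temperatures. */
unsigned char heater_duty(int setpoint, int measured)
{
    long u = (long)(setpoint - measured) * KP_NUM / KP_DEN;

    if (u < 0)   u = 0;    /* the heater cannot cool */
    if (u > 100) u = 100;  /* saturate at full heater power */
    return (unsigned char)u;
}
```

Because the heater can only add heat (the duty cycle saturates at zero), overshoot is impossible with this structure, which matches the observation in the text.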
4.3 Simulation Model

Gross performance and its limitations can be determined using the simplified model of Figure 6, but there are a number of other aspects of the closed-loop system whose effects on performance are not so simply modeled. Chief among these are:

· quantization error in analog-to-digital conversion of the measured temperature, and
· the use of PWM to control the heater.

Both of these are nonlinear and time-varying effects, and the only practical way to study them is through simulation (or experiment, of course). Figure 7 shows a Simulink block diagram of the closed-loop system which incorporates these effects. A/D converter quantization and saturation are modeled using standard Simulink quantizer and saturation blocks. Modeling PWM is more complicated and requires a custom S-function to represent it. This simulation model has proven particularly useful in gauging the effects of varying the basic PWM parameters and hence selecting them appropriately. (I.e., the longer the period, the larger the temperature error PWM introduces. On the other hand, a long period is desirable to avoid excessive relay "chatter", among other things.) PWM is often difficult for students to grasp, and the simulation model allows an exploration of its operation and effects which is quite revealing.

4.4 The Microcontroller

Simple closed-loop control, keypad reading, and display control are some of the classic applications of microcontrollers, and this project incorporates all three. It is therefore an excellent all-around exercise in microcontroller applications. In addition, because the project is to produce an actual packaged prototype, it won't do to use a simple evaluation board with the I/O pins jumpered to the target system. Instead, it is necessary to develop a complete embedded application. This entails the choice of an appropriate part from the broad range offered in a typical microcontroller family and learning to use a fairly sophisticated development environment. Finally, a custom printed-circuit board for the microcontroller and peripherals must be designed and fabricated.

Microcontroller Selection. In view of existing local expertise, the Motorola line of microcontrollers was chosen for this project. Still, this does not narrow the choice down much. A fairly disciplined study of system requirements is necessary to specify which microcontroller, out of scores of variants, is required for the job. This is difficult for students, as they generally lack the experience and intuition needed, as well as the perseverance to wade through manufacturers' selection guides. Part of the problem is in choosing methods for interfacing the various peripherals (e.g., what kind of display driver should be used?). A study of relevant Motorola application notes [2, 3, 4] proved very helpful in understanding what basic approaches are available and what microcontroller/peripheral combinations should be considered. The MC68HC705B16 was finally chosen on the basis of its available A/D inputs and PWM outputs, as well as its 24 digital I/O lines.
In retrospect this is probably overkill, as only one A/D channel, one PWM channel, and 11 I/O pins are actually required (see Figure 3). The decision was made to err on the safe side because a complete development system specific to the chosen part was necessary, and the project budget did not permit a second such system to be purchased should the first prove inadequate.

Microcontroller Application Development. Breadboarding of the peripheral hardware, development of microcontroller software, and final debugging and testing of a custom printed-circuit board for the microcontroller and peripherals all require a development environment of some kind. The choice of a development environment, like that of the microcontroller itself, can be bewildering and requires some faculty expertise. Motorola makes three grades of development environment, ranging from simple evaluation boards (at around $100) to full-blown real-time in-circuit emulators (at more like $7500). The middle option was chosen for this project: the MMEVS, which consists of

· a platform board (which supports all 6805-family parts),
· an emulator module (specific to B-series parts), and
· a cable and target-head adapter (package-specific).

Overall, the system costs about $900 and provides, with some limitations, in-circuit emulation capability. It also comes with the simple but sufficient software development environment RAPID [5]. Students find learning to use this type of system challenging, but the experience they gain in real-world microcontroller application development greatly exceeds the typical first-course experience using simple evaluation boards.

Printed-Circuit Board. The layout of a simple (though definitely not trivial) printed-circuit board is another practical learning opportunity presented by this project. The final board layout, with package outlines, is shown (at 50% of actual size) in Figure 8. The relative simplicity of the circuit makes manual placement and routing practical (in fact, it likely gives better results than automatic routing in an application like this), and the student is therefore exposed to fundamental issues of printed-circuit layout and basic design rules. The layout software used was the very nice package pcb, and the board was fabricated in-house with the aid of our staff electronics technician.

5 Conclusion

The aim of this paper has been to describe an interdisciplinary undergraduate engineering design project: a microcontroller-based temperature control system with digital set-point entry and set-point/actual temperature display. A particular design of such a system has been described, and a number of design issues which arise, drawn from a variety of engineering disciplines, have been discussed. Resolution of these issues generally requires knowledge beyond that acquired in introductory courses, but realistically accessible to advanced undergraduate students, especially with the advice and supervision of faculty. Desirable features of the problem, from a pedagogical viewpoint, include the use of a microcontroller with simple peripherals, the opportunity to usefully apply introductory-level modeling of physical systems and design of closed-loop controls, and the need for relatively simple experimentation (for model validation) and simulation (for detailed performance prediction).
Also desirable are some of the technology-related aspects of the problem, including practical use of resistive heaters and temperature sensors (requiring knowledge of PWM and calibration techniques, respectively), microcontroller selection and use of development systems, and printed-circuit design.

Acknowledgements

The author would like to acknowledge the hard work, dedication, and ability shown by the students involved in this project: Mark Langsdorf, Matt Rall, Pam Rinehart, and David Schuchmann. It is their project, and credit for its success belongs to them.

References

[1] M. Langsdorf, M. Rall, D. Schuchmann, and P. Rinehart, "Temperature control of a microscope slide dryer," in 1997 National Conference on Undergraduate Research, (Austin, TX), April 1997. Poster presentation.
[2] Motorola, Inc., Phoenix, AZ, Temperature Measurement and Display Using the MC68HC05B4 and the MC14489, 1990. Motorola Semiconductor Application Note AN431.
[3] Motorola, Inc., Phoenix, AZ, HC05 MCU LED Drive Techniques Using the MC68HC705J1A, 1995. Motorola Semiconductor Application Note AN1238.
[4] Motorola, Inc., Phoenix, AZ, HC05 MCU Keypad Decoding Techniques Using the MC68HC705J1A, 1995. Motorola Semiconductor Application Note AN1239.
[5] Motorola, Inc., Phoenix, AZ, RAPID Integrated Development Environment User's Manual, 1993. (RAPID was developed by P & E Microcomputer Systems, Inc.)

附录B 英文文献翻译中文

单片机温度控制:一个跨学科的本科生工程设计项目

James S. McDonald
工程科学系
三一大学
德克萨斯州圣安东尼奥市 78212

摘要

本文所描述的是作者领导由四个三一大学高年级学生组成的团队进行的一个跨学科工程项目的设计。

单片机外文文献翻译(2024)

引言:单片机(Microcontroller)是一种广泛应用于嵌入式系统中的小型计算机芯片。

它集成了处理器核心、存储器、外设接口和时钟电路等核心部件,可以独立运行。

随着全球化的发展,外文文献对于学习和研究单片机领域来说至关重要。

本文翻译的外文文献《Microcontroller based Traffic Light Control System》详细介绍了基于单片机的交通信号灯控制系统。

概述:交通信号灯控制是现代都市交通系统中至关重要的一环。

传统的交通信号灯控制系统通常由定时器控制,不能根据实际交通情况动态调整信号灯的时间。

而基于单片机的交通信号灯控制系统可以实现根据实时交通流量来动态调整信号灯的时间,优化交通效率。

本文将详细介绍该系统的设计和实现。

正文:

一、单片机选型

1.1. CPU性能:本文选择了一款高性能的32位单片机作为控制核心,它具有较高的处理能力和较大的存储器容量,可以同时处理多个交通路口的信号控制。

1.2.外设接口:该单片机具有丰富的外设接口,可以与交通信号灯、传感器和通信设备等进行连接,实现信号控制和数据交互。

1.3.低功耗设计:为了节约能源和延长系统寿命,在单片机选型时考虑了低功耗设计,降低系统运行的能耗。

二、硬件设计

2.1. 交通信号灯:在设计交通信号灯时,考虑了日夜可见性和能耗。

采用了高亮度LED作为信号灯光源,同时添加了光敏传感器控制信号灯的亮度,以满足不同时间段的亮度需求。

2.2. 传感器:通过安装车辆感应器和行人感应器等传感器,可以在实时监测交通流量的基础上,智能调整信号灯时间,提高路口的交通效率(绿灯时长的计算思路见本节末尾的示例)。

2.3.通信设备:在交通信号灯控制系统中引入了通信设备,可以实现各交通路口之间的信息交互和协调控制,提高整体交通系统的效率。
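为了更直观地说明2.2节所述"根据实时车流量智能调整信号灯时间"的思路,下面给出一段示意性的C语言代码。其中的函数名与各参数(最短/最长绿灯时长、单车放行时间)均为假设值,并非原文系统的实际实现,实际取值需要现场标定。

/* 示意性代码:根据一个统计周期内感应器检测到的车辆数,动态计算绿灯时长(秒)。
 * MIN_GREEN、MAX_GREEN、SEC_PER_CAR 均为假设参数。 */
#define MIN_GREEN   15u   /* 最短绿灯时长,兼顾行人过街需求       */
#define MAX_GREEN   90u   /* 最长绿灯时长,避免其他方向等待过久   */
#define SEC_PER_CAR  2u   /* 放行每辆排队车辆大约需要的时间       */

unsigned int green_time(unsigned int car_count)
{
    unsigned int t = MIN_GREEN + SEC_PER_CAR * car_count;
    if (t > MAX_GREEN) {
        t = MAX_GREEN;    /* 饱和处理:不超过最长绿灯时长 */
    }
    return t;
}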

三、软件设计

3.1. 程序架构:采用了多任务的实时操作系统,将交通信号灯控制、传感器数据处理和通信设备控制等功能分别封装成不同的任务,实现了系统的高效运行和任务调度。
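下面用一段示意性代码说明3.1节的多任务划分方式。原文并未指明所用的实时操作系统,这里仅以常见的FreeRTOS风格API作演示,任务名、栈深度和优先级均为假设:

/* 示意性代码(FreeRTOS 风格):把信号灯控制、传感器数据处理、
 * 路口间通信封装为三个独立任务,由调度器统一调度。 */
#include "FreeRTOS.h"
#include "task.h"

static void vSignalTask(void *pvParameters)   /* 信号灯控制任务 */
{
    (void)pvParameters;
    for (;;) {
        /* 按当前配时方案切换各方向红绿灯…… */
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

static void vSensorTask(void *pvParameters)   /* 传感器数据处理任务 */
{
    (void)pvParameters;
    for (;;) {
        /* 读取车辆/行人感应器,更新车流量统计…… */
        vTaskDelay(pdMS_TO_TICKS(50));
    }
}

static void vCommTask(void *pvParameters)     /* 通信任务:与相邻路口交换信息 */
{
    (void)pvParameters;
    for (;;) {
        /* 收发路口间协调控制报文…… */
        vTaskDelay(pdMS_TO_TICKS(200));
    }
}

int main(void)
{
    xTaskCreate(vSignalTask, "signal", 128, NULL, 3, NULL);
    xTaskCreate(vSensorTask, "sensor", 128, NULL, 2, NULL);
    xTaskCreate(vCommTask,   "comm",   128, NULL, 1, NULL);
    vTaskStartScheduler();   /* 启动调度器,正常情况下不会返回 */
    for (;;) { }
}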

(完整)AT89C51单片机英文文献附带翻译

AT89C51的概况

一 AT89C51应用

单片机广泛应用于商业:诸如调制解调器,电动机控制系统,空调控制系统,汽车发动机和其他一些领域。

这些单片机的高速处理速度和增强型外围设备集合使得它们适合于这种高速事件应用场合。然而,这些关键应用领域也要求这些单片机高度可靠。

健壮的测试环境,以及用于在元部件层次和系统级别验证这些单片机的合适工具环境,保证了高可靠性和低市场风险。

Intel平台工程部门开发了一种面向对象的多线程测试环境,用于验证它的AT89C51汽车单片机。

这种环境的目标不仅是为AT89C51汽车单片机提供一种健壮的测试环境,而且是开发一种能够容易扩展、并可重复用于验证将来其他几种单片机的环境。

开发的这种环境连接了AT89C51。本文讨论了这种测试环境的设计和原理、它与各种硬件和软件环境部件的交互,以及如何使用AT89C51。

1.1 介绍

8位AT89C51 CHMOS工艺单片机被设计用于处理高速计算和快速输入/输出。

MCS51 单片机典型的应用是高速事件控制系统。

商业应用包括调制解调器,电动机控制系统,打印机,影印机,空调控制系统,磁盘驱动器和医疗设备。

汽车工业把MCS51单片机用于发动机控制系统、悬挂系统和防抱死制动系统。得益于它的处理速度和增强型片上外围功能集,AT89C51尤其适用于汽车动力控制、车辆动态悬挂、防抱死制动和稳定性控制等应用。

由于这些决定性应用,市场需要一种可靠的、具有低中断响应延迟的高性价比控制器,它要能够服务实时应用所需的大量时间和事件驱动的集成外设,并在单一封装中提供高于平均水平的处理能力。

设备运行不可预测所带来的经济和法律风险是很高的。

一旦进入市场,错误在财力上是无法承受的,尤其是在自动驾驶仪或防抱死制动系统这样的任务决定性应用中。

重新设计的费用可以高达50万美元;如果产品族共用存在同样缺陷的内核或外围设计,费用还会更高。

另外,部件的现场更换也极其昂贵,因为器件通常被焊接在模块中,而模块整体的价值比单个部件高出数倍。为了缓和这些问题,必须在最坏的环境和电压条件下,对这些单片机进行部件级和系统级的综合测试。

单片机设计外文文献翻译(含中英文)

附录A 外文翻译——AT89S52/AT89S51技术手册

AT89S52译文

主要性能:
· 与MCS-51单片机产品兼容
· 8K字节在系统可编程Flash存储器
· 1000次擦写周期
· 全静态操作:0Hz~33MHz
· 三级加密程序存储器
· 32个可编程I/O口线
· 三个16位定时器/计数器
· 八个中断源
· 全双工UART串行通道
· 低功耗空闲和掉电模式
· 掉电后中断可唤醒
· 看门狗定时器
· 双数据指针
· 掉电标识符

功能特性描述

AT89S52是一种低功耗、高性能CMOS 8位微控制器,具有8K字节在系统可编程Flash存储器。

使用Atmel公司高密度非易失性存储器技术制造,与工业80C51产品指令和引脚完全兼容。

片上Flash 允许程序存储器在系统可编程,亦适于常规编程器。

在单芯片上,拥有灵巧的8位CPU和在系统可编程Flash,使得AT89S52为众多嵌入式控制应用系统提供高灵活、超有效的解决方案。

AT89S52具有以下标准功能:8k字节Flash,256字节RAM,32位I/O口线,看门狗定时器,2个数据指针,三个16位定时器/计数器,一个6向量2级中断结构,全双工串行口,片内晶振及时钟电路。
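为了说明上述定时器/计数器与中断结构的典型用法,下面给出一段示意性的Keil C51代码(寄存器名取自标准8051/AT89S52的SFR定义),配置定时器0产生周期性中断;晶振频率与装载值仅为假设示例:

/* 示意性代码(Keil C51):定时器0工作于方式1(16位定时),开启溢出中断。
 * 假设 12MHz 晶振,装载值 0xFC18 = 65536-1000,对应 1ms 定时周期。 */
#include <reg52.h>

void timer0_init(void)
{
    TMOD &= 0xF0;    /* 清除定时器0的方式位       */
    TMOD |= 0x01;    /* 定时器0:方式1             */
    TH0 = 0xFC;      /* 装载初值高字节             */
    TL0 = 0x18;      /* 装载初值低字节             */
    ET0 = 1;         /* 允许定时器0中断            */
    EA  = 1;         /* 开放总中断                 */
    TR0 = 1;         /* 启动定时器0                */
}

void timer0_isr(void) interrupt 1   /* 中断号1:定时器0溢出 */
{
    TH0 = 0xFC;      /* 重装初值,维持 1ms 周期 */
    TL0 = 0x18;
    /* 周期性任务写在这里 */
}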

另外,AT89S52可降至0Hz静态逻辑操作,支持2种软件可选择节电模式。

空闲模式下,CPU停止工作,允许RAM、定时器/计数器、串口、中断继续工作。

掉电保护方式下,RAM内容被保存,振荡器被冻结,单片机一切工作停止,直到下一个中断或硬件复位为止。
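上述两种节电模式在软件上通过电源控制寄存器PCON选择:IDL(PCON.0)进入空闲模式,PD(PCON.1)进入掉电模式,这是标准8051/AT89S52的定义。下面是一段示意性的Keil C51代码:

/* 示意性代码(Keil C51):通过 PCON 进入空闲/掉电两种节电模式。 */
#include <reg52.h>

void enter_idle(void)
{
    PCON |= 0x01;    /* 置位 IDL:CPU 停止,RAM、定时器、串口、中断继续工作,
                        任一使能的中断或硬件复位可将其唤醒 */
}

void enter_power_down(void)
{
    PCON |= 0x02;    /* 置位 PD:振荡器冻结,RAM 内容保持,
                        直到中断或硬件复位才恢复工作 */
}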

引脚结构
方框图

VCC:电源
GND:地
P0口:P0口是一个8位漏极开路的双向I/O口。

作为输出口,每位能驱动8个TTL逻辑电平。

对P0端口写“1”时,引脚用作高阻抗输入。

当访问外部程序和数据存储器时,P0口也被作为低8位地址/数据复用。

在这种模式下,P0具有内部上拉电阻。

在flash编程时,P0口也用来接收指令字节;在程序校验时,输出指令字节。

程序校验时,需要外部上拉电阻。

P1口:P1口是一个具有内部上拉电阻的8位双向I/O口,P1输出缓冲器能驱动4个TTL逻辑电平。
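下面用一段示意性的Keil C51代码说明P0、P1口的基本读写方法,与上文描述一致:P0作输入前需先写"1",作输入或总线使用时需外接上拉电阻。

/* 示意性代码(Keil C51):P0 作输入、P1 作输出的基本用法。 */
#include <reg52.h>

void port_demo(void)
{
    unsigned char value;

    P0 = 0xFF;      /* 先写全"1",使漏极开路的 P0 引脚呈高阻输入态 */
    value = P0;     /* 读取 P0 引脚上的外部电平                     */

    P1 = value;     /* 经带内部上拉的 P1 输出,每位可驱动 4 个 TTL 负载 */
}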

单片机 英文参考文献 翻译

天津科技大学毕业生外文资料翻译
姓名:
学院:电子信息与自动化学院
专业:测控技术与仪器

一.英文原文

Progress in Computers
Prestige Lecture delivered to IEE, Cambridge, on 5 February 2009
Maurice Wilkes

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesse. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful.
The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry "going the other way". As time goes on people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer.

Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one.
They refer to such a roomful by the attractive name of "computer farm".

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.

In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication, which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant. Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost.
Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law, or Sod's law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949.
However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem: a cyclist sets out on a circular cycling tour; derive an equation giving the direction of the wind at any time.

The Single-Chip Computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor, except for the caches, could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91, the giant computer at the top of the System 360 range, are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However, about 1980 significant and far reaching changes took place in the laws. The concept of pre-competitive research was introduced.
Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semiconductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years SIA produces a new version of a document known as the International Technology Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title "Roadmap" was issued in 1994, but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15-year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall.

In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.

It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004, and in the event the industry got there a little sooner.

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London, entitled "The CMOS end-point and related topics in Computing", delivered on 8 February 1996.

The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster.
In fact the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that "the continuation of progress at the present rate will be at risk as we approach 2005, when the roadmap projects that progress will stall without research breakthroughs in most technical areas". This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming above, it radiates immense confidence. Prevailing opinion reflects that confidence and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.

单片机的外文文献及中文翻译

SCM is an integrated circuit chip. It uses large scale integrated circuit technology to integrate a CPU with data processing capability, random access memory RAM, read-only memory ROM, a variety of I/O ports and an interrupt system, and timer/counter functions (possibly also including display driver circuitry, pulse width modulation circuitry, analog multiplexers and A/D converter circuits) onto a single silicon chip, constituting a small but complete computer system.

SCM is also known as a micro-controller (Microcontroller), because it was first used in industrial control. The single chip grew out of chips that contained only a CPU, that is, out of dedicated processors. The first design idea was to put a large number of peripherals and the CPU on one chip, making the computer system smaller and easier to integrate into complex, volume-critical control devices. INTEL's Z80 was the first processor designed according to this idea; from then on, the development of microcontrollers and of dedicated processors parted ways.

Early microcontrollers were 8-bit or 4-bit. One of the most successful was the INTEL 8031, which won much praise for being simple, reliable and well-performing. The MCS51 MCU system was then developed out of the 8031, and SCM systems based on it are still widely used today. As the requirements of the industrial control field increased, 16-bit microcontrollers appeared, but they have never been used very widely because their cost-performance ratio was not ideal. After the 1990s, with the great development of consumer electronics, microcontroller technology advanced enormously. With INTEL's i960 series and especially the later, widely used ARM series, 32-bit microcontrollers quickly took over the high-end position of 16-bit MCUs and entered the mainstream market. The performance of traditional 8-bit microcontrollers has also increased rapidly, by several hundred times compared with the 1980s. Currently, high-end 32-bit microcontrollers are clocked at over 300MHz, catching up with mid-90s dedicated processors, while average models have fallen to one U.S. dollar and the most high-end models cost only about 10 dollars. Modern SCM systems are no longer developed and used only in a bare-metal environment; a large number of proprietary embedded operating systems are widely used across the full range of SCMs, and the high-end microcontrollers that serve as the core processors of handheld computers and cell phones can even run dedicated Windows and Linux operating systems.

SCM is more suitable than a dedicated processor for use in embedded systems, so it has found the widest application. In fact, SCMs are the most numerous computers in the world: almost every electronic and mechanical product used in modern human life has a single chip integrated in it. Cell phones, telephones, calculators, home appliances, electronic toys, handheld computers and computer accessories such as mice each contain one or two SCMs, and a personal computer also has a large number of SCMs at work inside. A typical car is equipped with more than 40 microcontrollers, and in complex industrial control systems hundreds of single chips may even be working at the same time! The number of SCMs not only far exceeds that of PCs and all other computers combined, it even exceeds the number of human beings.

Single chip, also known as single-chip microcontroller, is not a chip that completes a certain single logic function, but a complete computer system integrated onto one chip. It is equivalent to a micro-computer; compared with a computer it merely lacks I/O devices. Generally speaking: a chip becomes a computer.
Its small size, light weight and low price provide convenience for study, application and development. At the same time, learning to use the MCU is the best option for understanding the principle and structure of computers.

The microcontroller internally has modules functionally similar to those of a computer, such as a CPU, memory and a parallel bus, as well as storage devices that play the same role as a hard disk. The difference is that all of these components are much weaker in performance than those of our home computers, but the price is low, usually not more than 10 yuan, which is quite enough for controlling a class of electrical tasks that are not very complicated. We can see its shadow in the automatic drum washing machines, range hoods, VCD players and other appliances we use: it mainly serves as the core component of the control part.

It is an on-line, real-time control computer. "On-line" means on-site control, which requires strong anti-interference ability and low cost; this is the main difference from off-line computers (such as home PCs).

The MCU works by running a program, and the program can be modified. Different programs achieve different functions, including special, unique functions that other devices would need great effort to accomplish, and some of which could hardly be achieved at all. A not very complex function, if implemented purely in hardware with the American 74 series developed in the 1950s or the CD4000 series of the 1960s, would require a large PCB board! But if it is done with a successful American 1970s series of SCMs, the result changes drastically, because the program you write for the microcomputer can achieve high intelligence, high efficiency and high reliability!

Because the microcontroller is cost-sensitive, the dominant software is still the lowest-level assembly language, the lowest-level language above binary machine code. Why use such a low-level language, when many high-level languages have reached the level of visual programming? The reason is simply that the MCU has no CPU like a home computer's, nor a hard-disk-like mass storage device. Even if a visualized small program written in a high-level language contains only a single button, it will reach tens of KB in size! That is nothing for a home PC's hard drive, but it is not acceptable for an MCU. SCM must make very efficient use of its hardware resources, so although assembly is a primitive language, it is still in heavy use. For the same reason, if a mainframe's operating system and applications were moved onto a home PC, the home PC could not bear the load either.

It can be said that the twentieth century spanned three "electric" eras: the age of electricity, the electronic age, and then the computer age. However, this "computer" usually refers to the personal computer, PC for short, which consists of a host, keyboard, monitor and other components. There is another type of computer that most people do not know about: the single chip (also known as micro-controller) that gives intelligence to all kinds of machinery. As the name suggests, this computer system takes only a minimal integrated circuit and can perform simple operation and control. Because it is small, it is usually hidden in the "stomach" of the machinery it controls. In the device it plays a role like the human brain: if it goes wrong, the whole device is paralyzed.
Now, this microcontroller is used in a very wide range of fields, such as smart meters, real-time industrial control, communications equipment, navigation systems and household appliances. Once all kinds of products adopt an SCM, the effect is to upgrade the product, which is often reflected by the adjective put in front of the product name: "intelligent", as in intelligent washing machines. Nowadays, when some technical personnel in factories or other amateur electronics developers work out certain products, either the circuit is too complicated, or the function is too simple and can easily be copied. The reason may be that the product is stuck at not using a microcontroller or other programmable logic devices.

外文文献的翻译:

单片机是一种集成电路芯片,是采用超大规模集成电路技术把具有数据处理能力的中央处理器CPU、随机存储器RAM、只读存储器ROM、多种I/O口和中断系统、定时器/计数器等功能(可能还包括显示驱动电路、脉宽调制电路、模拟多路转换器、A/D转换器等电路)集成到一块硅片上构成的一个小而完善的计算机系统。

(完整版)单片机毕业参考英文文献及翻译

Structure and function of the MCS-51 series

MCS-51 is the name of a series of one-chip computers produced by Intel Company.

This company introduced the 8-bit high-grade one-chip computers of the MCS-51 series in 1980, after introducing the 8-bit one-chip computers of the MCS-48 series in 1976. This line of one-chip computers includes many kinds of chips, such as 8051, 8031, 8751, 80C51BH, 80C31BH, etc.

Their basic composition, basic performance and instruction system are all the same. The 8051 is usually taken as the representative of the 51 series of one-chip computers.

A one-chip computer system is made up of the following parts: (1) an 8-bit microprocessor (CPU);

(2) on-chip data memory RAM (128B/256B), used to store data that can be read and written, such as intermediate results of operations, final results and data to be displayed; (3) program memory ROM/EPROM (4KB/8KB), used to store programs, some initial data and tables on the chip. Some one-chip computers, such as the 8031, 8032, 80C, etc., do not contain ROM/EPROM within them.
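To illustrate how a program addresses these separate on-chip memory spaces, here is a minimal sketch assuming the Keil C51 compiler's memory-type keywords (data/idata for on-chip RAM, code for program memory); other 8051 compilers use different syntax.

/* Sketch (Keil C51 dialect assumed): placing variables in the separate
 * 8051 memory spaces described above. */
#include <reg51.h>

unsigned char data  counter;                  /* directly addressable on-chip RAM          */
unsigned char idata buffer[32];               /* indirectly addressable on-chip RAM (256B) */
unsigned char code  table[4] = {1, 2, 4, 8};  /* constants kept in ROM/EPROM program memory */

void demo(void)
{
    counter   = table[2];      /* fetch a constant from program memory */
    buffer[0] = counter + 1;   /* work in internal data RAM            */
}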

单片机英文文献外文翻译

单片机英文文献

Principle of MCU

A single-chip microcomputer integrates a complete computer system onto a single chip. Even though most of its features are packed into one small chip, it has the majority of the components a complete computer needs: CPU, memory, and internal and external bus systems, and most devices also integrate peripheral equipment such as communication interfaces, timers and real-time clocks. The most powerful single-chip systems today can even integrate complex voice, image, networking and input/output subsystems on a single chip.

The single chip is also known as an MCU (Microcontroller), because it was first used in the field of industrial control. The single chip developed from chips that contained only a CPU, that is, from dedicated processors. The design concept was to put a large number of peripherals and the CPU on a single chip, so that the computer system would be smaller and more easily integrated into complex, volume-critical control devices. INTEL's Z80 was one of the first processors designed according to this idea; from then on, the development of MCUs and of dedicated processors parted ways.

Early single chips were all 8-bit or 4-bit. One of the most successful was INTEL's 8031, which received a great deal of praise for its simple and reliable performance. The MCS51 single-chip system was then developed from the 8031, and systems based on it are still widely used now. As the requirements of the industrial control field increased, 16-bit single chips appeared, but they were never widely used because their price-performance ratio was not ideal. After the 1990s, with the big development of consumer electronics products, single-chip technology improved enormously. With INTEL's i960 series and especially the subsequent, broadly applied ARM series, 32-bit single chips quickly replaced 16-bit single chips in the high-end market and entered the mainstream. The processing power of traditional 8-bit single chips has also increased rapidly, by several hundred times compared with the 1980s. At present, high-end 32-bit single chips run at over 300MHz, close on the heels of mid-90s special-purpose processors, while the price of ordinary models has dropped to one U.S. dollar and the most high-end models cost only 10 U.S. dollars. Contemporary single-chip systems are no longer developed and used only in a bare-metal environment; a large number of dedicated embedded operating systems are widely used across the full range of single chips, and the high-end single chips at the processing core of PDAs and cell phones can even run dedicated Windows and Linux operating systems.

The single chip is more suitable for embedded systems than a dedicated processor, so it has found the widest application. In fact, single chips are the most numerous computers in the world: almost every piece of electronic and mechanical equipment used in modern human life has a single chip integrated in it. Cell phones, telephones, calculators, home appliances, electronic toys, handheld computers, and computer accessories such as mice are each equipped with one or two single chips, and personal computers also have a large number of single chips at work inside. An ordinary vehicle is equipped with more than 40 single chips, and in complex industrial control systems hundreds of single chips may even be working at the same time!
The number of single chips not only far exceeds that of PCs and all other computers combined, it even exceeds the number of human beings.

Hardware introduction

The 8051 family of microcontrollers is based on an architecture which is highly optimized for embedded control systems. It is used in a wide variety of applications from military equipment to automobiles to the keyboard on your PC. Second only to the Motorola 68HC11 in eight-bit processor sales, the 8051 family of microcontrollers is available in a wide array of variations from manufacturers such as Intel, Philips, and Siemens. These manufacturers have added numerous features and peripherals to the 8051 such as I2C interfaces, analog to digital converters, watchdog timers, and pulse width modulated outputs. Variations of the 8051 with clock speeds up to 40MHz and voltage requirements down to 1.5 volts are available. This wide range of parts based on one core makes the 8051 family an excellent choice as the base architecture for a company's entire line of products, since it can perform many functions and developers will only have to learn this one platform.

The basic architecture consists of the following features:
· an eight bit ALU
· 32 discrete I/O pins (4 groups of 8) which can be individually accessed
· two 16 bit timer/counters
· full duplex UART
· 6 interrupt sources with 2 priority levels
· 128 bytes of on board RAM
· separate 64K byte address spaces for DATA and CODE memory

One 8051 processor cycle consists of twelve oscillator periods. Each of the twelve oscillator periods is used for a special function by the 8051 core such as op code fetches and samples of the interrupt daisy chain for pending interrupts. The time required for any 8051 instruction can be computed by dividing the clock frequency by 12, inverting that result and multiplying it by the number of processor cycles required by the instruction in question. Therefore, if you have a system which is using an 11.059MHz clock, you can compute the number of instructions per second by dividing this value by 12. This gives an instruction frequency of 921583 instructions per second. Inverting this will provide the amount of time taken by each instruction cycle (1.085 microseconds).

单片机原理

单片机是指一个集成在一块芯片上的完整计算机系统。
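As a worked check of the instruction-timing arithmetic in the hardware introduction above, the following small, self-contained C program (host-side, for illustration only) reproduces the 11.059MHz figures:

/* One 8051 machine cycle = 12 oscillator periods, so
 * machine cycles per second = f_osc / 12. */
#include <stdio.h>

int main(void)
{
    double f_osc          = 11059000.0;           /* 11.059 MHz crystal        */
    double cycles_per_sec = f_osc / 12.0;         /* ~921583 cycles per second */
    double cycle_time_us  = 1e6 / cycles_per_sec; /* ~1.085 microseconds       */

    printf("machine cycles per second: %.0f\n", cycles_per_sec);
    printf("one machine cycle:         %.3f us\n", cycle_time_us);
    printf("a 2-cycle instruction:     %.3f us\n", 2.0 * cycle_time_us);
    return 0;
}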

单片机英文文献及翻译

Validation and Testing of Design Hardening for Single Event Effects Using the 8051 Microcontroller

Abstract

With the dearth of dedicated radiation hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we will discuss the implications of validating these methods for the single event effects (SEE) in the space environment. Topics include the types of tests that are required and the design coverage (i.e., design libraries: do they need validating for each application?). Finally, an 8051 microcontroller core from NASA Institute of Advanced Microelectronics (IAμE) CMOS Ultra Low Power Radiation Tolerant (CULPRiT) design is evaluated for SEE mitigative techniques against two commercial 8051 devices.

Index Terms

Single Event Effects, Hardened-By-Design, microcontroller, radiation effects.

I. INTRODUCTION

NASA constantly strives to provide the best capture of science while operating in a space radiation environment using a minimum of resources [1,2]. With a relatively limited selection of radiation-hardened microelectronic devices that are often two or more generations of performance behind commercial state-of-the-art technologies, NASA's performance of this task is quite challenging. One method of alleviating this is by the use of commercial foundry alternatives with no or minimally invasive design techniques for hardening. This is often called hardened-by-design (HBD).

Building custom-type HBD devices using design libraries and automated design tools may provide NASA the solution it needs to meet stringent science performance specifications in a timely, cost-effective, and reliable manner.

However, one question still exists: traditional radiation-hardened devices have lot and/or wafer radiation qualification tests performed; what types of tests are required for HBD validation?

II. TESTING HBD DEVICES CONSIDERATIONS

Test methodologies in the United States exist to qualify individual devices through standards and organizations such as ASTM, JEDEC, and MIL-STD-883. Typically, TID (Co-60) and SEE (heavy ion and/or proton) are required for device validation. So what is unique to HBD devices?

As opposed to a "regular" commercial-off-the-shelf (COTS) device or application specific integrated circuit (ASIC) where no hardening has been performed, one needs to determine how validated the design library is, as opposed to determining the device hardness. That is, by using test chips, can we "qualify" a future device using the same library?

Consider if Vendor A has designed a new HBD library portable to foundries B and C. A test chip is designed, tested, and deemed acceptable. Nine months later a NASA flight project enters the mix by designing a new device using Vendor A's library. Does this device require complete radiation qualification testing? To answer this, other questions must be asked.

How complete was the test chip? Was there sufficient statistical coverage of all library elements to validate each cell? If the new NASA design uses a partially or insufficiently characterized portion of the design library, full testing might be required. Of course, if part of the HBD was relying on inherent radiation hardness of a process, some of the tests (like SEL in the earlier example) may be waived.

Other considerations include speed of operation and operating voltage. For example, if the test chip was tested statically for SEE at a power supply voltage of 3.3V, is the data applicable to a 100 MHz operating frequency at 2.5V?
Dynamic considerations (i.e., non-static operation) include the propagated effects of Single Event Transients (SETs). These can be a greater concern at higher frequencies.

The point of the considerations is that the design library must be known, the coverage used during testing must be known, the test application must be thoroughly understood, and the characteristics of the foundry must be known. If all these are applicable or have been validated by the test chip, then no testing may be necessary. A task within NASA's Electronic Parts and Packaging (NEPP) Program was performed to explore these types of considerations.

III. HBD TECHNOLOGY EVALUATION USING THE 8051 MICROCONTROLLER

With their increasing capabilities and lower power consumption, microcontrollers are increasingly being used in NASA and DOD system designs. There are existing NASA and DoD programs that are doing technology development to provide HBD. Microcontrollers are one such vehicle that is being investigated to quantify the radiation hardness improvement. Examples of these programs are the 8051 microcontroller being developed by Mission Research Corporation (MRC) and the IAμE (the focus of this study). As these HBD technologies become available, validation of the technology, in the natural space radiation environment, for NASA's use in spaceflight systems is required.

The 8051 microcontroller is an industry standard architecture that has broad acceptance, wide-ranging applications and development tools available. There are numerous commercial vendors that supply this controller or have it integrated into some type of system-on-a-chip structure. Both MRC and IAμE chose this device to demonstrate two distinctly different technologies for hardening. The MRC approach is to use temporal latches that require specific timing to ensure that single event effects are minimized. The IAμE technology uses ultra low power, and layout and architecture HBD design rules, to achieve its results. These are fundamentally different from the approach of Aeroflex-United Technologies Microelectronics Center (UTMC), the commercial vendor of a radiation-hardened 8051, which built its 8051 microcontroller using radiation-hardened processes. This broad range of technology within one device structure makes the 8051 an ideal vehicle for performing this technology evaluation.

The objective of this work is the technology evaluation of the CULPRiT process [3] from IAμE. The process has been baselined against two other processes: the standard 8051 commercial device from Intel, and a version using state-of-the-art processing from Dallas Semiconductor. By performing this side-by-side comparison, the cost-benefit, performance, and reliability trade study can be done.

In the performance of the technology evaluation, this task developed hardware and software for testing microcontrollers. A thorough process was done to optimize the test process to obtain as complete an evaluation as possible. This included taking advantage of the available hardware and writing software that exercised the microcontroller such that all substructures of the processor were evaluated. This process is also leading to a more complete understanding of how to test complex structures, such as microcontrollers, and how to more efficiently test these structures in the future.

IV. TEST DEVICES

Three devices were used in this test evaluation. The first is the NASA CULPRiT device, which is the primary device to be evaluated.
The other two devices are two versions of a commercial 8051, manufactured by Intel and Dallas Semiconductor, respectively.
The Intel devices are the ROMless, CMOS version of the classic 8052 MCS-51 microcontroller. They are rated for operation at +5 V over a temperature range of 0 to 70 °C and at clock speeds of 3.5 MHz to 24 MHz. They are manufactured in Intel's P629.0 CHMOS III-E process.
The Dallas Semiconductor devices are similar in that they are ROMless 8052 microcontrollers, but they are enhanced in various ways. They are rated for operation from 4.25 to 5.5 V over 0 to 70 °C at clock speeds up to 25 MHz. They have a second full serial port built in, seven additional interrupts, a watchdog timer, a power-fail reset, dual data pointers, and variable-speed peripheral access. In addition, the core is redesigned so that the machine cycle is shortened for most instructions, resulting in an effective processing ability roughly 2.5 times greater (faster) than the standard 8052 device. None of these features, other than those inherent in the device operation, were utilized, in order to maximize the similarity between the Dallas and Intel test codes.
The CULPRiT technology device is a version of the MCS-51 family compatible C8051 HDL core licensed from the Ultra Low Power (ULP) process foundry. The CULPRiT C8051 device is designed to operate at a supply voltage of 500 mV and includes an on-chip input/output signal level-shifting interface to conventional higher-voltage parts. The CULPRiT C8051 therefore requires two separate supply voltages: the 500 mV core supply and the desired interface voltage. The CULPRiT C8051 is ROMless and is intended to be instruction-set compatible with the MCS-51 family.

V. TEST HARDWARE
The 8051 Device Under Test (DUT) was tested as a component of a functional computer. Aside from the DUT itself, the other components of the DUT computer were removed from the immediate area of the irradiation beam.
A small card (one per DUT package type) with a unique hard-wired identifier byte contained the DUT, its crystal, and bypass capacitors (and voltage level shifters for the CULPRiT DUTs). This "DUT Board" was connected to the "Main Board" by a short 60-conductor ribbon cable. The Main Board had all other components required to complete the DUT Computer, including some which nominally are not necessary in some designs (such as external RAM, external ROM, and an address latch). The DUT Computer and the Test Control Computer were connected via a serial cable, and communications were established between the two by the Controller (which runs custom-designed serial interface software). This Controller software allowed for commanding of the DUT, downloading DUT code to the DUT, and real-time error collection from the DUT during and after irradiation. A 1 Hz signal source provided an external watchdog timing signal to the DUT, whose watchdog output was monitored on an oscilloscope. The power supply was monitored to provide an indication of latchup.

VI. TEST SOFTWARE
The 8051 test software concept is straightforward. It was designed as a modular series of small test programs, each exercising a specific part of the DUT. Since each test was stand-alone, the tests were loaded independently of one another for execution on the DUT. This ensured that only the desired portion of the 8051 DUT was exercised during a given test and helped pinpoint the location of errors that occurred during testing. All test programs resided on the controller PC until loaded via the serial interface to the DUT computer.
In this way, individual tests could be modified at any time without the necessity of burning PROMs. Additional tests could also be developed and added without impacting the overall test design. The only permanent code resident on the DUT was the boot code and the serial code-loader routines that established communications between the controller PC and the DUT.
All test programs implemented:
• An external Universal Asynchronous Receiver and Transmitter (UART) for transmission of error information and communication to the controller computer.
• An external real-time clock for tagging data errors.
• A watchdog routine designed to provide visual verification of 8051 health and to restart the test code if necessary.
• A "foul-up" routine to reset the program counter if it wanders out of code space.
• An external telemetry data storage memory to provide backup of data in the event of an interruption in data transmission.
A brief description of each of the software tests used is given below. It should be noted that for each test, the returned telemetry (including the time tag) was sent to both the test controller and the telemetry memory, giving the highest confidence that all data were captured.
Interrupt – This test used 4 of the 6 available interrupt vectors (Serial, External, Timer0 Overflow, and Timer1 Overflow) to trigger routines that sequentially modified a value in the accumulator, which was periodically compared to a known value. Unexpected values were transmitted with register information.
Logic – This test performed a series of logic and math computations and provided three types of error identification: 1) addition/subtraction, 2) logic, and 3) multiplication/division. All miscompares between computations and expected results were transmitted with other relevant register information.
Memory – This test loaded internal data memory at locations D:0x20 through D:0xFF (or D:0x20 through D:0x80 for the CULPRiT DUT), indirectly, with an 0x55 pattern. Compares were performed continuously; miscompares were corrected while error information and register values were transmitted.
Program Counter – The program counter was used to continuously fetch constants at various offsets in the code. Constants were compared with known values, and miscompares were transmitted along with relevant register information.
Registers – This test loaded each of the four banks (0, 1, 2, 3) of general-purpose registers with either 0xAA (for banks 0 and 2) or 0x55 (for banks 1 and 3). The pattern was alternated in order to test the Program Status Word (PSW) special function register, which controls general-purpose register bank selection. The general-purpose register banks were then compared with their expected values. All miscompares were corrected, and error information was transmitted.
Special Function Registers (SFR) – This test used learned static values of 12 of the 21 available SFRs and then constantly compared the learned value with the current one. Miscompares were reloaded with the learned value, and error information was transmitted.
Stack – This test performed arithmetic by pushing and popping operands on the stack. Unexpected results were attributed to errors on the stack or in the stack pointer itself and were transmitted with relevant register information.
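As a concrete illustration of the Memory test described above, the following is a minimal sketch, in Keil-style 8051 C, of what its core fill-and-scrub loop might look like. It is not the authors' actual test code: the report_error() helper is hypothetical, and the sketch assumes the stack and program variables live below address 0x20 so the pattern region can be overwritten safely.

    #include <reg51.h>              /* standard 8051 SFR declarations (Keil C51) */

    #define PATTERN 0x55
    #define MEM_LO  0x20            /* D:0x20 .. D:0xFF, as in the test above */

    /* Hypothetical helper: sends an error record (address, bad value)
       to the test controller and the telemetry memory over the UART. */
    extern void report_error(unsigned char addr, unsigned char bad);

    void memory_test(void)
    {
        unsigned char idata *p;     /* indirect pointer into on-chip RAM */
        unsigned char a;

        /* Fill D:0x20 through D:0xFF, indirectly, with the 0x55 pattern.
           The loop ends when 'a' wraps from 0xFF back to 0x00. */
        for (a = MEM_LO; a != 0; a++) {
            p = (unsigned char idata *)a;
            *p = PATTERN;
        }

        /* Scrub loop: compare continuously, correct miscompares,
           and transmit error information, as the test description says. */
        for (;;) {
            for (a = MEM_LO; a != 0; a++) {
                p = (unsigned char idata *)a;
                if (*p != PATTERN) {
                    report_error(a, *p);   /* report the upset location */
                    *p = PATTERN;          /* correct it */
                }
            }
        }
    }

For the CULPRiT DUT the upper bound would shrink to 0x80, per the description above.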
VII. TEST METHODOLOGY
The DUT Computer booted by executing the instruction code located at address 0x0000. Initially, the device at this location was an EPROM previously loaded with "Boot/Serial Loader" code. This code initialized the DUT Computer and its interface, through a serial connection, to the controlling computer, the "Test Controller". The DUT Computer downloaded Test Code and put it into Program Code RAM (located on the Main Board of the DUT Computer). It then activated a circuit which simultaneously performed two functions: holding the DUT reset line active for some time (~10 ms), and remapping the Test Code residing in the Program Code RAM to address 0x0000 (the EPROM was no longer accessible in the DUT Computer's memory space). Upon waking from the reset, the DUT Computer again booted by executing the instruction code at address 0x0000, except this time that code was not the Boot/Serial Loader code but the Test Code.
The Test Control Computer always retained the ability to force the reset/remap function, regardless of the DUT Computer's functionality. Thus, if the test ran without a Single Event Functional Interrupt (SEFI), either the DUT Computer itself or the Test Controller could terminate the test and allow the post-test functions to be executed. If a SEFI occurred, the Test Controller forced a reboot into Boot/Serial Loader code and then executed the post-test functions.
During any test of the DUT, the DUT exercised a portion of its functionality (e.g., register operations, internal RAM checks, or timer operations) at the highest utilization possible, while making a minimal periodic report to the Test Control Computer to convey that the DUT Computer was still functional. If this report ceased, the Test Controller knew that a SEFI had occurred. This periodic data was called "telemetry". If the DUT encountered an error that did not interrupt functionality (e.g., a data register miscompare), it sent a more lengthy report through the serial port describing that error and continued with the test.
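The "telemetry" heartbeat that lets the Test Controller detect a SEFI can be pictured with a short sketch like the one below, again in 8051 C. The uart_send() and test_step() helpers are hypothetical stand-ins for the resident serial routines and for whichever test is loaded, and the watchdog pin assignment is likewise an assumption.

    #include <reg51.h>

    sbit WDOG_OUT = P1^0;          /* assumed pin for the scoped watchdog line */

    extern void uart_send(unsigned char b);     /* hypothetical serial helper  */
    extern void test_step(void);                /* one pass of the loaded test */

    #define HEARTBEAT_ID 0xA5      /* arbitrary marker byte for "still alive" */

    void main(void)
    {
        unsigned int n = 0;

        for (;;) {
            test_step();           /* exercise the DUT at highest utilization */

            /* Minimal periodic report: if these bytes stop arriving at the
               Test Controller, a SEFI is declared and a reboot is forced. */
            if (++n == 0) {        /* i.e., every 65536 iterations */
                uart_send(HEARTBEAT_ID);
                WDOG_OUT = !WDOG_OUT;   /* visible health toggle on the scope */
            }
        }
    }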
VIII. DISCUSSION
A. Single Event Latchup
The main argument for why latchup is not an issue for the CULPRiT devices is that the operating voltage of 0.5 V should be below the holding voltage required for latchup to occur. In addition, the cell library used also incorporates the heavy dual guard-barring scheme [4]. This scheme has been demonstrated multiple times to be very effective in rendering CMOS circuits completely immune to SEL up to test limits of 120 MeV-cm2/mg. This is true in circuits operating at 5, 3.3, and 2.5 V, as well as the 0.5 V CULPRiT circuits. In one case, even a 5 V circuit fabricated on noncircuits wafers exhibited such SEL immunity.
B. Single Event Upset
The primary structure of the storage unit used in the CULPRiT devices is the Single Event Resistant Topology (SERT) [5]. Given the SERT cell topology and a single-upset-node assumption, the SERT cell is expected to be completely immune to SEUs occurring internal to the memory cell itself. Obviously, other mechanisms are at work. The CULPRiT 8051 results reported here are quite similar to results obtained with a CULPRiT CCSDS lossless-compression chip (USES) [6]. The CULPRiT USES was synthesized using exactly the same tools and library as the CULPRiT 8051.
With the CULPRiT USES, the SEU cross-section data [7] were taken as a function of frequency at two LET values, 37.6 and 58.5 MeV-cm2/mg. In both cases the data fit well to a linear model in which the cross section is proportional to the clock frequency. In the LET 37.6 case, the zero-frequency intercept occurred essentially at the zero cross-section point, indicating that virtually all of these SEUs are captured SETs from the combinational logic. The LET 58.5 data indicated that the SET (frequency-dependent) component sits on top of a "dc-bias" component; presumably a second upset mechanism occurs internal to the SERT cells only at a second, higher LET threshold.
The SET mitigation scheme used in the CULPRiT devices is based on the SERT cell's fault-tolerant input property when redundant input data is provided to separate storage nodes. The idea is that the redundant input data is provided through a total duplication of the combinational logic (referred to as "dual-rail design") such that a simple SET on one rail cannot produce an upset. Therefore, some other upset mechanism must be at work. It is possible that a single particle strike places an SET on both halves of the logic streams, allowing an SET to produce an upset. Care was taken to separate the dual sensitive nodes in the SERT cell layouts, but the automated place-and-route of the combinatorial logic paths may have placed dual sensitive nodes close enough together.
At this point, the theory for the CULPRiT SEU response is that at an LET of about 20, the energy deposition is sufficiently wide (and in the right locations) to produce an SET in both halves of the combinatorial logic streams. Increasing LET allows more regions to be sensitive to this effect, yielding a larger cross section. The second SEU mechanism, which starts at an LET of about 40-60, arises when the charge-collection disturbance cloud becomes large enough to upset multiple redundant storage nodes within the SERT cell itself. In this 0.35 μm library, the node separation is several microns. However, since it takes less charge to upset a node operating at 0.5 V, with transistors having effective thresholds around 70 mV, this is likely the effect being observed. Also, the fact that the per-bit memory upset cross sections for the CULPRiT devices and the commercial technologies are approximately equal, as shown in Figure 9, indicates that the cell itself has become sensitive to upset.

IX. SUMMARY
A detailed comparison of the SEE sensitivity of an HBD technology (CULPRiT), utilizing the 8051 microcontroller as a test vehicle, has been completed. This paper discusses the test methodology used and presents a comparison of the commercial versus CULPRiT technologies based on the data taken. The CULPRiT devices consistently show significantly higher threshold LETs and an immunity to latchup. In all but the memory test at the highest LETs, the cross-section curves for all upset events are one to two orders of magnitude lower than those of the commercial devices. Additionally, a theory based on the CULPRiT technology is presented that explains these results.
This paper also demonstrates the test methodology for quantifying the level of hardness designed into an HBD technology. By using the HBD technology in a real-world device structure (i.e., not just a test chip), and comparing results to equivalent commercial devices, one can have confidence in the level of hardness that would be available from that HBD technology in any circuit application.

ACKNOWLEDGEMENTS
The authors of this paper would like to acknowledge the sponsors of this work: the NASA Electronic Parts and Packaging Program (NEPP), NASA Flight Programs, and the Defense Threat Reduction Agency (DTRA).

(Complete Word Version) Foreign-Language Translation for a Microcontroller-Based Design


Henan Polytechnic University Graduation Design (Thesis) Report

This paper presents the design of a pulse measuring instrument based on a microcontroller. Several circuit modules play key roles in the system, such as the heart-rate acquisition circuit and the display circuit, which connect to the STC89C52 microcontroller through its serial port. The design uses the STC89C52 as the central control unit and an ST188 infrared photoelectric sensor to collect the pulse signal, which is then amplified by an LM358 op-amp; after filtering, amplification, and wave shaping, a stable signal is obtained, enabling rapid heart-rate detection. The user can also set the acceptable pulse range with buttons; when a measurement falls outside this range, the buzzer driver module sounds an alarm, and the results are shown on the liquid-crystal display. Experimental results show that the instrument's readings essentially meet the practical requirements; the STC89C52's strong noise immunity and the convenience of the LCD1602 display controller allow these functions to be completed successfully. The production cost is under 100 yuan; with its low price, simple operation, low power consumption, and high reliability, the instrument is well suited to families and individuals.
Heart rate, in professional terms, describes the cycle of the human heartbeat. Modern Chinese dictionaries gloss "pulse" as the frequency of the heartbeat, so heart rate can also be expressed as the speed of the heart rhythm per unit of time.
Everyone's heart-rate signal carries rich physiological and psychological information, because the health of the body's internal organs is reflected in the pulse. This discovery gradually attracted the attention of many clinicians. In China, pulse diagnosis has been regarded as part of the essence of traditional Chinese medicine, with a clinical history of roughly 2,600 years. However, because diagnosis by finger palpation is affected by sweat glands on the fingers, errors that cannot be ignored arise, leading to inaccurate measurement. Measuring the pulse at the ear has often been used instead: although the pulse signal obtained from the ear waveform is relatively clean, it is weak, and when the seasons change it is easily affected by environmental temperature, again yielding inaccurate values.
With the development of world science, technology, and the economy, cherishing life and caring for health have become a common pursuit of humankind. According to health-bureau statistics, cardiovascular and cerebrovascular diseases kill more people every year than any other cause; the cost of care is high, placing a great burden on families, government, and society. In recent years, owing to the accelerated pace of life, unreasonable eating habits, and the impact of junk food, the incidence of cardiovascular and cerebrovascular disease has risen year by year. How to reduce its morbidity and mortality scientifically and harmlessly, and effectively lighten the burden on society and families, has become a very serious problem facing people all over the world.
The world's first lever-type pulse recorder was created by Vierordt in 1854.
It used a lever and a pressure drum to trace the pulse waveform, the first time a human pulse had been recorded without invasion of the body, and it caused a great sensation. Domestic development started from a comparatively low point: in the early 1950s, Zhu Yan introduced the pulse instrument into the objective study of pulse diagnosis in traditional Chinese medicine. In recent years, non-invasive vascular function testing has gradually attracted the attention of medical professionals. Since about 1980 it has seen small-scale use; its principle rests roughly on hemodynamic rheology and elastic-chamber theory. It is characterized by a multi-signal acquisition module combining a temperature module, a blood-pressure cuff module, and a blood-oxygen module: during occlusion and release of the brachial artery, the changes in fingertip temperature, oxygen saturation, and pulse-wave parameters are recorded. From clinical trial data, processed and statistically analyzed, a quantitative formula for vascular function is established and vascular function is evaluated. The method is non-invasive, simple to operate, accurate, repeatable, and convenient for clinical application; it automatically generates a diagnosis of cardiovascular function and an analysis of health status, and offers a related medical solution.
Pulse testing is no longer confined to traditional manual palpation or the stethoscope; electronic devices alone can obtain more accurate data. In today's society, most electronic measuring instruments are developing toward digitization and automation. A pulse measuring instrument not only performs well and has a simple structure, but also has good value for application and popularization. In general, the development of pulse measuring instruments shows the following trends. First, automatic analysis of the measured pulse without human intervention: with traditional instruments, an experienced doctor must first make an initial analysis of the pulse signal and then a comprehensive analysis to confirm the result, which wastes much manpower and is subject to relatively large human error. Second, wide application of digital and other advanced technology: higher integration and easier portability depend on the rapid development of digital technology, and digital signal processing reduces interference and makes results more accurate. Third, increasingly evident multi-functionality. Fourth, low cost, portability, and better value for popularization, bringing convenience to the general public.
Hospitals have always based most clinical diagnosis and treatment on the physiological and pathological information extracted from the human pulse wave. In China, feeling the pulse is the method doctors of traditional Chinese medicine most commonly use to diagnose disease, and it has been in use ever since antiquity.
The pulse signal emitted by the human body contains comprehensive, integrated information: heart-rate speed, full waveform, period, and amplitude. To a large extent it can reflect information from every part of the body (for example blood viscosity and blood velocity). Although these biological signals exist throughout the human body, their intensity is relatively weak, and the problem is more obvious in a noisy environment.
The principle of this graduation design is to use the STC89C52 microcontroller as the central processor. The pulse signal is collected by the sensor; the timer inside the microcontroller sets the measurement window; finally, the heart rate is obtained by having the STC89C52 accumulate the signal counts. A normal heartbeat is about 60-100 beats per minute. The key-input module in the circuit diagram lets the user set the heart-rate range with buttons; above or below the set range the heart may be at risk, and the buzzer driver module drives the buzzer to sound an alarm. The final measurement is displayed on the LCD. The design can be checked by watching whether the IR indicator lamp flashes: sustained, stable flashing indicates that the test result is correct and the error small, whereas if the displayed result swings back and forth with large differences between values, an error may exist. Through the steps above, one can roughly judge one's own health; the instrument is particularly suitable for individuals and families, and is also sometimes used in nursing homes and health-care centers.
The microcontroller selected for this design is the STC89C52.
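To make the measurement principle concrete, here is a minimal sketch, in 8051 C, of the count-pulses-over-a-timed-window scheme described above. It is an illustration under stated assumptions, not the thesis code: the buzzer pin, the 11.0592 MHz crystal, the 10 s gate, and the LCD helper are all assumed, and the ST188 conditioning chain is taken to deliver one clean falling edge per heartbeat on INT0.

    #include <reg52.h>

    sbit BUZZER = P2^0;                     /* assumed buzzer pin (active low) */

    static volatile unsigned int beats;     /* pulses seen in the window  */
    static volatile unsigned char ticks;    /* elapsed 50 ms timer ticks  */

    static unsigned char bpm_low  = 60;     /* alarm limits, set by keys  */
    static unsigned char bpm_high = 100;

    /* One falling edge per heartbeat from the shaped ST188 signal. */
    void ext0_isr(void) interrupt 0 { beats++; }

    /* Timer 0, mode 1: 50 ms per overflow with an 11.0592 MHz crystal. */
    void timer0_isr(void) interrupt 1
    {
        TH0 = 0x4C; TL0 = 0x00;             /* reload: 65536-46080 cycles */
        ticks++;
    }

    void main(void)
    {
        unsigned int bpm;

        TMOD = 0x01;                        /* timer 0, 16-bit mode      */
        TH0 = 0x4C; TL0 = 0x00;
        IT0 = 1; EX0 = 1;                   /* INT0 on falling edge      */
        ET0 = 1; EA = 1; TR0 = 1;

        for (;;) {
            if (ticks >= 200) {             /* 200 x 50 ms = 10 s window */
                EA = 0;                     /* snapshot counters atomically */
                bpm = beats * 6;            /* beats/10 s -> beats/min   */
                beats = 0; ticks = 0;
                EA = 1;
                /* Sound the alarm outside the set range, as described. */
                BUZZER = (bpm < bpm_low || bpm > bpm_high) ? 0 : 1;
                /* lcd1602_show(bpm);  -- hypothetical display helper    */
            }
        }
    }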

Foreign Literature on Microcontrollers with Chinese Translation


I. Foreign Literature
Title: The Application and Development of Single-Chip Microcontrollers in Modern Electronics
Single-chip microcontrollers have become an indispensable part of modern electronic systems. They are small yet powerful integrated circuits that combine a microprocessor core, memory, and input/output peripherals on a single chip. These devices offer significant advantages in terms of cost, size, and power consumption, making them ideal for a wide range of applications.
The history of single-chip microcontrollers can be traced back to the 1970s, when the first microcontrollers were developed. Since then, they have undergone significant advancements in technology and performance. Today, single-chip microcontrollers are available in a wide variety of architectures and capabilities, ranging from simple 8-bit devices to complex 32-bit and 64-bit systems.
One of the key features of single-chip microcontrollers is their programmability. They can be programmed using various languages such as C, Assembly, and Python. This flexibility allows developers to customize the functionality of the microcontroller to meet the specific requirements of their applications. For example, in embedded systems for automotive, industrial control, and consumer electronics, single-chip microcontrollers can be programmed to control sensors, actuators, and communication interfaces.
Another important aspect of single-chip microcontrollers is their low power consumption. This is crucial in battery-powered devices and portable electronics, where energy efficiency is of paramount importance. Modern single-chip microcontrollers incorporate advanced power-management techniques to minimize power consumption while maintaining optimal performance.
In addition to their use in traditional electronics, single-chip microcontrollers are also playing a significant role in the emerging fields of the Internet of Things (IoT) and wearable technology. In IoT applications, they can be used to collect and process data from various sensors and communicate it wirelessly to a central server. Wearable devices such as smartwatches and fitness trackers rely on single-chip microcontrollers to monitor vital signs and perform other functions.
However, the design and development of systems using single-chip microcontrollers also present certain challenges. Issues such as real-time performance, memory management, and software reliability need to be carefully addressed to ensure the successful implementation of the applications. Moreover, the rapid evolution of technology requires developers to constantly update their knowledge and skills to keep up with the latest advancements in single-chip microcontroller technology.
In conclusion, single-chip microcontrollers have revolutionized the field of electronics and continue to play a vital role in driving technological innovation. Their versatility, low cost, and small form factor make them an attractive choice for a wide range of applications, and their importance is expected to grow further in the years to come.
II. Chinese Translation (rendered here in English)
Title: The Application and Development of Single-Chip Microcontrollers in Modern Electronics. The Chinese translation opens: single-chip microcontrollers have become an indispensable part of modern electronic systems.

(Complete Word Version) Microcontroller Foreign Literature Translation


Chinese Source Text (translated): The Microcontroller

The single-chip microcomputer, also known as the microcontroller unit (MCU), was first used in the field of industrial control. Microcontrollers evolved from chip-level dedicated processors that contained only a CPU. The earliest design idea was to integrate a large number of peripherals together with the CPU on one chip, making the computer system smaller and easier to embed in complex control equipment with strict size requirements. Intel's Z80 was the earliest processor designed along these lines; from then on, microcontrollers and dedicated processors went their separate ways.

Early microcontrollers were all 8-bit or 4-bit parts. The most successful was Intel's 8031, which won wide praise for being simple, reliable, and reasonably capable. The MCS-51 family of microcontrollers was then developed from the 8031, and systems based on this family remain in wide use to this day. As the demands of industrial control rose, 16-bit microcontrollers appeared, but they were never widely adopted because their price/performance ratio was unattractive. After the 1990s, with the boom in consumer electronics, microcontroller technology advanced enormously. With the wide adoption of Intel's i960 series and especially the later ARM series, 32-bit microcontrollers quickly displaced 16-bit parts at the high end and entered the mainstream market. The performance of traditional 8-bit microcontrollers has also risen rapidly, with processing power hundreds of times greater than in the 1980s. Today, high-end 32-bit microcontrollers run at clock rates above 300 MHz, rivaling the dedicated processors of the mid-1990s, while ordinary models leave the factory for as little as one US dollar and even the highest-end models cost only ten dollars.

Modern microcontroller systems are no longer developed and used only on bare metal; a great many dedicated embedded operating systems are deployed across the full range of microcontrollers, and the high-end microcontrollers serving as the core processors of PDAs and mobile phones can even run dedicated Windows and Linux operating systems directly.

Microcontrollers suit embedded systems better than dedicated processors do, and so they have found the widest application. In fact, microcontrollers are the most numerous computers in the world: almost every electronic or mechanical product in modern life has one integrated into it. Mobile phones, telephones, calculators, household appliances, electronic toys, PDAs, and computer accessories such as mice each carry one or two microcontrollers, and a personal computer also has quite a few of them at work inside.

A car is typically fitted with more than 40 microcontrollers, and a complex industrial control system may even have hundreds working at the same time. Microcontrollers not only far outnumber PCs and all other computers combined; they outnumber human beings.

The microcontroller, also called the single-chip microcontroller, is not a chip that performs one logic function; it integrates a whole computer system onto a single chip, the equivalent of a miniature computer. Compared with a full computer, the only thing a microcontroller lacks is the I/O devices. In short: one chip becomes one computer. Its small size, light weight, and low price make it convenient to learn, apply, and develop with, and learning to use a microcontroller is the best way to understand the principles and structure of computers.

Internally a microcontroller contains modules similar in function to a PC's, such as a CPU, memory, a parallel bus, and storage devices that play the role of a hard disk. The difference is that all of these components perform far below those of a home computer, but the price is also low, generally no more than 10 RMB, which is quite sufficient for fairly simple jobs such as controlling electrical appliances. You can find microcontrollers in the fully automatic drum washing machines, range hoods, VCD players, and other appliances we use today; they mainly serve as the core component of the control section.

A microcontroller is an online, real-time control computer. "Online" means on-site control, which demands strong immunity to interference and low cost; these are the main differences from offline computers such as the home PC.

The microcontroller chip. A microcontroller runs on a program, and the program can be modified. Different programs implement different functions, in particular some special, unique functions that other devices could achieve only with great effort, and some that even great effort could hardly achieve. A function of modest complexity implemented purely in hardware, with the 74-series logic developed in the United States in the 1950s or the CD4000 series of the 1960s, would fill a large PCB; done with the microcontroller families successfully marketed in the US in the 1970s, the outcome is worlds apart, simply because the program you write lets a microcontroller deliver high intelligence, high efficiency, and high reliability.

Because microcontrollers are cost-sensitive, the dominant software is still the lowest-level language, assembly, the lowest-level language above binary machine code. If it is so low-level, why is it still used? Many high-level languages have reached the level of visual programming; why not use them? The reason is simple: the microcontroller has neither a CPU like a home computer's nor mass storage like a hard disk. Even a small program written in a visual high-level language reaches tens of kilobytes with just a single button in it, which is nothing for a home PC's hard disk but unacceptable for a microcontroller. A microcontroller must use its hardware resources very efficiently, so assembly, primitive as it is, remains in heavy use. By the same token, if a supercomputer's operating system and applications were moved onto a home PC, the PC could not bear them either.

It may be said that the twentieth century spanned three "electric" ages: the electrical age, the electronic age, and the computer age we have now entered. This "computer", however, usually means the personal computer, PC for short, composed of a main unit, keyboard, display, and so on. There is another class of computer that most people are less familiar with: the microcontroller (also called the MCU), which endows all kinds of machinery with intelligence. As the name implies, the minimal system of this computer uses just one integrated circuit, yet it can perform simple computation and control. Because it is small, it is usually hidden in the "belly" of the machine it controls. In the whole device it plays a role like the human brain: if it fails, the whole device is paralyzed.

Such microcontrollers are now used in a very wide range of fields: intelligent instruments, real-time industrial control, communications equipment, navigation systems, household appliances, and more. Once a product adopts a microcontroller it is effectively upgraded to a new generation, and the word "smart" is often prefixed to its name, as in "smart washing machine". Some products turned out today by factory technicians or amateur electronics developers either have circuits that are too complex, or functions that are too simple and extremely easy to copy. Tracing the cause, the sticking point is probably that the product uses no microcontroller or other programmable logic device.

History of the microcontroller. The microcontroller was born at the end of the 1970s and has passed through three major stages: SCM, MCU, and SoC.

1. The SCM (Single Chip Microcomputer) stage mainly sought the best architecture for embedded systems in single-chip form. The success of this "innovation model" set SCM on a development path completely different from that of general-purpose computers. Intel deserves great credit for opening up the independent development path of embedded systems.

2. In the MCU (Micro Controller Unit) stage, the main direction of technical development was to keep adding the peripheral and interface circuits that target systems demand of embedded applications, highlighting intelligent control of the target object. The fields involved are all tied to the target system, so the burden of developing MCUs inevitably fell on electrical and electronics manufacturers. Seen from this angle, Intel's gradual fade-out from MCU development had its objective causes. In MCU development the most famous manufacturer is Philips, which, with its great strength in embedded applications, rapidly evolved the MCS-51 from a single-chip microcomputer into a microcontroller. So when we look back on the development path of embedded systems, we should not forget the historical contributions of Intel and Philips.

3. Embedded systems and the SoC trend. The microcontroller is the independent development path of embedded systems. An important factor in the advance toward the MCU stage was the search for the maximum on-chip solution for the application system; the development of special-purpose microcontrollers therefore naturally formed an SoC trend. With advances in microelectronics, IC design, and EDA tools, SoC-based design of microcontroller application systems will grow considerably. Accordingly, the understanding of the microcontroller can extend from the single-chip microcomputer, through the single-chip microcontroller, to the single-chip application system.

Application fields of the microcontroller. Microcontrollers have now penetrated every corner of our lives; it is hard to find a field without any trace of them. Missile guidance units, the control of aircraft instrumentation, computer network communication and data transfer, real-time control and data processing in industrial automation, the widely used smart IC cards, the safety systems of private luxury cars, the control of video recorders, camcorders, and fully automatic washing machines, as well as program-controlled toys and electronic pets: none of these can do without the microcontroller, to say nothing of the robots, intelligent instruments, and medical devices of the automatic control field. The study, development, and application of microcontrollers will therefore produce a cohort of scientists and engineers in computer applications and intelligent control.

Microcontrollers are widely used in instrumentation, household appliances, medical equipment, aerospace, and the intelligent management and process control of special-purpose equipment. The applications fall roughly into the following categories.

1. Intelligent instrumentation. With their small size, low power consumption, strong control capability, flexible expansion, miniaturization, and ease of use, microcontrollers are widely used in instruments and meters. Combined with different types of sensors, they can measure physical quantities such as voltage, power, frequency, humidity, temperature, flow, speed, thickness, angle, length, hardness, elemental composition, and pressure. Microcontroller-based control makes instruments digital, intelligent, and miniaturized, with functions more powerful than those built from discrete electronic or digital circuits. Examples are precision measuring instruments (power meters, oscilloscopes, and analyzers of all kinds).

2. Industrial control. Microcontrollers can form control systems and data acquisition systems of many kinds, for example the intelligent management of factory assembly lines.

3. Household appliances. It is fair to say that today's household appliances almost all use microcontroller control, from rice cookers, washing machines, refrigerators, air conditioners, and color televisions to other audio/video equipment and on to electronic weighing devices: all manner of products, everywhere.

4. Computer networks and communications. Modern microcontrollers generally provide communication interfaces and can exchange data with computers very conveniently, creating excellent material conditions for applications between computer networks and communications equipment. Today's communications equipment has largely achieved intelligent microcontroller control, from handsets, telephones, small program-controlled exchanges, and automated building intercom and call systems, to train radio communication, and on to the mobile phones, trunked mobile radio, and walkie-talkies seen everywhere in daily work.
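As an illustration of how simply a classic 8051-family part talks to a computer over its serial port, here is a minimal, commonly used setup sketch in 8051 C: a mode-1 UART at 9600 baud, with timer 1 as the baud-rate generator. The 11.0592 MHz crystal is an assumption; other clock frequencies need different reload values.

    #include <reg51.h>

    /* UART mode 1 (8-N-1), 9600 baud, 11.0592 MHz crystal assumed. */
    void uart_init(void)
    {
        SCON = 0x50;        /* mode 1, receiver enabled             */
        TMOD = 0x20;        /* timer 1, mode 2: 8-bit auto-reload   */
        TH1  = 0xFD;        /* 921600 / (32 * (256 - 0xFD)) = 9600  */
        TL1  = 0xFD;
        TR1  = 1;           /* start the baud-rate timer            */
    }

    void uart_putc(char c)
    {
        SBUF = c;           /* load the transmit buffer      */
        while (!TI) ;       /* wait for transmission to end  */
        TI = 0;             /* clear the flag for next byte  */
    }

    void uart_puts(const char *s)
    {
        while (*s) uart_putc(*s++);
    }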

5. Medical equipment. Microcontrollers are also used quite extensively in medical equipment, for example ventilators, analyzers of various kinds, patient monitors, ultrasound diagnostic equipment, and bedside call systems.

6. Modular use in large appliances. Certain special-purpose microcontrollers are designed to implement a specific function so that they can be applied as modules in all kinds of circuits, without requiring the user to understand their internal structure. A music chip is an example: a seemingly simple function, miniaturized into a purely electronic chip (unlike the principle of a tape player), requires complex, computer-like workings. For instance, the music signal is stored in memory in digital form (similar to ROM), read out by the microcontroller, and converted into an analog music signal (similar to a sound card). In large circuits, this modular application greatly reduces volume, simplifies the circuitry, lowers damage and error rates, and makes replacement convenient.
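The read-out-and-convert mechanism just described can be sketched in a few lines of 8051 C. This is a toy illustration, not a real music chip: the sample table, the port used as a parallel DAC, and the delay constant are all assumptions.

    #include <reg51.h>

    /* Tiny placeholder waveform stored in code memory (the "ROM"). */
    static const unsigned char code wave[8] = {
        128, 218, 255, 218, 128, 37, 0, 37      /* one 8-bit sine period */
    };

    /* Crude busy-wait setting the sample period (constant assumed). */
    static void sample_delay(void)
    {
        unsigned char i;
        for (i = 0; i < 100; i++) { }
    }

    void main(void)
    {
        unsigned char n = 0;
        for (;;) {
            P1 = wave[n];               /* digital sample out to a parallel DAC */
            n = (n + 1) & 7;            /* loop around the 8-entry table        */
            sample_delay();             /* fixes the playback sample rate       */
        }
    }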

7. Automotive equipment. Microcontrollers are applied very widely in automotive electronics, for example engine controllers, CAN-bus-based intelligent electronic engine controllers, GPS navigation systems, ABS anti-lock braking systems, and brake systems. In addition, microcontrollers have very broad uses in commerce, finance, scientific research, education, national defense, and aerospace.

Six key topics in learning and applying microcontrollers. I. The bus. We know that a circuit is always built from components connected by wires. In an analog circuit the wiring is not a problem, because the devices are generally connected in series and there are not many wires between them. A computer circuit is different: it is centered on a microprocessor, every device must connect to the microprocessor, and the devices must coordinate with one another, so a great many wires are needed. If, as in an analog circuit, separate wires ran between the microprocessor and each device, the number of wires would be astonishing. The concept of the bus was therefore introduced into the microcomputer: all the devices share the same wires, with all eight data lines of every device connected to the same eight common lines, which amounts to connecting the devices in parallel. But that alone is not enough. If two devices send data at the same time, one a 0 and the other a 1, what does the receiver actually receive? Such a situation cannot be allowed; control lines must govern the bus so that the devices work in a time-shared fashion, with only one device sending data at any moment (while several may receive simultaneously).
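The time-sharing rule at the end of this section can be made concrete with a toy simulation in C: every device's data pins sit on the same eight wires, and an enable (control) line decides who may drive. The Device structure and function names here are invented for illustration.

    #include <stdio.h>

    /* Toy model of a shared 8-bit data bus. */
    typedef struct {
        unsigned char data;     /* value the device would drive        */
        int enabled;            /* 1 = driving, 0 = high-impedance     */
    } Device;

    /* Returns 0 and the bus value when exactly one device drives;
       returns -1 on contention, which real hardware cannot tolerate. */
    int bus_read(const Device *dev, int n, unsigned char *out)
    {
        int drivers = 0;
        for (int i = 0; i < n; i++) {
            if (dev[i].enabled) { *out = dev[i].data; drivers++; }
        }
        return (drivers == 1) ? 0 : -1;
    }

    int main(void)
    {
        Device dev[3] = { {0xAA, 0}, {0x55, 1}, {0x0F, 0} };
        unsigned char v;

        if (bus_read(dev, 3, &v) == 0)
            printf("bus carries 0x%02X\n", v);   /* only device 1 drives */

        dev[0].enabled = 1;                      /* a second driver appears */
        if (bus_read(dev, 3, &v) != 0)
            printf("contention: the control lines must time-share the bus\n");
        return 0;
    }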
