Data Acquisition System — Chinese-English Foreign Literature Translation


Complete Word version: English literature on an STM32-based data acquisition system


Design of the Data Acquisition System Based on STM32

ABSTRACT

Early detection of failures in machinery equipment is one of the most important concerns in industry. To monitor rotating machinery effectively, we developed a signal acquisition system based on an STM32 microcontroller running the uC/OS-II operating system. This paper gives the overall design scheme of the system; the multi-channel vibration signals along the X, Y and Z axes of the rotary shaft can be acquired rapidly and displayed in real time. The system is characterized by a simple structure, low power consumption and miniaturization.

Keywords: STM32; data acquisition; embedded system; uC/OS-II

1. Introduction

Real-time acquisition of vibration in rotating machinery can effectively predict, assess and diagnose the operating state of equipment. Rapid vibration data acquisition and real-time analysis allow industry to monitor the state of rotating machinery and guarantee the safe running of the equipment, in order to prevent failures, reduce maintenance time and improve economic efficiency. A fault diagnosis system detects these devices through vibration signal acquisition from the rotating machinery, processes the acquired data, and then makes a timely judgment of the running state of the equipment; the data acquisition module is the core part of such a fault diagnosis system [1-4]. In practical industrial applications, the operating parameters of the equipment are acquired in order to monitor its operating state. In traditional data acquisition systems, the data from an acquisition card are generally sent to a computer, and specific software is developed for the data acquisition.
The main contribution of this paper is the design of an STM32 platform based on ARM technology, which has become mainstream in embedded systems. Data collection is moving toward high real-time performance, multiple parameters and high precision, while data storage is becoming larger in capacity, more miniaturized and more portable, with multiple communication modes and long-distance data transmission under development. Therefore, to meet the multitasking requirements of a practical acquisition system, the novelty of this article is the design of a signal acquisition system on an STM32 microcontroller running uC/OS-II.

2. Architecture of the data acquisition system

Data acquisition is a key technology for equipment monitoring, and a lot of work has been done on it recently. An embedded parallel data acquisition system based on an FPGA has been optimized to divide and allocate high-speed and low-speed A/D channels reasonably [5].
Another design uses a high-speed A/D converter and a Stratix II series FPGA for data collection and processing; its main contribution is the use of the Compact Peripheral Component Interconnect, giving the system modularity, sturdiness and scalability [6]. Where remote control is needed in special conditions, embedded operating system platforms based on Windows CE and uC/OS-II have been used to design a remote acquisition and control system with GPRS wireless technology [7-8]. To achieve multi-user data sharing, an embedded dynamic website for data acquisition management and dissemination has been built with ARM9 and the Linux operating system [9]. A data collection terminal based on the ARM7 microprocessor LPC2290 and the embedded real-time operating system uC/OS-II has been designed to solve the real-time acquisition of multi-channel small signals and multi-channel transmission [10]. Elsewhere, two parallel DSP-based systems are dedicated to data acquisition on rotating machines, where an inner signal conditioner adapts the sensor output to the input range of the acquisition stage and the design software post-processes the signal; the most frequent structure uses a DAS and an FPGA, and such schemes also depend on the DAS cost.

To meet market requirements of low power consumption, low cost and mobility, Fig.1 presents the overall structure of the proposed data acquisition system. Through an SPI interface, the system feeds the data collected by a three-axis acceleration sensor into the STM32 controller's internal 12-bit A/D conversion module; this process is non-interfering parallel acquisition.
Our system uses a 240x400 LCD with a touch screen module to display the collected data in real time.

Fig.1 Hardware Framework of the System

2.1. STM32 microcontroller

A 32-bit RISC STM32F103VET6 is used as the processor in our system. Compared with similar products, the STM32F103VET6 works at 72 MHz and combines strong performance with low power consumption, real-time capability and low cost. The processor includes 512 KB of FLASH and 64 KB of SRAM, and communicates through five serial ports, which include a CAN bus, a USB 2.0 slave-mode interface and an Ethernet interface; two RS232 ports are also included. Our system extends storage with an SST25VF016B serial memory over the SPI bus interface, which serves as temporary storage when collecting large amounts of data. Furthermore, the A/D converter has 12-bit resolution, a fastest conversion time of 1 us, and a 3.6 V full scale. In addition, the power supply circuit, reset circuit, RTC circuit and GPIO ports are designed to ensure the system's needs and normal operation.

2.2. Data acquisition

Whether the machine state is normal mainly depends on the vibration signal. In this paper, to acquire the vibration data of the rotating machinery rotor, we use the MMA7455L vibration acceleration transducer from Freescale, which collects data from the X, Y and Z axes. This kind of transducer has the advantages of low cost, small size, high sensitivity and a large dynamic range with little interference. The MMA7455L mainly consists of a gravity sensing unit and a signal conditioning circuit, and the sensor amplifies the tiny signal before preprocessing.
In the data acquisition process of our system, the error of the sampling stage is mainly caused by quantization, and this error depends on the number of bits of the A/D converter. If the maximum voltage is $V_{\max}$ and the A/D converter has $n$ bits, the quantization step is $q = V_{\max}/2^n$, and the quantization error obeys a uniform distribution on $[-q/2,\ q/2]$ [13]. The mean error, error variance and signal-to-noise ratio are then

$$\bar{e} = \int_{-q/2}^{q/2} e\,p(e)\,de = 0$$

$$\sigma_e^2 = \int_{-q/2}^{q/2} (e - \bar{e})^2\,p(e)\,de = \frac{q^2}{12}$$

$$\mathrm{SNR} = 20\log_{10}\frac{V_{\max}/(2\sqrt{2})}{\sigma_e} \approx 6.02\,n + 1.76\ \mathrm{dB}$$

where $\bar{e}$ is the average error, $\sigma_e^2$ is the error variance, and SNR is the signal-to-noise ratio.

The STM32 used in this design has up to three built-in 12-bit parallel ADCs, whose theoretical index is about 72 dB; the actual dynamic range is between 54 and 60 dB when 2 or 3 bits are affected by noise, and a dynamic range of 60 dB corresponds to a measurement range of 1000 times. For the vast majority of vibration signals, a maximum sampling rate of 10 kHz meets actual demand, and 8-12 bit ADCs are generally used at higher collection frequencies; therefore, one contribution of this work is choosing the built-in 12-bit A/D to meet the accuracy requirements of vibration signal acquisition at lower cost.
3. Software design

3.1. Transplantation of uC/OS-II

To ensure the real-time and safety requirements of data collection, this system adopts a small RTOS whose source code is open and which can easily be cut down, ported and solidified. Its basic functions include task management, resource management, storage management and system management. The RTOS supports 64 tasks, of which at most 56 are user tasks; the four highest and four lowest priorities are reserved by the system. uC/OS-II assigns task priorities according to their importance; the operating system executes tasks in priority order, and each task has an independent priority. The operating system kernel is streamlined, its multitasking performance compares well with others, and it can be ported to processors from 8-bit to 64-bit. Porting to this system requires modifying three files: OS_CPU_C.H, OS_CPU.C and OS_CPU_A.ASM. The main transplantation procedure is as follows.

A. OS_CPU_C.H

This file defines the data types and the length and growth direction of the stack for the processor. Because different microprocessors have different word lengths, the uC/OS-II port includes a series of type definitions to ensure portability. The revised code is as follows (note that the original listing mistakenly repeated INT16U for the signed 16-bit type, which must be INT16S):

    typedef unsigned char  BOOLEAN;
    typedef unsigned char  INT8U;
    typedef signed   char  INT8S;
    typedef unsigned short INT16U;
    typedef signed   short INT16S;
    typedef unsigned int   INT32U;
    typedef signed   int   INT32S;
    typedef float          FP32;
    typedef double         FP64;
    typedef unsigned int   OS_STK;
    typedef unsigned int   OS_CPU_SR;

For the Cortex-M3 processor, OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL() are defined to disable and enable interrupts, and the stack type OS_STK and the CPU register length must be set to 32 bits. In addition, OS_STK_GROWTH defines the stack growth direction, from high addresses toward low addresses.

B. OS_CPU.C

The function OSTaskStkInit() is modified according to the processor; the nine remaining user interface functions and hook functions can be left empty without special requirements, and code is produced for these functions only when OS_CPU_HOOKS_EN is set to 1 in OS_CFG.H.
The stack initialization function OSTaskStkInit() returns the new top-of-stack pointer.

C. OS_CPU_A.ASM

Most of the porting work is completed in this file, by modifying the following functions.

OSStartHighRdy() runs the highest-priority ready task. It loads the stack pointer SP from the TCB control block of the highest-priority task and restores the CPU registers, after which the task created by the user takes control.

OSCtxSw() performs task switching. When the ready queue contains a task with higher priority than the current one, the CPU starts OSCtxSw() to switch to the higher-priority task, and the current task's context is stored on its task stack.

OSIntCtxSw() has a similar function to OSCtxSw(). To ensure the real-time performance of the system, it runs the higher-priority task directly when an interrupt arrives, without storing the current task's context again.

OSTickISR() handles the clock interrupt; it uses the interrupt to schedule execution when a higher-priority task is waiting for the clock signal.

OS_CPU_SR_Save() and OS_CPU_SR_Restore() switch interrupts off and on when entering and leaving critical code; both are invoked through the critical-protection macros OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL().

After the completion of the above work, uC/OS-II can run on the processor.

3.2. Software architecture

Fig.2 shows the system software architecture. To display the data visually, uC/GUI 3.90 is transplanted alongside uC/OS-II; our system contains six tasks: data acquisition, data transmission, LCD display, touch screen driver, key-press management and the uC/GUI interface. First of all, we set the task priorities and schedule the tasks based on priority.
The required drivers must be completed before data acquisition, such as the A/D driver, touch panel driver and system initialization; the initializations include hardware platform initialization, system clock initialization, interrupt source configuration, GPIO port configuration, serial port initialization and parameter configuration, and LCD initialization. The process is as follows: the channel module sends a sampling command to the A/D channel and informs the receiver module that sampling has started; the receiver module prepares to receive, and the bulk data are stored in the storage module. After the first sampling is completed, the channel module sends a sampling-complete command to the receiver module; the receiver sends an interrupt request to the storage module to stop storing data, and the data are then displayed on the LCD touch screen. The data acquisition process is shown in Fig.3.

Hui-fei Zhang and Kang / Procedia Computer Science, 222-228

Fig.2 Software Architecture of System
Fig.3 Data Acquisition Flow Chart

4. Experiments

The experiment on the embedded system has been carried out, with data acquired from the MMA7455L accelerometer installed on the bench of a rotating machine. The acquired data are displayed as shown in Fig.4 and Fig.5. The system can select three channels to collect the vibration signal from the X, Y and Z directions; in this paper the sampling frequency is 5 kHz, and we collected the vibration signal in both the normal state and the unbalanced state on the same channel.
The results show that our system can display the acquired data in real time and rapidly produce a preliminary diagnosis.

Fig.4 Normal Data Acquisition
Fig.5 Unbalance Data Acquisition

5. Conclusion

This paper has designed an embedded real-time signal acquisition system targeting the mechanical failures that occur with high frequency in rotating machines. The system is based on a low-cost microcontroller. Vibration signals are picked up by a three-axis acceleration sensor with low cost and high sensitivity, acquiring data from the X, Y and Z axes. We have designed the system hardware structure and analyzed the working principle of the data acquisition module. The proposed uC/OS-II system realizes data task management and scheduling; it has a compact structure and low cost, collects and analyzes the vibration signals of rotating machines in real time, and then quickly gives diagnostic results.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (51175169); China National Key Technology R&D Program (2012BAF02B01); Planned Science and Technology Project of Hunan Province (2009FJ4055); Scientific Research Fund of Hunan Provincial Education Department (10K023).

REFERENCES

[1] Cheng, L., Yu, H., Research on intelligent maintenance unit of rotary machine, Computer Integrated Manufacturing Systems, vol. 10, issue 10, pages 1196-1198, 2004.
[2] Yu, C., Zhong, O., Zhen, D., Wei, F., Design and Implementation of Monitoring and Management Platform in Embedded Fault Diagnosis System, Computer Engineering, vol. 34, issue 8, pages 264-266, 2008.
[3] Bi, D., Gui, T., Jun, S., Dynamic Behavior of a High-speed Hybrid Gas Bearing-rotor System for a Rotating Ramjet, Journal of Vibration and Shock, vol.
28, issue 9, pages 79-80, 2009.
[4] Hai, L., Jun, S., Research of Driver Based on Fault Diagnosis System Data Acquisition Module, Machine Tool & Hydraulics, vol. 38, issue 13, pages 166-168, 2011.
[5] Hao, W., Qin, W., Xiao, S., Optimized Design of Embedded Parallel Data Acquisition System, Computer Engineering and Design, vol. 32, issue 5, pages 1622-1625, 2011.
[6] Lei, S., Ming, N., Design and Implementation of High Speed Data Acquisition System Based on FPGA, Computer Engineering, vol. 37, issue 19, pages 221-223, 2011.
[7] Chao, T., Jun, Z., Ru, G., Design of remote data acquisition and control system based on Windows CE, Microcomputer & Its Applications, vol. 30, issue 14, pages 21-27, 2011.
[8] Xiao, W., Bin, W., SMS controlled information collection system based on uC/OS-II, Computer Application, vol. 12, issue 31, pages 29-31, 2011.
[9] Ting, Y., Zhong, C., Construction of Data Collection & Release in Embedded System, Computer Engineering, vol. 33, issue 19, pages 270-272, 2007.
[10] Yong, W., Hao, Z., Peng, D., Design and Realization of Multi-function Data Acquisition System Based on ARM, Process Automation Instrumentation, vol. 32, issue 1, pages 13-16, 2010.
[11] Betta, G., Liguori, C., Paolillo, A., A DSP-Based FFT Analyzer for the Fault Diagnosis of Rotating Machine Based on Vibration Analysis, IEEE Transactions on Instrumentation and Measurement, vol. 51, issue 6, 2002.
[12] Contreras-Medina, L.M., Romero-Troncoso, R.J., Millan-Almaraz, J.R., FPGA-Based Multiple-Channel Vibration Analyzer Embedded System for Industrial Application in Automatic Failure Detection, IEEE Transactions on Instrumentation and Measurement, vol. 59, issue 1, pages 63-67, 2008.
[13] Chon, W., Shuang, C., Design and implementation of signal detection system based on ARM for shipborne equipment, Computer Engineering and Design, vol.
32, issue 4, pages 1300-1301, 2011.
[14] Miao, L., Tian, W., Hong, W., Real-time Analysis of Embedded CNC System Based on uC/OS-II, Computer Engineering, vol. 32, issue 22, pages 222-223, 2006.

Foreign Literature Translation — Industrial Control Systems and Collaborative Control Systems


Today's control systems are widely used in many fields.

From pure industrial control systems to collaborative control systems (CCS), control systems keep changing and upgrading; the current trend is toward home-grown control systems, which are a variant of the two. The type of control system applied depends on the technical requirements. Moreover, practice shows that economic and social factors are also important. Any decision has its advantages and disadvantages.

Industrial controls promise reliability, complete documentation and technical support. Economic factors push decisions toward collaborative tools. Hands-on access to the source code and faster problem resolution are the arguments for home-grown control systems. Years of operating experience show that it is not important which solution is dominant, but which one works. Because heterogeneous systems exist, support for different protocols is also crucial.

This paper describes industrial control systems, PLC-controlled turnkey systems, and CCS tools, as well as the interoperation between them.

Introduction: In the early 1980s, with the installation of the cryogenic control system for the HERA (Hadron-Elektron-Ring-Anlage) accelerator, DESY (the German Electron Synchrotron laboratory) began working on process control in earnest. This new technology was necessary because the existing hardware was not capable of handling standard process control signals, such as 4-20 mA current input and output signals.

Moreover, the software could not run PID control loops at a stable repetition rate of 0.1 s.

In addition, sequencing programs proved especially important in implementing the startup and shutdown procedures of the complex cryogenic refrigeration system. It was necessary to add interfaces to solve the bus problem and to add computing power for the cryogenic controls. Because the installed D/3 system [1] only provided a serial connection to the Multibus board, a DMA connection to VME was implemented to emulate the functions of the Multibus board. The computing power for the temperature converters comes from a Motorola MVME 167 CPU with a bus adapter, together with an MVME 162 CPU. The operating system is VxWorks, and the application software is EPICS. Because this application was quite successful, it was also adopted by the utilities management group, which was looking for a common solution to supervise their distributed PLCs.

DESY's selection of a process control system. Distributed control system (D/3): a market survey led to the D/3 system from GSE being chosen for the HERA cryogenic refrigeration plant.

Database — Chinese-English Foreign Literature Translation


Database Management Systems

A database (sometimes spelled "data base"), also called an electronic database, refers to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data.

Complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information.
In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers.

A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users. Several major trends are emerging that enhance the value and usefulness of database management systems:

Managers, who require more up-to-date information to make effective decisions.

Customers, who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.

Users, who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.

Organizations, which discover that information has a strategic value; they utilize their database systems to gain an edge over their competitors.

The Database Model

A data model describes a way to structure and manipulate the data in a database. The structural part of the model specifies how data should be represented (such as trees, tables, and so on). The manipulative part of the model specifies the operations with which to add, delete, display, maintain, print, search, select, sort, and update the data.

Hierarchical Model

The first database management systems used a hierarchical model — that is, they arranged records into a tree structure. Some records are root records and all others have unique parent records.
The structure of the tree is designed to reflect the order in which the data will be used: the record at the root of a tree will be accessed first, then records one level below the root, and so on.

The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you know, an organization chart often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.

In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field, and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.

Relational Model

A major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management, called the relational model, which uses a table as its data structure.

The relational database is the most widely used database structure. Data is organized into related tables. Each table is made up of rows called records and columns called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person's last name, first name, and street address.

Structured Query Language (SQL) is a query language for manipulating data in a relational database. It is nonprocedural, or declarative: the user need only specify an English-like description of the operation and the desired record or combination of records.
A query optimizer translates the description into a procedure to perform the database manipulation.

Network Model

The network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network model historically has had a performance advantage over other database models. Today, such performance characteristics are only important in high-volume, high-speed transaction processing such as automatic teller machine networks or airline reservation systems.

Both hierarchical and network databases are application specific. If a new application is developed, maintaining the consistency of databases in different applications can be very difficult. For example, suppose a new pension application is developed. The data are the same, but a new database must be created.

Object Model

The newest approach to database management uses an object model, in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks. The query language used for the object model is the same object-oriented programming language used to develop the database application. This can create problems because there is no simple, uniform query language such as SQL. The object model is relatively new, and only a few examples of object-oriented databases exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model.

Distributed Database

Similarly, a distributed database is one in which different parts of the database reside on physically separated computers.
One goal of distributed databases is access to information without regard to where the data might be stored. Keep in mind that once the users and their data are separated, communication and networking concepts come into play. Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.

A drawback to some distributed systems is that they are often based on what is called a mainframe-centric model, in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach. With databases under centralized control, many of the problems of data integrity that we mentioned earlier are solved. But today's personal computers, departmental computers, and distributed processing require computers and their applications to communicate with each other on a more equal, or peer-to-peer, basis. In a database, the client/server model provides the framework for distributing databases.

One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one another. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network. When the resources are data in a database, the client/server model provides the framework for distributing databases.

A file server is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server.
This is useful, for example, if the files are large and require fast access. In such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer.

Advantages of the latter server include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage, however, is that individual read/write requests are being moved across the network, and problems can arise when updating files. Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problem is called record locking, which means that the first request makes other requests wait until the first request is satisfied. Other users may be able to read the record, but they will not be able to change it.

A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer. If the application is designed with the client/server model in mind, the query-language part on the personal computer simply sends the query across the network to the database server and requests to be notified when the data are found.

Examples of distributed database systems can be found in the engineering world. Sun's Network Filing System (NFS), for example, is used in computer-aided engineering applications to distribute data among the hard disks in a network of Sun workstations.

Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used. Departmental computers within a large corporation, for example, should have data reside locally, yet those data should be accessible by authorized corporate management when they want to consolidate departmental data.
DBMS software will protect the security and integrity of the database, and the distributed database will appear to its users as no different from the non-distributed database.

In this information age, the data server has become the heart of a company. This one piece of software controls the rhythm of most organizations and is used to pump information lifeblood through the arteries of the network. Because of the critical nature of this application, the data server is also one of the most popular targets for hackers. If a hacker owns this application, he can cause the company's "heart" to suffer a fatal arrest.

Ironically, although most users are now aware of hackers, they still do not realize how susceptible their database servers are to hack attacks. Thus, this article presents a description of the primary methods of attacking database servers (also known as SQL servers) and shows you how to protect yourself from these attacks.

You should note that this information is not new. Many technical white papers go into great detail about how to perform SQL attacks, and numerous vulnerabilities have been posted to security lists that describe exactly how certain database applications can be exploited. This article was written for the curious non-SQL experts who do not care to know the details, and as a review for those who do use SQL regularly.

What Is a SQL Server?

A database application is a program that provides clients with access to data. There are many variations of this type of application, ranging from the expensive enterprise-level Microsoft SQL Server to the free and open source MySQL. Regardless of the flavor, most database server applications have several things in common.

First, database applications use the same general programming language known as SQL, or Structured Query Language. This language, also known as a fourth-generation language due to its simple syntax, is at the core of how a client communicates its requests to the server.
Using SQL in its simplest form, a programmer can select, add, update, and delete information in a database. However, SQL can also be used to create and design entire databases, perform various functions on the returned information, and even execute other programs.

To illustrate how SQL can be used, the following is an example of a simple standard SQL query and a more powerful SQL query:

Simple: "Select * from dbFurniture.tblChair"

This returns all information in the table tblChair from the database dbFurniture.

Complex: "EXEC master..xp_cmdshell 'dir c:\'"

This short SQL command returns to the client the list of files and folders under the c:\ directory of the SQL server. Note that this example uses an extended stored procedure that is exclusive to MS SQL Server.

The second function that database server applications share is that they all require some form of authenticated connection between client and host. Although the SQL language is fairly easy to use, at least in its basic form, any client that wants to perform queries must first provide some form of credentials that will authorize the client; the client also must define the format of the request and response.

This connection is defined by several attributes, depending on the relative location of the client and what operating systems are in use. We could spend a whole article discussing various technologies such as DSN connections, DSN-less connections, RDO, ADO, and more, but these subjects are outside the scope of this article. If you want to learn more about them, a little Google'ing will provide you with more than enough information. However, the following is a list of the more common items included in a connection request:

- Database source
- Request type
- Database
- User ID
- Password

Before any connection can be made, the client must define what type of database server it is connecting to.
This is handled by a software component that provides the client with the instructions needed to create the request in the correct format. In addition to the type of database, the request type can be used to further define how the client's request will be handled by the server. Next comes the database name, and finally the authentication information.

All the connection information is important, but by far the weakest link is the authentication information, or lack thereof. In a properly managed server, each database has its own users with specifically designated permissions that control what type of activity they can perform. For example, one user account would be set up as read-only for applications that only need to access information. Another account should be used for inserts or updates, and maybe even a third account would be used for deletes.

This type of account control ensures that any compromised account is limited in functionality. Unfortunately, many database programs are set up with null or weak passwords, which leads to successful hack attacks.

Translation: Introduction to Database Management Systems

A database (sometimes spelled "data base"), also called an electronic database, is a collection of data or information organized specifically for rapid search and retrieval by computer.
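The query-construction and least-privilege issues described above are easiest to see in code. The following sketch is illustrative only: it uses Python's built-in sqlite3 module as a stand-in for a production SQL server, and the table name is borrowed from the dbFurniture example in the text. It contrasts a string-concatenated query with a parameterized one.

```python
import sqlite3

# Illustrative only: sqlite3 stands in for a real database server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblChair (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO tblChair (name) VALUES ('armchair')")

# Unsafe: string concatenation lets attacker-controlled input rewrite the query.
user_input = "armchair' OR '1'='1"
unsafe = "SELECT * FROM tblChair WHERE name = '" + user_input + "'"
print(len(conn.execute(unsafe).fetchall()))  # matches every row in the table

# Safe: a parameterized query treats the input as data, never as SQL.
safe_rows = conn.execute(
    "SELECT * FROM tblChair WHERE name = ?", (user_input,)
).fetchall()
print(len(safe_rows))  # matches no rows: the injection string is just a literal
```

Combined with per-task accounts (read-only, insert/update, delete), parameterization removes the most common path from a compromised input field to a compromised database.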

RFID Handbook (Second Edition): Translated Foreign Literature

Graduation thesis (design) literature translation
Translated from: RFID Handbook (Second Edition)
Graduation project title: Design of a High-Speed Data Acquisition System for Power Systems
Translated document title: RFID Handbook (Second Edition)
Student: Weng Xuejiao
School (department): School of Electronic Information
Major and class: Electrical Engineering 10803
Supervisor: Tang Taobo
Advisor: Tang Taobo
Period: February 2012 to June 2012

RFID Handbook: Fundamentals and Applications in Contactless Smart Cards and Identification, Second Edition
Klaus Finkenzeller
Copyright 2003 John Wiley & Sons, Ltd.
ISBN: 0-470-84402-7

5 Frequency Ranges and Radio Licensing Regulations

5.1 Frequency Ranges

Because RFID systems generate and radiate electromagnetic waves, they are legally classified as radio systems. The function of other radio services must under no circumstances be disturbed or impaired by the operation of RFID systems.

It is particularly important to ensure that RFID systems do not interfere with nearby radio and television broadcasting, mobile radio services (police, security services, industry), marine and aeronautical radio services, and mobile telephones.

For RFID systems, the need to exercise care with regard to other radio services significantly restricts the range of suitable operating frequencies (Figure 5.1). For this reason, it is usually only possible to use frequency ranges that have been reserved specifically for industrial, scientific or medical applications.

These frequencies are classified worldwide as ISM ranges (Industrial-Scientific-Medical), and they can also be used for RFID applications.

Figure 5.1: Frequency ranges used by RFID systems, from the long-wave range below 135 kHz, through short wave and ultra-short wave, up to the microwave range with a maximum frequency of 24 GHz. Typical frequencies include 100-135 kHz, 6.78 MHz, 13.56 MHz, 27.125 MHz, 40 MHz, 433 MHz, 868 MHz, 915 MHz, 2.45 GHz, 5.8 GHz and 24 GHz.
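As a small worked example relating these bands to antenna design, the wavelength of each frequency follows from λ = c / f. The band values below are taken from Figure 5.1; the LF/HF/UHF labels are the conventional ones.

```python
# Wavelength λ = c / f for representative RFID/ISM frequencies from Figure 5.1.
C = 299_792_458  # speed of light, m/s

ism_bands_hz = {
    "125 kHz (LF)": 125e3,
    "13.56 MHz (HF)": 13.56e6,
    "433 MHz (UHF)": 433e6,
    "868 MHz (UHF)": 868e6,
    "2.45 GHz (microwave)": 2.45e9,
}

for name, f in ism_bands_hz.items():
    # Long-wave systems work in the near field; microwave systems radiate.
    print(f"{name}: wavelength = {C / f:.3f} m")
```

The output makes the contrast in the figure concrete: below 135 kHz the wavelength is kilometres long, while at 2.45 GHz it is about 12 cm.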

Thesis Chinese-English Translation (Translated Text)

Number:
Guilin University of Electronic Technology, School of Information Science and Technology
Graduation design (thesis) literature translation (translated text)
Department: Department of Electronic Engineering
Major: Electronic Information Engineering
Student: Wei Jun
Student number: 0852100329
Supervisor's institution: School of Information Science and Technology, Guilin University of Electronic Technology
Supervisor: Liang Yong
Title: Lecturer
June 5, 2012

Design and Implementation of an Embedded Linux System Based on the Modbus Protocol

Abstract: With the rapid development of embedded computing, a new generation of industrial-automation data acquisition and monitoring systems built around high-performance embedded microprocessors has emerged, and such systems adapt well to their applications. They meet strict requirements for reliability, cost, size and power consumption.

In industrial automation systems, the Modbus communication protocol is an industrial standard, widely used in large-scale industrial equipment systems including DCS, programmable logic controllers, RTUs and intelligent instruments. To meet the needs of embedded data monitoring in industrial automation, this paper designs an embedded data acquisition and monitoring system based on the Modbus protocol in a Linux environment. Over a serial port, the Modbus protocol is implemented in master/slave fashion and includes two transmission modes: ASCII and RTU. Thus, devices implementing the various slave-side protocols can communicate over serial Modbus. The Modbus protocol implementation on the embedded platform is stable and reliable, and it has good prospects for embedded data acquisition and monitoring in automation application systems.

Keywords: embedded system, embedded Linux, Modbus protocol, data acquisition, monitoring and control.

1. Introduction

Modbus is a communication protocol promoted by Modicon. It is widely used in industrial automation and has become a de facto industrial standard. Control devices or measuring instruments from different manufacturers can be linked into an industrial monitoring network using the Modbus protocol. The Modbus communication protocol can serve as the communication standard for a large range of industrial equipment, including PLCs, DCS systems, RTUs and smart instruments.

With the rapid development of embedded computing, embedded data acquisition and monitoring systems built around a high-performance embedded microprocessor are an important direction of development. For embedded industrial-automation applications under embedded Linux, this paper designs and implements a Modbus-master data acquisition and monitoring system. Thus, communication devices implementing the various slave-side protocols can work over serial Modbus.
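A concrete piece of the RTU transmission mode mentioned above is the CRC-16 checksum appended to every frame. The sketch below is a minimal implementation of the standard CRC-16/Modbus algorithm; the example request bytes (slave address, function code, register range) are hypothetical, not taken from the paper.

```python
def crc16_modbus(frame: bytes) -> int:
    """CRC-16/Modbus: init 0xFFFF, reflected polynomial 0xA001.

    The result is appended to an RTU frame low byte first.
    """
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Hypothetical request: slave 0x01, function 0x03 (read holding registers),
# starting address 0x0000, quantity 0x000A.
pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x0A])
crc = crc16_modbus(pdu)
rtu_frame = pdu + bytes([crc & 0xFF, crc >> 8])  # low byte first
print(rtu_frame.hex())
```

A slave recomputes the CRC over the received bytes and discards the frame on mismatch, which is what makes RTU mode robust on noisy serial lines.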

Big Data Literature Review (English Version)

The Development and Tendency of Big Data

Abstract: "Big Data" is the most popular IT term after the "Internet of Things" and "Cloud Computing". From the source, development, status quo and tendency of big data, we can understand every aspect of it. Big data is one of the most important technologies around the world, and every country has its own way to develop the technology.

Key words: big data; IT; technology

1 The source of big data

Although the famous futurist Toffler proposed the conception of "Big Data" in 1980, for a long time it did not get enough attention, because the IT industry and the use of information sources were still at a primary stage of development[1].

2 The development of big data

It was not until the financial crisis in 2008 that IBM (a multinational IT corporation) proposed the conception of the "Smart City" and vigorously promoted the Internet of Things and Cloud Computing, so that information data went into massive growth while the need for the technology became urgent. Under this condition, some American data processing companies focused on developing large-scale concurrent processing systems; "Big Data" technology then became available sooner, and the Hadoop mass-data concurrent processing system received wide attention. Since 2010, IT giants have proposed their own products in the big data area. Big companies such as EMC, HP, IBM and Microsoft all purchased other manufacturers relating to big data in order to achieve technical integration[1]. From this, we can see how important the big data strategy is. The development of big data owes much to big IT companies such as Google, Amazon, China Mobile and Alibaba, because they needed an optimized way to store and analyze data. Besides, there are also demands from health systems, geographic and remote-sensing applications, and digital media[2].

3 The status quo of big data

Nowadays America is in the lead in big data technology and market application.
The US federal government announced a "Big Data Research and Development" plan in March 2012, which involved six federal departments and agencies (the National Science Foundation, the Health Research Institute, the Department of Energy, the Department of Defense, the Advanced Research Projects Agency and the Geological Survey) in order to improve the ability to extract information and insight from big data[1]. This can speed up discovery in science and engineering, and it is a major move to push research institutions toward innovation.

The federal government put big data development in a strategic place, which has had a big impact on every country. At present, many big European institutions are still at the primary stage of using big data and seriously lack big data technology; most improvements and technologies of big data come from America. Therefore, Europe faces real challenges in keeping step with the development of big data. However, the financial service industry, and especially investment banking in London, is one of the earliest adopters in Europe, and its experiments and technology are as good as those of the giant American institutions; investment in big data there has remained promising. In January 2013, the British government announced that 1.89 million pounds would be invested in big data and energy-saving computing technology for earth observation and health care[3].

The Japanese government has also taken up the challenge of a big data strategy in good time. In July 2013, Japan's communications ministry proposed a comprehensive strategy called "Energy ICT of Japan" which focused on big data applications. In June 2013, the Abe cabinet formally announced the new IT strategy, "The Announcement of Creating the Most Advanced IT Country".
This announcement comprehensively expounded that Japan's new national IT strategy centers on developing open public data and big data from 2013 to 2020[4].

Big data has also drawn the attention of the Chinese government. The "Guiding Opinions of the State Council on Promoting the Healthy and Orderly Development of the Internet of Things" promote accelerating core technologies including sensor networks, intelligent terminals, big data processing, intelligent analysis and service integration. In December 2012, the National Development and Reform Commission added data analysis software to a special guide, and at the beginning of 2013 the Ministry of Science and Technology announced that big data research is one of the most important contents of the "973 Program"[1]. This program calls for research on the expression, measurement and semantic understanding of multi-source heterogeneous data; research on modeling theory and computational models; promotion of hardware and software system architectures through energy-optimal distributed storage and processing; and analysis of the relationships among complexity, computability and processing efficiency[1]. Above all, this can provide a theoretical basis for setting up a scientific system of big data.

4 The tendency of big data

4.1 See the future by big data

At the beginning of 2008, by mining and analyzing user-behavior data, Alibaba found that the overall number of sellers was on a slippery slope, and that procurement from Europe and America was also sliding. They accurately predicted the trend of world economic trade half a year in advance, and so avoided the financial crisis[2]. Document [3] cites an example that turned out to predict a cholera outbreak one year earlier by mining and analyzing data on storms, droughts and other natural disasters[3].

4.2 Great changes and business opportunities

With the recognition of big data's value, giants of every industry are all spending more money in the big data industry.
Then great changes and business opportunities come[4].

In the hardware industry, big data faces the challenges of management, storage and real-time analysis. Big data will have an important impact on the chip and storage industries; besides, some new industries will be created because of big data[4].

In the software and services area, the urgent demand for fast data processing will bring a great boom to the data mining and business intelligence industries. The hidden value of big data can create many new companies, new products, new technologies and new projects[2].

4.3 Development direction of big data

The storage technology of big data was originally the relational database. Due to its canonical design, friendly query language and efficient online transaction processing, the relational database dominated the market for a long time. However, in big data analysis its problems have been exposed: its strict design pattern, its sacrifice of functionality to ensure consistency, and its poor expansibility. The NoSQL data storage model and Bigtable, proposed by Google, then started to come into fashion[5].

Big data analysis technology using the MapReduce framework proposed by Google is used to deal with large-scale concurrent batch transactions. Using a file system to store unstructured data loses no functionality while also gaining expansibility. Later there came big data analysis platforms like HAVEn proposed by HP and FusionInsight proposed by Huawei. Beyond doubt, this situation will continue, and new technologies and measures will come out, such as next-generation data warehouses, Hadoop distributions and so on[6].

Conclusion

In this paper we analyzed the development and tendency of big data. Based on this, we know that big data is still at a primary stage, and there are many problems to deal with.
But the commercial value and market value of big data set the direction of development for the information age.

References
[1] Li Chunwei, Development Report of China's E-Commerce Enterprises, Beijing, 2013, pp. 268-270.
[2] Li Fen, Zhu Zhixiang, Liu Shenghui, The development status and the problems of large data, Journal of Xi'an University of Posts and Telecommunications, vol. 18, pp. 102-103, Sep. 2013.
[3] Kira Radinsky, Eric Horvitz, Mining the Web to Predict Future Events, in Proceedings of the 6th ACM International Conference on Web Search and Data Mining (WSDM 2013), New York: Association for Computing Machinery, 2013, pp. 255-264.
[4] Chapman A, Allen M D, Blaustein B, It's About the Data: Provenance as a Tool for Assessing Data Fitness, in Proc. of the 4th USENIX Workshop on the Theory and Practice of Provenance, Berkeley, CA: USENIX Association, 2012: 8.
[5] Li Ruiqin, Zheng Jianguo, Big Data Research: Status Quo, Problems and Tendency, Network Application, Shanghai, 1994, pp. 107-108.
[6] Meng Xiaofeng, Wang Huiju, Du Xiaoyong, Big Data Analysis: Competition and Survival of RDBMS and MapReduce, Journal of Software, 2012, 23(1): 32-45.
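The MapReduce framework discussed in section 4.3 can be illustrated with a toy, single-process word count in which map, shuffle and reduce are modeled as plain functions. This is a sketch only; real systems such as Hadoop distribute each phase across many nodes.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Map: emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big value", "data mining"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts)  # {'big': 2, 'data': 2, 'value': 1, 'mining': 1}
```

The appeal of the model is that map and reduce are independent per key, so the framework can parallelize both phases freely; only the shuffle requires data movement.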

Big Data Mining: Translated Foreign Literature

Document information:
Title: A Study of Data Mining with Big Data
Authors: V. H. Shastri, V. Sreeprada
Source: International Journal of Emerging Trends and Technology in Computer Science, 2016, 38(2): 99-103
Word count: 2,291 English words, 12,196 characters; 3,868 Chinese characters

Original text:

A Study of Data Mining with Big Data

Abstract: Data has become an important part of every economy, industry, organization, business, function and individual. Big Data is a term used to identify large data sets, typically those whose size is larger than that of a typical database. Big data introduces unique computational and statistical challenges, and it is at present expanding in most domains of engineering and science. Data mining helps to extract useful information from huge data sets characterized by their volume, variability and velocity. This article presents the HACE theorem, which characterizes the features of the Big Data revolution, and proposes a Big Data processing model from the data mining perspective.

Keywords: Big Data, Data Mining, HACE theorem, structured and unstructured.

I. Introduction

Big Data refers to enormous amounts of structured and unstructured data that overflow the organization. If this data is properly used, it can lead to meaningful information. Big data includes large amounts of data that require a great deal of real-time processing. It provides room to discover new values, to understand in-depth knowledge from hidden values, and to manage the data effectively. A database is an organized collection of logically related data which can be easily managed, updated and accessed. Data mining is the process of discovering interesting knowledge, such as associations, patterns, changes, anomalies and significant structures, from large amounts of data stored in databases or other repositories.

Big Data has three V's as its characteristics: volume, velocity and variety. Volume means the amount of data generated every second; the data is at rest, and volume is also known as the scale characteristic.
Velocity is the speed at which the data is generated; data generated from social media is an example of high-velocity data. Variety means that different types of data can be included, such as audio, video or documents; the data can be numerals, images, time series, arrays, etc.

Data mining analyses the data from different perspectives and summarizes it into useful information that can be used for business solutions and for predicting future trends. Data mining (DM), also called Knowledge Discovery in Databases (KDD) or Knowledge Discovery and Data Mining, is the process of automatically searching large volumes of data for patterns such as association rules. It applies many computational techniques from statistics, information retrieval, machine learning and pattern recognition. Data mining extracts only the required patterns from the database in a short time span. Based on the type of patterns to be mined, data mining tasks can be classified into summarization, classification, clustering, association and trend analysis.

Big Data is expanding in all domains of science and engineering, including the physical, biological and biomedical sciences.

II. BIG DATA with DATA MINING

Generally, big data refers to a collection of large volumes of data generated from various sources such as the internet, social media, business organizations and sensors. We can extract useful information from it with the help of data mining, a technique for discovering patterns, as well as descriptive, understandable models, from data at large scale.

Volume is the size of the data, which may be larger than petabytes or terabytes. The scale and growth of size make it difficult to store and analyse the data using traditional tools. Big Data should be mined within a predefined period of time.
Traditional database systems were designed to address small amounts of structured, consistent data, whereas Big Data includes a wide variety of data such as geospatial data, audio, video, unstructured text and so on.

Big Data mining refers to the activity of going through big data sets to look for relevant information. To process large volumes of data from different sources quickly, Hadoop is used. Hadoop is a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. Its distributed file system supports fast data transfer rates among nodes and allows the system to continue operating uninterrupted in case of node failure. It runs MapReduce for distributed data processing and works with structured and unstructured data.

III. BIG DATA characteristics - HACE THEOREM

We have large volumes of heterogeneous data among which complex relationships exist, and we need to discover useful information from this voluminous data. Imagine a scenario in which blind men are asked to describe an elephant: one may think the trunk is a wall, another a leg is a tree, the body a wall and the tail a rope. The blind men can exchange information with each other.

Figure 1: Blind men and the giant elephant

Some of the characteristics of big data include:

i. Vast data with heterogeneous and diverse sources: One of the fundamental characteristics of big data is the large volume of data represented by heterogeneous and diverse dimensions. For example, in the biomedical world, a single human being is represented by name, age, gender, family history, etc., while X-ray and CT scan images and videos are also used.
Heterogeneity refers to the different types of representation of the same individual, and diversity refers to the variety of features used to represent a single piece of information.

ii. Autonomous with distributed and de-centralized control: the sources are autonomous, i.e., automatically generated; they produce information without any centralized control. This is comparable to the World Wide Web (WWW), where each server provides a certain amount of information without depending on other servers.

iii. Complex and evolving relationships: As the size of the data becomes infinitely large, the relationships within it also grow. In the early stages, when data is small, there is little complexity in the relationships among the data. Data generated from social media and other sources has complex relationships.

IV. TOOLS: OPEN SOURCE REVOLUTION

Large companies such as Facebook, Yahoo, Twitter and LinkedIn benefit from and contribute to open source projects. In Big Data mining there are many open source initiatives; the most popular of them are:

Apache Mahout: Scalable machine learning and data mining open source software based mainly on Hadoop. It has implementations of a wide range of machine learning and data mining algorithms: clustering, classification, collaborative filtering and frequent pattern mining.

R: An open source programming language and software environment designed for statistical computing and visualization. R was designed by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, beginning in 1993, and is used for statistical analysis of very large data sets.

MOA: Stream data mining open source software that performs data mining in real time. It has implementations of classification, regression, clustering, frequent item set mining and frequent graph mining. It started as a project of the Machine Learning group at the University of Waikato, New Zealand, famous for the WEKA software.
The streams framework provides an environment for defining and running stream processes using simple XML-based definitions and is able to use MOA, Android and Storm.

SAMOA: A new upcoming software project for distributed stream mining that will combine S4 and Storm with MOA.

Vowpal Wabbit: An open source project started at Yahoo! Research and continuing at Microsoft Research to design a fast, scalable, useful learning algorithm. VW is able to learn from terafeature datasets and can exceed the throughput of any single machine's network interface when doing linear learning, via parallel learning.

V. DATA MINING for BIG DATA

Data mining is the process by which data coming from different sources is analysed to discover useful information. Data mining algorithms fall into four categories:

1. Association rules
2. Clustering
3. Classification
4. Regression

Association is used to search for relationships between variables; it is applied in searching for frequently visited items. In short, it establishes relationships among objects. Clustering discovers groups and structures in the data. Classification deals with associating an unknown structure with a known structure. Regression finds a function to model the data.

Table 1. Classification of Algorithms

Data mining algorithms can be converted into big MapReduce algorithms on a parallel computing basis.

Table 2. Differences between Data Mining and Big Data

VI. Challenges in BIG DATA

Meeting the challenges of Big Data is difficult. The volume is increasing every day, and the velocity is increased by internet-connected devices.
The variety is also expanding, and organizations' capability to capture and process the data is limited.

The following are the challenges in the area of Big Data when it is handled:

1. Data capture and storage
2. Data transmission
3. Data curation
4. Data analysis
5. Data visualization

The challenges of big data mining can be divided into three tiers. The first tier is the setup of data mining algorithms. The second tier includes:

1. Information sharing and data privacy.
2. Domain and application knowledge.

The third tier includes local learning and model fusion for multiple information sources:

3. Mining from sparse, uncertain and incomplete data.
4. Mining complex and dynamic data.

Figure 2: Phases of Big Data Challenges

Generally, mining data from different data sources is tedious, as the data is large. Big data is stored at different places; collecting that data is a tedious task, and applying basic data mining algorithms to it is an obstacle. Next we need to consider the privacy of the data. The third issue is the mining algorithms themselves: when we apply data mining algorithms to subsets of the data, the results may not be very accurate.

VII. Forecast of the future

There are some challenges that researchers and practitioners will have to deal with during the next years:

Analytics architecture: It is not yet clear what an optimal architecture of an analytics system should look like to deal with historic data and real-time data at the same time. An interesting proposal is the Lambda architecture of Nathan Marz, which solves the problem of computing arbitrary functions on arbitrary data in real time by decomposing the problem into three layers: the batch layer, the serving layer, and the speed layer. It combines Hadoop for the batch layer and Storm for the speed layer in the same system.
The properties of the system are: robust and fault tolerant, scalable, general and extensible; it allows ad hoc queries, requires minimal maintenance, and is debuggable.

Statistical significance: It is important to achieve significant statistical results and not be fooled by randomness. As Efron explains in his book on Large-Scale Inference, it is easy to go wrong with huge data sets and thousands of questions to answer at once.

Distributed mining: Many data mining techniques are not trivial to parallelize. To have distributed versions of some methods, a lot of research is needed, with practical and theoretical analysis, to provide new methods.

Time-evolving data: Data may evolve over time, so it is important that Big Data mining techniques are able to adapt, and in some cases to detect change first. For example, the data stream mining field has very powerful techniques for this task.

Compression: When dealing with Big Data, the quantity of space needed to store it is very relevant. There are two main approaches: compression, where we do not lose anything, and sampling, where we choose the data that is most representative. Using compression, we may take more time and less space, so we can consider it a transformation from time to space. Using sampling, we lose information, but the gains in space may be of orders of magnitude. For example, Feldman et al. use core sets to reduce the complexity of Big Data problems; core sets are small sets that provably approximate the original data for a given problem. Using merge-reduce, the small sets can then be used for solving hard machine learning problems in parallel.

Visualization: A main task of Big Data analysis is how to visualize the results. As the data is so big, it is very difficult to find user-friendly visualizations.
New techniques and frameworks to tell and show stories will be needed, as for example the photographs, infographics and essays in the beautiful book "The Human Face of Big Data".

Hidden Big Data: Large quantities of useful data are getting lost, since new data is largely untagged and unstructured. The 2012 IDC study on Big Data explains that in 2012, 23% (643 exabytes) of the digital universe would have been useful for Big Data if tagged and analyzed. However, currently only 3% of the potentially useful data is tagged, and even less is analyzed.

VIII. CONCLUSION

The amount of data is growing exponentially due to social networking sites, search and retrieval engines, media sharing sites, stock trading sites, news sources and so on. Big Data is becoming the new arena for scientific data research and for business applications. Data mining techniques can be applied to big data to acquire useful information from large datasets; used together, they can extract a useful picture from the data. Big Data analysis tools like MapReduce over Hadoop and HDFS help organizations.

Chinese translation: A Study of Data Mining with Big Data

Abstract: Data has become an important part of every economy, industry, organization, business, function and individual.
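Of the four algorithm families listed in section V (association, clustering, classification, regression), clustering is compact enough to sketch in full. Below is a minimal k-means implementation in plain Python; this is an illustrative sketch, not code from the paper, and the sample points are made up.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 2-D k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                  + (p[1] - centers[i][1]) ** 2)
            clusters[i].append(p)
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated groups of three points each.
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
          (9.8, 9.9), (10.0, 10.1), (9.9, 10.0)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # the two natural groups are recovered
```

Frameworks such as Mahout and MOA implement the same idea at scale, distributing the assignment step across nodes.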

Data Analysis: Foreign Literature with Translation

Literature 1: "The Application of Data Analysis in Business Decision-Making"

This document discusses the importance and application of data analysis in business decision-making. The study finds that data analysis yields accurate business intelligence, helping enterprises better understand market trends and consumer demand. By analyzing large volumes of data, enterprises can discover hidden patterns and correlations and thus formulate more competitive product and service strategies. Data analysis can also provide decision support, helping enterprises make informed decisions in uncertain environments. Data analysis has therefore become one of the key elements of modern business success.

Literature 2: "The Application of Machine Learning in Data Analysis"

This document discusses the application of machine learning in data analysis. The study finds that machine learning can help enterprises analyze large volumes of data more efficiently and extract valuable information from them. Machine learning algorithms can learn and improve automatically, helping enterprises discover patterns and trends in their data. By applying machine learning, enterprises can predict market demand more accurately, optimize business processes, and make more strategic decisions. The application of machine learning in data analysis is therefore gradually gaining attention and adoption by enterprises.

Literature 3: "The Application of Data Visualization in Data Analysis"

This document discusses the importance and application of data visualization in data analysis. The study finds that data visualization presents complex data relationships and trends more intuitively. Visualization helps enterprises better understand their data and discover the patterns and regularities in it. Data visualization can also support data interaction and shared decision-making, improving the efficiency and accuracy of decisions. Data visualization therefore plays a very important role in data analysis.

Translated titles:
Literature 1: The Application of Data Analysis in Business Decision-Making
Literature 2: The Application of Machine Learning in Data Analysis
Literature 3: The Application of Data Visualization in Data Analysis

Translated abstract: This survey covers the application of data analysis in business decision-making, and the roles of machine learning and data visualization within data analysis.

Data Acquisition: Translated Foreign Literature (English and Chinese)

Data acquisition: translated foreign literature (including the English original and a Chinese translation)
Source: Txomin Nieva. DATA ACQUISITION SYSTEMS [J]. Computers in Industry, 2013, 4(2): 215-237.

English original:

DATA ACQUISITION SYSTEMS
Txomin Nieva

Data acquisition systems, as the name implies, are products and/or processes used to collect information to document or analyze some phenomenon. In the simplest form, a technician logging the temperature of an oven on a piece of paper is performing data acquisition. As technology has progressed, this type of process has been simplified and made more accurate, versatile, and reliable through electronic equipment. Equipment ranges from simple recorders to sophisticated computer systems. Data acquisition products serve as a focal point in a system, tying together a wide variety of products, such as sensors that indicate temperature, flow, level, or pressure. Some common data acquisition terms are shown below.

Data collection technology has made great progress in the past 30 to 40 years. For example, 40 years ago, in a well-known college laboratory, the device used to track temperature rises consisted of thermocouples, relays, interrogators, a bundle of papers, and a pencil. Today's university students are likely to process and analyze data automatically on PCs. There are many ways you can choose to collect data. The choice of method depends on many factors, including the complexity of the task, the speed and accuracy you need, the evidence you want, and more. Whether simple or complex, a data acquisition system can operate and play its role.

The old way of using pencil and paper is still feasible in some situations, and it is cheap, readily available, and quick and easy to start. All you need is a digital multimeter (DMM) and a hand to start recording data.

Unfortunately, this method is prone to errors, collects data slowly, and requires too much manual analysis.
In addition, it can only collect data on a single channel; when you use a multi-channel DMM, the system soon becomes very bulky and clumsy. Accuracy depends on the skill of the person writing, and you may need to do the scaling yourself. For example, if the DMM is not equipped with temperature-sensor handling, you need to look up the conversion factor yourself. Given these limitations, this is an acceptable method only when you need to run a quick experiment.

Modern versions of the strip chart recorder allow you to retrieve data from multiple inputs. They provide long-term paper records of data, and because the data is in graphic format, they make it easy to collect data on site. Once a strip chart recorder has been set up, most recorders have enough internal intelligence to operate without an operator or computer. The disadvantages are a lack of flexibility and relatively low precision, often limited to a percentage point: only visible pen movement reveals a change. For long-term, multi-channel monitoring the recorders play a very good role, but beyond that their value is limited. For example, they cannot interact with other devices. Other concerns are the maintenance of pens and paper, the supply of paper, and the storage of data; the most important is the abuse and waste of paper. However, recorders are fairly easy to set up and operate, providing a permanent record of data for quick and easy analysis.

Some benchtop DMMs offer selectable scanning capabilities. The back of the instrument has a slot that receives a scanner card which can multiplex additional inputs, typically 8 to 10 channels. This is inherently limited by the front panel of the instrument, and its flexibility is also limited because it cannot exceed the number of available channels. An external PC usually handles data acquisition and analysis.

The PC plug-in card is a single-board measurement system that uses an ISA or PCI bus expansion slot in a PC.
They often offer reading rates of up to 1,000 per second. 8 to 16 channels are common, and the collected data is stored directly in the computer and then analyzed. Because the card is essentially part of the computer, it is easy to set up a test. PC cards are also relatively inexpensive, partly because they rely on the host PC to provide power, mechanical housing, and the user interface.

Data collection options

On the downside, PC plug-in cards often have only 12-bit resolution, so you cannot detect small changes in the input signal. In addition, the electronic environment inside a PC is often susceptible to noise, high clock rates, and bus noise, and the electronic contacts limit the accuracy of the PC card. These plug-in cards also measure a limited range of voltages. To measure other input signals, such as voltage, temperature, and resistance, you may need external signal-conditioning devices. Other considerations include complex calibration and overall system cost, especially if you need to purchase additional signal-conditioning devices or adapt them to the card. Take this into account: if your needs stay within the capabilities and limitations of the card, the PC plug-in card provides an attractive method for data collection.

Electronic data loggers are typical stand-alone instruments that, once configured, can measure, record, and display data without the involvement of an operator or computer. They can handle multiple signal inputs, sometimes up to 120 channels. Their accuracy rivals that of desktop DMMs, operating in a 22-bit, 0.004 percent accuracy range. Some electronic data loggers can measure proportionally, with inspection results not limited by the user's definition, and can output control signals.

One of the advantages of electronic data loggers is their internal signal conditioning.
Most can directly measure several different kinds of input signal without additional signal-conditioning devices: one channel can monitor thermocouples, RTDs, and voltages. The reference-junction compensation required for accurate thermocouple temperature measurements is typically built into the multi-channel cards. The built-in intelligence of the data logger helps you set the measurement period and specify the parameters for each channel. Once everything is configured, the data logger operates as a stand-alone device. The data are stored in internal memory, which can hold 500,000 or more readings. Connecting to a PC makes it easy to transfer the data to a computer for further analysis. Most data loggers are designed to be flexible and simple to configure and operate, and most offer options for remote operation via battery packs or other means.

Because of the A/D conversion technology they use, some data loggers have lower reading rates, especially when compared with PC plug-in cards; still, rates of up to 250 readings per second are available. Keep in mind that many of the phenomena being measured are physical in nature, such as temperature, pressure, and flow, and generally change slowly. In addition, because of the measurement accuracy of data loggers, heavy averaging of readings is unnecessary, as it often is with PC plug-in cards.

Front-end data acquisition devices are usually packaged as modules and are typically connected to a PC or controller. They are used in automated test applications to collect data and to control and cycle the detection signals of other test equipment, sending stimulus signals to the equipment under test. Front ends operate very efficiently and can match the speed and accuracy of the best stand-alone instruments.
Front-end data acquisition comes in many forms, including VXI versions such as the Agilent E1419A multifunction measurement and control module, as well as proprietary card cages. Although the cost of front-end units has come down, these systems can be very expensive, and unless you need the high level of performance they provide, their price can be prohibitive. On the other hand, they do offer considerable flexibility and measurement capability.

Good, low-cost data loggers offer a moderate number of channels (20-60) and scan rates that are relatively low but sufficient for most engineering applications. Some key applications include:
• product characterization
• thermal profiling of electronic products
• environmental testing
• environmental monitoring
• component characterization
• battery testing
• building and computer facility monitoring

A new system design

The conceptual model of a generic system can be applied in the analysis phase of a specific system to understand the problem better and to specify the best solution more easily in light of that system's particular requirements. The conceptual model of a generic system can also be used as a starting point for designing a specific system. Using a generic conceptual model therefore saves time and reduces the cost of developing a specific system. To test this hypothesis, we developed a DAS for railway equipment based on our generic DAS conceptual model. In this section, we summarize the main results and conclusions of this DAS development.

We analyzed the equipment model package. The result of this analysis is a partial conceptual model of the system consisting of a three-level equipment-model hierarchy. We then analyzed the equipment item package in the equipment context. Based on this analysis, we introduced a three-level item hierarchy into the conceptual model of the system.
Train items are specializations of equipment items. We analyzed the equipment-model monitoring standard package in the equipment context. One requirement of the system is the ability to record condition monitoring reports using predefined data sets. We also analyzed the equipment-item monitoring standard package in the equipment context. The requirements of the system are: (i) the ability to record condition monitoring reports and event monitoring reports for items, triggered by either time-trigger conditions or event-trigger conditions; (ii) the definition of both private and public monitoring standards; and (iii) the ability to define custom as well as predefined train data sets. We therefore introduced train-item monitoring standards, public standards, and private standards as specializations of the equipment-item monitoring standard; train-item condition monitoring standards and train-item event monitoring standards as specializations of the equipment-item condition and event monitoring standards; train-item trigger conditions, train-item time-trigger conditions, and train-item event-trigger conditions as specializations of the equipment-item trigger condition, time-trigger condition, and event-trigger condition; and train-item data sets, train custom data sets, and train predefined data sets as specializations of the equipment-item data set, custom data set, and predefined data set.

Finally, we analyzed observations and monitoring reports in the equipment context. The system is required to record measurements and category observations; in addition, condition and event monitoring reports can be recorded. We therefore introduced the concepts of observation, measurement, category observation, and monitoring report into the conceptual model of the system.

Our generic DAS conceptual model plays an important role in the design of the railway-equipment DAS.
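As a sketch of how the specializations listed above could map onto design classes, consider the following; the class names are my own rendering of the concepts in the text, not the authors' actual design classes:

```python
# Hypothetical class hierarchy mirroring the specialization chains described
# in the conceptual model: a train-item standard *is an* equipment-item
# standard, which *is a* monitoring standard.

class MonitoringStandard:
    """Generic monitoring standard concept."""

class EquipmentItemMonitoringStandard(MonitoringStandard):
    """Monitoring standard attached to an equipment item."""

class TrainItemConditionMonitoringStandard(EquipmentItemMonitoringStandard):
    """Train-specific specialization for condition monitoring reports."""

class TriggerCondition:
    """Generic trigger for producing a monitoring report."""

class TimeTriggerCondition(TriggerCondition):
    """Fires a report at fixed times."""

class TrainItemTimeTriggerCondition(TimeTriggerCondition):
    """Time trigger bound to a train item."""

# The isinstance chain demonstrates the specialization relationship.
standard = TrainItemConditionMonitoringStandard()
print(isinstance(standard, MonitoringStandard))
```

This is the sense in which, as the text puts it, "a large number of design classes represent the concepts specified in our generic DAS conceptual model".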
We used this model to better organize the data used by the system's components. The conceptual model also made it easier to design certain components of the system. As a result, our implementation contains a large number of design classes that represent the concepts specified in our generic DAS conceptual model. Through this industrial example, the development of a particular DAS demonstrates the usefulness of a generic system conceptual model for developing a specific system.

Chinese translation (rendered in English): Data Acquisition Systems, by Txomin Nieva. A data acquisition system, as the name implies, is a product or process used to collect information in order to document or analyze some phenomenon.

Data Acquisition: Translated Foreign Literature

Appendix A: Foreign-Language Source Material

Data Collection

At present, the management of apartments in China's colleges and universities is developing toward standardization and marketization. Electrical accidents have occurred, and although some universities have installed apartment energy metering and control systems, these systems generally suffer from a low level of monitoring, low billing accuracy, evenly split electricity charges, and a low degree of networking. Improving energy-metering monitoring devices has therefore become urgent. This project studies an energy-metering monitoring system for university student dormitories and designs an electricity data collector for student apartments.

Data acquisition, also known as data collection, is the use of a device to collect data from outside a system and feed it in through an interface within the system. Data acquisition technology is widely applied in many fields: cameras and microphones, for example, are both data collection tools. The data being collected are physical quantities, such as temperature, water level, wind speed, and pressure, that have been converted into electrical signals; they can be analog or digital. Sampled collection generally means repeating data collection at the same point at a fixed time interval, called the sampling period. The collected data are mostly instantaneous values, but they can also be characteristic values over a certain period of time. Accurate measurement is the basis of data collection. Measurement methods and detection elements, contact and non-contact alike, are varied; whichever method and components are used, the precondition is that the state of the measured object and the measurement environment are not disturbed, so that the accuracy of the data is guaranteed. The meaning of data collection is very broad, including the continuous acquisition of physical states. In computer-aided drafting, surveying and mapping, and design, the process of digitizing graphics or images may also be called data acquisition; in that case what is collected are geometric quantities (possibly together with physical quantities, such as gray levels) [1].

In today's fast-growing Internet industry, data collection is widely used in the Internet field, and the field of distributed data acquisition has undergone important changes. First, distributed-control applications in intelligent data acquisition systems, at home and abroad, have made great progress. Second, the number of bus-compatible data acquisition plug-in cards is increasing, as is the number of data acquisition systems compatible with personal computers. Various data acquisition machines, domestic and international, have come onto the market, taking data acquisition into a new era.

With their high-speed data processing ability and powerful peripheral interfaces, digital signal processors (DSPs) are used ever more widely in the field of power quality analysis in order to improve real-time performance and reliability. A system centered on a DSP and a microcomputer realizes the collection and analysis of power system signals. This design performs harmonic analysis of the electric power system using an FFT algorithm with windowed interpolation, improving the accuracy of the power quality parameters. In the electrical-parameter acquisition circuit, high-accuracy transformers and an improved software-synchronized sampling method are used to acquire the electrical parameters.

The system consists of two main components, which together complete data acquisition and logic control: the synchronous sampling and A/D conversion circuitry, and the DSP development board (SY-5402EVM), which completes the data processing. The signal passes through a transformer and an op-amp into the A/D converter; the DSP's multi-channel buffered serial port (McBSP) is connected to the A/D converter for data collection and processing. At the same time, a PLL circuit implements synchronous sampling, which effectively prevents measurement errors caused by loss of sampling synchronization.
For the A/D converter, the design selects the AD73360 from Analog Devices (AD). The chip has six analog input channels, each of which outputs 16-bit digital values. The six channels sample and convert simultaneously and transmit in a time-shared manner, effectively reducing the phase errors generated by differences in sampling time. The DSP chip on board the SY-5402EVM is TI's 16-bit fixed-point digital signal processor TMS320VC5402. It offers high cost-performance and provides high-speed, bidirectional, multi-channel buffered serial ports that can interface directly with other serial devices in the system.

The realization of AC sampling: in the field of power quality analysis, the fast Fourier transform (FFT) algorithm is commonly used for harmonic analysis of the electric power system, and the FFT algorithm imposes strict synchronous-sampling requirements on the signal. The influence of non-synchronous sampling is as follows: it is difficult to achieve synchronous sampling and truncation over an integer number of periods in actual measurement, so a spectrum-leakage problem arises that affects measurement accuracy. The signal to be processed is sampled and A/D-converted into a digital sequence of finite length, which is equivalent to multiplying the original signal by a rectangular window to truncate it. Truncation in the time domain causes distortion in the frequency domain, and spectrum leakage occurs. With non-synchronous sampling, the harmonic components of the actual signal cannot fall exactly on the frequency-resolution points; instead they fall between them. But the FFT spectrum is discrete, existing only at the sampling points and nowhere else in the spectrum.
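The spectral leakage described above can be demonstrated numerically. The following sketch (my own illustration, not part of the source design) computes a small DFT of a sine wave sampled over an integer number of cycles (synchronous) and over a fractional number (non-synchronous); the test-tone frequencies are assumed values:

```python
# Leakage demo: a sine at exactly 4 cycles per record lands on one DFT bin;
# at 4.3 cycles per record, energy leaks into neighbouring bins.

import cmath
import math

def dft_mag(x):
    """Normalized DFT magnitudes of a real sequence (direct O(n^2) form)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

N = 64
sync = [math.sin(2 * math.pi * 4.0 * t / N) for t in range(N)]   # 4.0 cycles
leaky = [math.sin(2 * math.pi * 4.3 * t / N) for t in range(N)]  # 4.3 cycles

sync_mag, leaky_mag = dft_mag(sync), dft_mag(leaky)
# Synchronous: all energy in bin 4, neighbouring bins essentially zero.
print(round(sync_mag[4], 3), round(sync_mag[5], 6))
# Non-synchronous: bin 4 attenuated, bin 5 picks up leaked energy.
print(round(leaky_mag[4], 3), round(leaky_mag[5], 3))
```

The attenuated bin-4 value in the non-synchronous case is also the "picket-fence" error the text goes on to describe: the true component sits between two discrete bins, so neither bin reports its true amplitude.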
Thus the FFT cannot directly yield every harmonic component; it can only approximate the accurate value by the value at the neighboring frequency-resolution point, which causes the picket-fence effect error.

The realization of the synchronous sampling signal: according to the different ways in which the sampling signal is provided, synchronous sampling methods are divided into two kinds, software synchronous sampling and hardware synchronous sampling. In the software method, a microcontroller (MCU) or DSP provides the synchronized sampling pulses: the period T of the measured signal is first measured, the sampling interval is ΔT = T/N (where N is the number of sampling points per cycle), the count value of the timer is determined accordingly, and synchronous sampling is realized through timed interrupts. The advantage of this method is that no hardware synchronization circuit is needed and the structure is simple.

This project will eventually be realized on an embedded system, implementing electric power measurement and monitoring, so that the monitoring system meets the requirements of networked, intelligent electricity use. It promotes the development of remote monitoring services and brings a certain degree of social and economic benefit.

For the detection of fundamental reactive current and harmonic current, there are two main approaches: the first is based on instantaneous reactive power theory, and the second is based on adaptive cancellation techniques. In addition, there are other, non-mainstream approaches, such as the fast Fourier transform method and the wavelet transform.

The principle of the method based on instantaneous power theory is: detect the three-phase load currents and the A-phase voltage; after a coordinate transformation, obtain the current values in the two-phase stationary coordinate system and calculate the instantaneous active power and instantaneous reactive power ip and iq; then, after a further coordinate transformation, obtain the three-phase fundamental active currents; finally, subtract the fundamental active currents from the load currents to obtain the fundamental reactive and harmonic currents iah, ibh, and ich.

From: Principles of Data Acquisition

Chinese translation (rendered in English): Data Collection. At present, the management of apartments in China's colleges and universities is developing toward standardization and marketization. While electricity is being made ever more convenient for students, electrical accidents have occurred frequently; although some university apartments have installed energy metering and monitoring systems, these systems generally suffer from a low level of monitoring, low billing accuracy, evenly split electricity charges, a low degree of networking, and many other drawbacks.
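The software synchronous-sampling arithmetic above (ΔT = T/N converted into a timer count) can be sketched as follows; the mains frequency, points per cycle, and timer clock are assumed example values, not figures from the source:

```python
# Software synchronous sampling: measure the signal period T, divide by N
# points per cycle, and convert the interval ΔT = T/N into a timer reload
# count so that a timed interrupt fires once per sample.

def timer_count(signal_freq_hz: float, points_per_cycle: int,
                timer_clock_hz: float) -> int:
    """Timer ticks between samples for synchronous sampling."""
    period = 1.0 / signal_freq_hz          # measured period T
    dt = period / points_per_cycle         # sampling interval ΔT = T/N
    return round(timer_clock_hz * dt)

# 50 Hz mains, 64 samples per cycle, 100 MHz timer clock (assumed values):
print(timer_count(50.0, 64, 100e6))  # 31250 ticks between interrupts
```

In a real design the period would be re-measured continuously so the count tracks drift in the mains frequency; that is exactly the synchronization the hardware PLL approach provides in circuitry instead.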

Essential Chinese and English Literature Databases for Thesis Writing

Thesis essentials: a comprehensive guide to Chinese and English literature databases, useful for a lifetime — all the references you need for writing a paper are here! I. Chinese databases. China's largest database, with fairly comprehensive content.

It indexes more than 5,000 Chinese journals and millions of articles published since 1994, and it is currently being updated at a rate of several thousand articles per day.

Reading full texts requires downloading the CAJ full-text browser from the site's home page.

Full texts are included from 1989 onward.

However, the scan quality is somewhat poor, and the post-1994 coverage is less complete than CNKI's.

Reading full texts requires downloading the VIP (Weipu) full-text browser, about 7 MB.

At present, the following sites offer free searching. 3. Wanfang Data includes the full texts of core journals in PDF format; reading them requires the Acrobat Reader.

II. Foreign-language full-text sites (all foreign databases). The world's second-largest free database (the largest free database has no literature in biology or agriculture); the site offers free searching of some documents and hyperlinks to the cited literature, with free items marked FREE on the left. Elsevier Science is a world-famous Dutch academic publisher that each year publishes a large number of academic books and journals in agricultural and biological sciences, chemistry and chemical engineering, clinical medicine, life sciences, computer science, earth sciences, engineering, energy and technology, environmental science, materials science, aerospace, astronomy, physics, mathematics, economics, business, management, social sciences, and arts and humanities. Its electronic journals now number more than 1,200 (including 499 biomedical titles), most of which are core academic journals indexed by internationally recognized databases such as SCI and EI.

Wiley InterScience is a dynamic online content service created by John Wiley & Sons, launched on the web in 1997.

Through InterScience, Wiley provides users with online access to full-text content under license agreements.

Wiley InterScience carries the full texts of more than 360 journals in science, engineering, medicine, and related fields, more than 30 large professional reference works, 13 laboratory manuals, and more than 500 Wiley academic book titles.

Nearly 200 of these are core journals indexed by SCI.

(Register a username and password; the next time, you can log in directly with them and read full-text articles even without a proxy. Register with Wiley and you can use CP — Current Protocols — for free, which is an absolutely excellent resource.) The Springer publishing group publishes more than 2,000 new books and over 500 journals a year, more than 400 of which are available in electronic editions.

Translated Foreign Literature: Automatic Meter Reading System

English original: Automatic Meter Reading System

The present invention relates to automatic meter reading. More particularly, the present invention relates to an automated system for remotely monitoring a plurality of utility meters on command from a host server via an RF outbound broadcast.

BACKGROUND OF THE INVENTION

Historically, meters measuring electrical energy, water flow, gas usage, and the like have used measurement devices which mechanically monitor the subscriber's usage and display a reading of the usage at the meter itself. Consequently, the reading of these meters has required that human meter readers physically go to the site of the meter and manually document the readings. Clearly, this approach relies very heavily on human intervention and, thus, is very costly, time-consuming, and prone to human error. As the number of meters in a typical utility's service region has increased, in some cases into the millions, human meter reading has become prohibitive in terms of time and money.

In response, various sensing devices have been developed to automatically read utility meters and store the meter data electronically. These sensing devices, usually optical, magnetic, or photoelectric in nature, are coupled to the meter to record the meter data. Additionally, the meters have been equipped with radio frequency (RF) transceivers and control devices which enable the meters to transmit meter data over an RF link when requested to do so. Hand-held devices have been developed which include RF transceivers designed to interface with the meters' RF transceivers. These hand-held devices enable the human meter reader to simply walk by the meter's location, transmit a reading request over an RF link from the hand-held device to the meter's receiving device, wait for a response from the meter's sensing and transmitting device, and then record, manually or electronically, the meter data.

Similarly, meter reading devices have been developed for drive-by reading systems.
Utility vans are equipped with RF transceivers similar to those described in the hand-held example above. The human meter reader drives by the subscriber's location with an automated reading system in the utility van. Again, the meters are commanded to report the meter data, which is received in the van via an RF link, where the data is recorded electronically. While this methodology improves upon the previous approaches, it still requires a significant amount of human intervention and time.

Recently, there has been a concerted effort to accomplish meter reading by installing fixed communication networks that would allow data to flow from the meter all the way to the host system without human intervention. These fixed communication networks can operate using wire line or radio technology.

FIG. 1 shows a conventional fixed communication network for automated meter reading (AMR) technology: a fixed communication network using wire line technology, in which utility meters 10 are connected to a wide area network (WAN) consisting of a suitable communications medium, including ordinary telephone lines or the power lines that feed the meters themselves.

One disadvantage of this approach has been that when a number of meters transmit meter data nearly simultaneously, the inherent latency on the wide area network results in packet collisions, lost data, garbled data, and general degradation of integrity across the system. To compensate for the collisions and interference between data packets destined for the central computer, due to the latency inherent in the WAN, various management schemes have been employed to ensure reliable delivery of the meter data.
However, while this approach may be suitable for small systems, it does not serve the needs of a utility which monitors thousands or even millions of meters.

In an attempt to better manage the traffic in the WAN, approaches have been developed wherein meter control devices similar to those described above have been programmed to transmit meter data in response to commands received from the central computer via the WAN. By limiting the number of meter reading commands transmitted at a given time, the central computer controls the volume of data transmitted simultaneously. However, the additional WAN traffic further aggravated the degradation of data integrity due to various WAN latency effects. Thus, while these approaches may serve to eliminate the need for human meter readers, reliance on the WAN has proven these approaches to be unsatisfactory for servicing the number of meters in the typical service region.

Consequently, radio technology has tended to be the medium of choice due to its higher data rates and independence from the distribution network. The latest evolution of automated meter reading systems makes use of outbound RF communications from a fixed source (usually the utility's central station) directly to RF receivers mounted on the meters. The meters are also equipped with control devices which initiate the transfer of meter data when commanded to do so by the fixed source. The meters respond via a WAN, as in the previous wire-based example. One disadvantage of these approaches is that there is still far too much interference on the WAN when all of the meters respond at about the same time.
Thus, while these approaches reduce some of the WAN traffic (by eliminating outbound commands over the WAN), they are still unable to accommodate the large number of meters being polled.

It is worthy of note that the wire-based systems typically use a single frequency channel and allow the impedance and transfer characteristics of the transformers in the substation to prevent injection equipment in one station from interfering with receivers in another station. This built-in isolation in the network makes time division multiplexing less critical than for radio-based metering systems. Typical fixed-network radio systems also utilize a single channel to read all meters, but these systems do not have a natural blocking point similar to the substation transformer utilized by distribution line carrier (DLC) networks. Also, the latency inherent in the WAN has contributed significantly to the problems associated with time division multiplexing a single-frequency communications system. As a result, such systems require sophisticated management schemes to time division multiplex the channel for optimal utilization.

Therefore, a need exists for a system whereby a utility company can reliably and rapidly read on the order of one million meters in the absence of any significant human intervention. Further, a need exists for such a system to accommodate changes to the network, as well as changes in operating conditions, without significant degradation of performance.

SUMMARY OF THE INVENTION

The present invention fulfills these needs by providing an automated meter reading system having a host server interfaced to a plurality of nodes, each node communicating with a number of utility meters.
In a preferred embodiment, the system has a selection means for selecting a group of noninterfering nodes, and an outbound RF broadcast channel from the host server for communicating with the selected group to initiate the reading of meters that communicate with those nodes and the uploading of the meter data provided by those meters to those nodes. This outbound RF broadcast channel can be an existing channel currently being used for demand side management. In a preferred embodiment, the system also has a two-way communication link over a wide area network between the host server and each of the nodes. In a more preferred embodiment, the host server receives meter data read from at least one million meters in no more than about five minutes.

In yet another preferred embodiment, the system also has a number of gateways, each communicating with a plurality of nodes, grouped to form sets of noninterfering gateways. In this embodiment, the system also has a selection means for selecting one of the sets of noninterfering gateways, and a second outbound RF broadcast channel from the host server for communicating with the selected set to initiate uploading of meter data from the selected set to the host server. This second outbound RF broadcast channel can be an existing channel currently being used for demand side management.

The present invention further fulfills these needs by providing a method for using an outbound RF channel to automatically read meters.
In a preferred embodiment, the method comprises the steps of: defining a number of groups of noninterfering nodes; selecting a first group; broadcasting a read command to each node in the first group; selecting a second group; and broadcasting a read command to each node in the second group.

In another embodiment, the method further comprises the steps of: reading meter data, in response to the read command, from each meter communicating with the node receiving the read command; recording the meter data in a data storage means associated with that node; broadcasting an upload message to each node in the first group; uploading the meter data recorded in the data storage means associated with the nodes of the first group to the host server; broadcasting an upload message to each node in the second group; and uploading the meter data recorded in the data storage means associated with the nodes of the second group to the host server.

In yet another embodiment, at least some of the nodes communicate through one of a number of gateways to the host server. In this embodiment, the method further comprises the steps of: selecting a first set of noninterfering gateways; broadcasting an upload message to each gateway in the first set; uploading the meter data recorded in the data storage means associated with the nodes that communicate with the first set of noninterfering gateways to the host server; selecting a second set of noninterfering gateways; broadcasting an upload message to each gateway in the second set; and uploading the meter data recorded in the data storage means associated with the nodes that communicate with the second set of noninterfering gateways to the host server.

The present invention further fulfills the aforementioned needs by providing an automated meter reading system wherein the host server maintains a topology database in which each meter is assigned to at least one node and each node is assigned to at least one gateway.
The nodes are preferably grouped together to define groups of noninterfering nodes, and the gateways are preferably grouped together to define sets of noninterfering gateways.

In another preferred embodiment, each of the plurality of nodes is adapted to receive RF broadcasts, and the host server sequentially broadcasts a communication over an RF channel to each group of noninterfering nodes to initiate meter reading. In yet another preferred embodiment, each of the plurality of gateways is adapted to receive RF broadcasts, and the host server sequentially broadcasts an upload message over a second RF channel to each set of noninterfering gateways, the gateways uploading the meter data to the host server via a wide area network in response to the upload message.

The present invention further fulfills these needs by providing a method of automatically reading a plurality of meters in an AMR system, comprising the steps of: selecting one of the nodes designated to communicate with each gateway; grouping the selected nodes to form groups of noninterfering nodes; forming sets of gateways such that each gateway within one set has an individual gateway designator; maintaining a topology database that uniquely identifies, for each meter, the set, gateway, and node designators associated with said meter; and reading the meters based on the set, gateway, and node designators.

In another preferred embodiment, the method further comprises the step of initiating meter reading by sequentially broadcasting a read message over an RF channel to each group of noninterfering nodes.
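One way to form the "groups of noninterfering nodes" described above is greedy graph coloring, where an edge joins any two nodes whose RF responses would collide. This is my own construction for illustration; the patent text does not specify an algorithm:

```python
# Greedy grouping: place each node into the first existing group that
# contains no node it interferes with; otherwise start a new group.

def group_noninterfering(nodes, interferes):
    """interferes: set of frozenset({a, b}) pairs that cannot share a group."""
    groups = []  # each group is a set of mutually non-interfering nodes
    for node in nodes:
        for group in groups:
            if all(frozenset({node, other}) not in interferes
                   for other in group):
                group.add(node)
                break
        else:
            groups.append({node})  # no compatible group: open a new one
    return groups

nodes = ["n1", "n2", "n3", "n4"]
interferes = {frozenset({"n1", "n2"}), frozenset({"n3", "n4"})}
groups = group_noninterfering(nodes, interferes)
print(groups)  # n1/n2 land in different groups, as do n3/n4
```

The host server could then broadcast the read command to each group in turn, which is exactly the sequential per-group broadcast the embodiment describes.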
In yet another preferred embodiment, the method further comprises the step of initiating the uploading of meter data by sequentially broadcasting an upload message over the RF channel to each group of noninterfering nodes.

The present invention will be better understood, and its numerous objects and advantages will become apparent, by reference to the following detailed description of the invention when taken in conjunction with the following drawings, in which:

FIG. 1 shows a conventional fixed communication network for automated meter reading technology;
FIG. 2 shows a block diagram of an automated meter reading system according to the present invention;
FIG. 3 shows a block diagram of an automated meter reading system in which an optional gateway is included according to the present invention;
FIG. 4 shows a network of nodes and gateways exemplifying a group of noninterfering nodes;
FIG. 5 shows communications traffic within one set of gateway service regions in an automated meter reading system;
FIG. 6 shows the process by which a host server commands groups of noninterfering nodes to read meters and by which nodes read and store meter data in accordance with a preferred embodiment of the present invention;
FIG. 7 shows the process by which a host server commands nodes and gateways to upload meter data simultaneously in accordance with a preferred embodiment of the present invention;
FIG. 8 shows the process by which a host server commands nodes and gateways to upload meter data by using groups of noninterfering gateways in accordance with a preferred embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

FIG. 2 shows a diagram of a preferred embodiment of an automated meter reading system which uses broadcast technology to read utility meters in accordance with the present invention. The system includes a host server, a wide area network (WAN), a plurality of optional gateway interface (OGI) nodes, and a plurality of utility meters.

Data Acquisition Systems: Chinese-English Translated Foreign Literature

Chinese-English Translation (the document contains the English original and a Chinese translation)

Data Acquisition Systems

Data acquisition systems are used to acquire process operating data and store it on secondary storage devices for later analysis. Many of these systems acquire data at very high speeds, leaving very little computer time to carry out any necessary, or desirable, data manipulation or reduction. All the data are stored on secondary storage devices and manipulated subsequently to derive the variables of interest. It is very often necessary to design special-purpose data acquisition systems and interfaces to acquire the high-speed process data, and this special-purpose design can be an expensive proposition.

Powerful mini- and mainframe computers are used to combine data acquisition with other functions, such as comparisons between the actual output and the desirable output values, and then deciding on the control action which must be taken to ensure that the output variables lie within preset limits. The computing power required depends upon the type of process control system implemented. The software requirements for carrying out proportional, ratio, or three-term control of process variables are relatively trivial, and microcomputers can be used to implement such process control systems. It would not be possible to use many of the currently available microcomputers for the implementation of high-speed adaptive control systems, which require the use of suitable process models and considerable online manipulation of data.

Microcomputer-based data loggers are used to carry out intermediate functions such as data acquisition at comparatively low speeds, simple mathematical manipulation of raw data, and some forms of data reduction. The first generation of data loggers, without any programmable computing facilities, was used simply for slow-speed data acquisition from up to one hundred channels. All the acquired data could be punched out on paper tape or printed for subsequent analysis.
Such hardwired data loggers are being replaced by a new generation of data loggers which incorporate microcomputers and can be programmed by the user. They offer an extremely good method of collecting process data using standardized interfaces and subsequently performing the necessary manipulations to provide the information of interest to the process operator. The data acquired can be analyzed to establish correlations, if any, between process variables and to develop the mathematical models necessary for adaptive and optimal process control.

The data acquisition function carried out by data loggers varies from one system to another. Simple data logging systems acquire data from a few channels, while complex systems can receive data from hundreds, or even thousands, of input channels distributed around one or more processes. Rudimentary data loggers scan the selected channels, connected to sensors or transducers, in a sequential manner, and the data are recorded in a digital format. A data logger can be dedicated, in the sense that it can only collect data from particular types of sensors and transducers. It is best to use a non-dedicated data logger, since any transducer or sensor can then be connected to the channels via suitable interface circuitry. This facility requires the use of appropriate signal-conditioning modules.

Microcomputer-controlled data acquisition facilitates the scanning of a large number of sensors. The scanning rate depends upon the signal dynamics, which means that some channels must be scanned at very high speeds in order to avoid aliasing errors, while there is very little loss of information in scanning other channels at slower speeds. In some data logging applications the faster channels require sampling at speeds of up to 100 times per second, while slow channels can be sampled once every five minutes.
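The mixed-rate scanning just described (fast channels at 100 samples per second, slow channels once every five minutes) can be driven from a single tick counter. The channel names and rates below are assumed examples, not from the source:

```python
# Multirate scan scheduler: each channel has a sampling interval expressed
# in ticks of a single base scan clock; a channel is sampled whenever its
# interval divides the current tick.

TICK_HZ = 100  # base scan clock: 100 ticks per second

channels = {
    "vibration": 1,                    # every tick -> 100 samples/s
    "temperature": 5 * 60 * TICK_HZ,   # every 5 minutes -> 30000 ticks
}

def due_channels(tick: int):
    """Names of channels due for sampling on this tick."""
    return [name for name, interval in channels.items()
            if tick % interval == 0]

print(due_channels(0))      # both channels are sampled on the first tick
print(due_channels(1))      # only the fast channel
print(due_channels(30000))  # the slow channel comes due again after 5 min
```

This is the behavior that distinguishes a microcomputer-based logger from a hardwired one: the hardwired logger would be forced to sample every channel at the fastest rate, accumulating the unnecessary data the next paragraph describes.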
The conventional hardwired, non-programmable data loggers sample all the channels in a sequential manner, and the sampling frequency of all the channels must be the same. This procedure results in the accumulation of very large amounts of data, some of which is unnecessary, and also slows down the overall effective sampling frequency. Microcomputer-based data loggers can be used to scan some fast channels at a higher frequency than other slow-speed channels.

The vast majority of the user-programmable data loggers can be used to scan up to 1000 analog and 1000 digital input channels. A small number of data loggers, with a higher degree of sophistication, are suitable for acquiring data from up to 15,000 analog and digital channels. The data from digital channels can be in the form of Transistor-Transistor Logic or contact-closure signals. Analog data must be converted into digital format before it is recorded and requires the use of suitable analog-to-digital converters (ADC). The characteristics of the ADC will define the resolution that can be achieved and the rate at which the various channels can be sampled. An increase in the number of bits used in the ADC improves the resolution capability. Successive-approximation ADCs are faster than integrating ADCs. Many microcomputer-controlled data loggers include a facility to program the channel scanning rates. Typical scanning rates vary from 2 channels per second to 10,000 channels per second.

Most data loggers have a resolution capability of ±0.01% or better, and it is also possible to achieve a resolution of 1 microvolt. The resolution capability, in absolute terms, also depends upon the range of input signals. Standard input signal ranges are 0-10 volt, 0-50 volt and 0-100 volt. The lowest measurable signal varies from 1 microvolt to 50 microvolt. A higher degree of recording accuracy can be achieved by using modules which accept data in small, selectable ranges.
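The link between ADC word length and resolution described above can be made concrete with a short sketch; the 0-10 volt range is one of the standard input ranges just mentioned, and the helper name is ours, not the text's:

```python
# Sketch: absolute resolution (one least-significant bit) of an ideal
# unipolar ADC as a function of word length.

def lsb_volts(full_scale_volts, bits):
    """Smallest voltage step an ideal unipolar ADC of this width resolves."""
    return full_scale_volts / (2 ** bits)

# Each extra bit halves the step size on a fixed 0-10 V range:
for bits in (8, 12, 16):
    print(bits, lsb_volts(10.0, bits))
# 8 bits -> ~39 mV, 12 bits -> ~2.4 mV, 16 bits -> ~0.15 mV
```

This is why the text notes that adding bits improves resolution, and why small selectable input ranges (which shrink `full_scale_volts`) improve absolute accuracy for low-level signals.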
An alternative is the auto-ranging facility available on some data loggers.

The accuracy with which the data are acquired and logged on the appropriate storage device is extremely important. It is therefore necessary that the data acquisition module should be able to reject common-mode noise and common-mode voltage. Typical common-mode noise rejection capabilities lie in the range 110 dB to 150 dB. A decibel (dB) is a term which defines the ratio of the power levels of two signals. Thus if the reference and actual signals have power levels of Nr and Na respectively, they will have a ratio of n decibels, where

n = 10 log10(Na/Nr)

Protection against maximum common-mode voltages of 200 to 500 volt is available on typical microcomputer-based data loggers.

The voltage input to an individual data logger channel is measured, scaled and linearised before any further data manipulations or comparisons are carried out.

In many situations, it becomes necessary to alter the frequency at which particular channels are sampled, depending upon the values of data signals received from a particular input sensor. Thus a channel might normally be sampled once every 10 minutes. If, however, the sensor signals approach the alarm limit, then it is obviously desirable to sample that channel once every minute or even faster, so that the operators can be informed, thereby avoiding any catastrophes. Microcomputer-controlled intelligent data loggers may be programmed to alter the sampling frequencies depending upon the values of process signals. Other data loggers include self-scanning modules which can initiate sampling.

The conventional hardwired data loggers, without any programming facilities, simply record the instantaneous values of transducer outputs at a regular sampling interval. This raw data often means very little to the typical user.
To be meaningful, this data must be linearised and scaled, using a calibration curve, in order to determine the real value of the variable in appropriate engineering units. Prior to the availability of programmable data loggers, this function was usually carried out in the off-line mode on a mini- or mainframe computer. The raw data values had to be punched out on paper tape, in binary or octal code, to be input subsequently to the computer used for analysis purposes and converted to engineering units. Paper tape punches are slow-speed mechanical devices which reduce the speed at which channels can be scanned. An alternative was to print out the raw data values, which further reduced the data scanning rate. It was not possible to carry out any limit comparisons or provide any alarm information. Every single value acquired by the data logger had to be recorded even though it might not serve any useful purpose during subsequent analysis; many data values only need recording when they lie outside the preset low and high limits.

If the analog data must be transmitted over any distance, differences in ground potential between the signal source and final location can add noise in the interface design. In order to separate common-mode interference from the signal to be recorded or processed, devices designed for this purpose, such as instrumentation amplifiers, may be used. An instrumentation amplifier is characterized by good common-mode-rejection capability, a high input impedance, low drift, adjustable gain, and greater cost than operational amplifiers. They range from monolithic ICs to potted modules, and larger rack-mounted modules with manual scaling and null adjustments. When a very high common-mode voltage is present or the need for extremely low common-mode leakage current exists (as in many medical-electronics applications), an isolation amplifier is required.
Isolation amplifiers may use optical or transformer isolation.

Analog function circuits are special-purpose circuits that are used for a variety of signal-conditioning operations on signals which are in analog form. When their accuracy is adequate, they can relieve the microprocessor of time-consuming software and computations. Among the typical operations performed are multiplication, division, powers, roots, nonlinear functions such as for linearizing transducers, rms measurements, computing vector sums, integration and differentiation, and current-to-voltage or voltage-to-current conversion. Many of these operations are available in purchasable devices such as multiplier/dividers, log/antilog amplifiers, and others.

When data from a number of independent signal sources must be processed by the same microcomputer or communications channel, a multiplexer is used to channel the input signals into the A/D converter. Multiplexers are also used in reverse, as when a converter must distribute analog information to many different channels. The multiplexer is fed by a D/A converter which continually refreshes the output channels with new information.

In many systems, the analog signal varies during the time that the converter takes to digitize an input signal. The changes in this signal level during the conversion process can result in errors, since the conversion period can be completed some time after the conversion command. The final value never represents the data at the instant when the conversion command is transmitted. Sample-hold circuits are used to make an acquisition of the varying analog signal and to hold this signal for the duration of the conversion process.
Sample-hold circuits are common in multichannel distribution systems, where they allow each channel to receive and hold the signal level.

In order to get the data in digital form as rapidly and as accurately as possible, we must use an analog/digital (A/D) converter, which might be a shaft encoder, a small module with digital outputs, or a high-resolution, high-speed panel instrument. These devices, which range from IC chips to rack-mounted instruments, convert analog input data, usually voltage, into an equivalent digital form. The characteristics of A/D converters include absolute and relative accuracy, linearity, monotonicity, resolution, conversion speed, and stability. A choice of input ranges, output codes, and other features is available. The successive-approximation technique is popular for a large number of applications, with the most popular alternatives being the counter-comparator types and dual-ramp approaches. The dual-ramp has been widely used in digital voltmeters.

D/A converters convert a digital format into an equivalent analog representation. The basic converter consists of a circuit of weighted resistance values or ratios, each controlled by a particular level or weight of digital input data, which develops the output voltage or current in accordance with the digital input code. A special class of D/A converter exists which has the capability of handling variable reference sources. These devices are the multiplying DACs. Their output value is the product of the number represented by the digital input code and the analog reference voltage, which may vary from full scale to zero, and in some cases, to negative values.

Component Selection Criteria

In the past decade, data-acquisition hardware has changed radically due to advances in semiconductors, and prices have come down too; what have not changed, however, are the fundamental system problems confronting the designer.
Signals may be obscured by noise, RFI, ground loops, power-line pickup, and transients coupled into signal lines from machinery. Separating the signals from these effects becomes a matter for concern.

Data-acquisition systems may be separated into two basic categories: (1) those suited to favorable environments like laboratories, and (2) those required for hostile environments such as factories, vehicles, and military installations. The latter group includes industrial process control systems where temperature information may be gathered by sensors on tanks, boilers, vats, or pipelines that may be spread over miles of facilities. That data may then be sent to a central processor to provide real-time process control. The digital control of steel mills, automated chemical production, and machine tools is carried out in this kind of hostile environment. The vulnerability of the data signals leads to the requirement for isolation and other techniques.

At the other end of the spectrum are laboratory applications, such as test systems for gathering information on gas chromatographs, mass spectrometers, and other sophisticated instruments, where the designer's problems are concerned with the performing of sensitive measurements under favorable conditions rather than with the problem of protecting the integrity of collected data under hostile conditions.

Systems in hostile environments might require components for wide temperature ranges, shielding, common-mode noise reduction, conversion at an early stage, redundant circuits for critical measurements, and preprocessing of the digital data to test its reliability. Laboratory systems, on the other hand, will have narrower temperature ranges and less ambient noise. But the higher accuracies require sensitive devices, and a major effort may be necessary to obtain the required signal/noise ratios.

The choice of configuration and components in data-acquisition design depends on consideration of a number of factors:
1. Resolution and accuracy required in the final format.
2. Number of analog sensors to be monitored.
3. Sampling rate desired.
4. Signal-conditioning requirement due to environment and accuracy.
5. Cost trade-offs.

Some of the choices for a basic data-acquisition configuration include:

1. Single-channel techniques.
A. Direct conversion.
B. Preamplification and direct conversion.
C. Sample-hold and conversion.
D. Preamplification, sample-hold, and conversion.
E. Preamplification, signal-conditioning, and direct conversion.
F. Preamplification, signal-conditioning, sample-hold, and conversion.

2. Multichannel techniques.
A. Multiplexing the outputs of single-channel converters.
B. Multiplexing the outputs of sample-holds.
C. Multiplexing the inputs of sample-holds.
D. Multiplexing low-level data.
E. More than one tier of multiplexers.

Signal-conditioning may include:
A. Ratiometric conversion techniques.
B. Range biasing.
C. Logarithmic compression.
D. Analog filtering.
E. Integrating converters.
F. Digital data processing.

We shall consider these techniques later, but first we will examine some of the components used in these data-acquisition system configurations.

Multiplexers

When more than one channel requires analog-to-digital conversion, it is necessary to use time-division multiplexing in order to connect the analog inputs to a single converter, or to provide a converter for each input and then combine the converter outputs by digital multiplexing.

Analog Multiplexers

Analog multiplexer circuits allow the time-sharing of analog-to-digital converters between a number of analog information channels. An analog multiplexer consists of a group of switches arranged with inputs connected to the individual analog channels and outputs connected in common (as shown in Fig. 1). The switches may be addressed by a digital input code.

Many alternative analog switches are available in electromechanical and solid-state forms.
Electromechanical switch types include relays, stepper switches, crossbar switches, mercury-wetted switches, and dry-reed relay switches. The best switching speed is provided by reed relays (about 1 ms). The mechanical switches provide high dc isolation resistance, low contact resistance, and the capacity to handle voltages up to 1 kV, and they are usually inexpensive. Multiplexers using mechanical switches are suited to low-speed applications as well as those having high resolution requirements. They interface well with the slower A/D converters, like the integrating dual-slope types. Mechanical switches have a finite life, however, usually expressed in number of operations. A reed relay might have a life of 10^9 operations, which would allow a 3-year life at 10 operations/second.

Solid-state switch devices are capable of operation at 30 ns, and they have a life which exceeds most equipment requirements. Field-effect transistors (FETs) are used in most multiplexers. They have superseded bipolar transistors, which can introduce large voltage offsets when used as switches. FET devices have a leakage from drain to source in the off state and a leakage from gate or substrate to drain and source in both the on and off states. Gate leakage in MOS devices is small compared to other sources of leakage. When the device has a Zener-diode-protected gate, an additional leakage path exists between the gate and source.

Enhancement-mode MOSFETs have the advantage that the switch turns off when power is removed from the MUX. Junction-FET multiplexers always turn on with the power off.

A more recent development, the CMOS (complementary MOS) switch, has the advantage of being able to multiplex voltages up to and including the supply voltages. A ±10-V signal can be handled with a ±10-V supply.

Trade-off Considerations for the Designer

Analog multiplexing has been the favored technique for achieving lowest system cost.
The decreasing cost of A/D converters and the availability of low-cost digital integrated circuits specifically designed for multiplexing provide an alternative with advantages for some applications. A decision on the technique to use for a given system will hinge on trade-offs between the following factors:

1. Resolution. The cost of A/D converters rises steeply as the resolution increases, due to the cost of precision elements. At the 8-bit level, the per-channel cost of an analog multiplexer may be a considerable proportion of the cost of a converter. At resolutions above 12 bits, the reverse is true, and analog multiplexing tends to be more economical.

2. Number of channels. This controls the size of the multiplexer required and the amount of wiring and interconnections. Digital multiplexing onto a common data bus reduces wiring to a minimum in many cases. Analog multiplexing is suited for 8 to 256 channels; beyond this number, the technique is unwieldy and analog errors become difficult to minimize. Analog and digital multiplexing are often combined in very large systems.

3. Speed of measurement, or throughput. High-speed A/D converters can add considerable cost to the system. If analog multiplexing demands a high-speed converter to achieve the desired sample rate, a slower converter for each channel with digital multiplexing can be less costly.

4. Signal level and conditioning. Wide dynamic ranges between channels can be difficult with analog multiplexing. Signals less than 1 V generally require differential low-level analog multiplexing, which is expensive, with programmable-gain amplifiers after the MUX operation. The alternative of fixed-gain converters on each channel, with signal-conditioning designed for the channel requirement and with digital multiplexing, may be more efficient.

5. Physical location of measurement points.
Analog multiplexing is suited for making measurements at distances up to a few hundred feet from the converter, since analog lines may suffer from losses, transmission-line reflections, and interference. Lines may range from twisted wire pairs to multiconductor shielded cable, depending on signal levels, distance, and noise environments. Digital multiplexing is operable to thousands of miles, with the proper transmission equipment, for digital transmission systems can offer the powerful noise-rejection characteristics that are required for long-distance transmission.

Digital Multiplexing

For systems with small numbers of channels, medium-scale integrated digital multiplexers are available in TTL and MOS logic families. The 74151 is a typical example. Eight of these integrated circuits can be used to multiplex eight A/D converters of 8-bit resolution onto a common data bus.

This digital multiplexing example offers little advantage in wiring economy, but it is lowest in cost, and the high switching speed allows operation at sampling rates much faster than analog multiplexers. The A/D converters are required only to keep up with the channel sample rate, and not with the commutating rate. When large numbers of A/D converters are multiplexed, the data-bus technique reduces system interconnections. This alone may in many cases justify multiple A/D converters. Data can be bussed onto the lines in bit-parallel or bit-serial format, as many converters have both serial and parallel outputs. A variety of devices can be used to drive the bus, from open-collector and tristate TTL gates to line drivers and optoelectronic isolators. Channel-selection decoders can be built from 1-of-16 decoders to the required size. This technique also allows additional reliability, in that a failure of one A/D does not affect the other channels. An important requirement is that the multiplexer operate without introducing unacceptable errors at the sample-rate speed.
For a digital MUX system, one can determine the speed from propagation delays and the time required to charge the bus capacitance.

Analog multiplexers can be more difficult to characterize. Their speed is a function not only of internal parameters but also of external parameters such as channel source impedance, stray capacitance, the number of channels, and the circuit layout. The user must be aware of the limiting parameters in the system to judge their effect on performance.

The nonideal transmission and open-circuit characteristics of analog multiplexers can introduce static and dynamic errors into the signal path. These errors include leakage through switches, coupling of control signals into the analog path, and interactions with sources and following amplifiers. Moreover, the circuit layout can compound these effects.

Since analog multiplexers may be connected directly to sources which may have little overload capacity or poor settling after overloads, the switches should have a break-before-make action to prevent the possibility of shorting channels together. It may be necessary to avoid shorted channels when power is removed, and a channels-off-with-power-down characteristic is desirable. In addition to the channel-addressing lines, which are normally binary-coded, it is useful to have inhibit or enable lines to turn all switches off regardless of the channel being addressed. This simplifies the external logic necessary to cascade multiplexers and can also be useful in certain modes of channel addressing. Another requirement for both analog and digital multiplexers is the tolerance of line transients and overload conditions, and the ability to absorb the transient energy and recover without damage.
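To close the loop on the decibel definition given earlier, n = 10 log10(Na/Nr), the common-mode rejection figures quoted in the article are easy to relate back to raw power ratios; the sample values below are chosen from the quoted 110-150 dB range:

```python
import math

def power_ratio_db(p_actual, p_reference):
    """Decibel ratio of two power levels: n = 10 * log10(Na / Nr)."""
    return 10.0 * math.log10(p_actual / p_reference)

# A 120 dB common-mode rejection corresponds to attenuating the
# interfering power by a factor of 10**12.
print(power_ratio_db(10.0 ** 12, 1.0))  # 120.0
```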

Sample format for the English translation, class of 2011 graduation projects


Undergraduate Graduation Project (Thesis) Foreign-Literature Translation

Student name:    School (department): School of Computer Science    Class: Communication Engineering 0701    Supervisor:    Completion date: March 15, 2011
Document title: 单片机 (Single Chip Microcomputer)    Author: Jessica    Pages: 9-23    Publication date (DOI 10.1007/s00253-008-1657-1)    Publisher: Foreign Language Teaching and Research Press

Translated text:

1. Introduction to the single-chip microcomputer

The single-chip microcomputer is also known as a microcontroller (Microcontroller Unit), commonly abbreviated MCU; it was first used in the field of industrial control.

The single-chip microcomputer evolved from dedicated processors that contained only a CPU on the chip.

The earliest design idea was to integrate a large number of peripherals and the CPU on a single chip, making the computer system smaller and easier to embed in complex control equipment with strict size requirements.

Intel's Z80 was the first processor designed along these lines; from then on, single-chip microcomputers and dedicated processors developed along separate paths.

Early single-chip microcomputers were all 8-bit or 4-bit devices.

The most successful among them was Intel's 8031, which won wide praise for being simple, reliable, and reasonably capable.

The MCS-51 family of microcontrollers was subsequently developed from the 8031.

Microcontroller systems based on this family are still in wide use today.

As the demands of industrial control rose, 16-bit microcontrollers appeared, but their unattractive price/performance ratio kept them from wide adoption.

After the 1990s, with the boom in consumer electronics, microcontroller technology improved enormously.

With the wide use of the Intel i960 series and especially the later ARM series, 32-bit microcontrollers quickly displaced 16-bit parts at the high end and entered the mainstream market.

Meanwhile the performance of traditional 8-bit microcontrollers has also advanced rapidly, with processing power hundreds of times greater than in the 1980s.

Today, high-end 32-bit microcontrollers run at clock rates above 300 MHz, with performance approaching the dedicated processors of the mid-1990s, while ordinary models leave the factory at prices as low as one US dollar and even the highest-end [1] models cost only about ten dollars.

Contemporary microcontroller systems are no longer developed and used only on bare metal; a large number of dedicated embedded operating systems are in wide use across the full range of microcontrollers.

High-end microcontrollers that serve as the core processors of handheld computers and mobile phones can even run dedicated Windows and Linux operating systems directly.

Measurement & Control Technology and Instrumentation / Automation: foreign-literature translation (English source)


Sources:
Source 1: Virtual instrument based on serial communication and data acquisition system of management.
Source 2: LabVIEW serial communication based Frequency Control Monitoring System.
Attachments for Source 1: 1. translated text; 2. original text.

Attachments for Source 2: 1. translated text; 2. original text.

Attachment: translated text of Source 1. In automation control and intelligent instrumentation, microcontrollers are used more and more widely. Because a microcontroller's computational capability is limited, a host computer system is often needed, so remote communication between the microcontroller and a PC is of real practical significance; the key to this communication is the mutual exchange of data.

The on-chip serial port of the 8051-family microcontroller supports communication and can serve as the communication interface. Using this serial port to communicate with the PC's COM1 or COM2 serial port, the data acquired by the microcontroller are transferred to the PC, where a high-level or database language performs the more complex processing, such as sorting and statistics, to meet the needs of practical applications.
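The text leaves the transfer format to the application, so as a hedged sketch, here is one minimal frame layout the microcontroller side and the PC side could agree on: a start byte, a length byte, the payload, and a modulo-256 checksum. The format, the start-byte value, and the function names are illustrative, not part of any standard or of the original text:

```python
# Illustrative frame format for MCU -> PC transfer of sampled bytes.
START = 0xAA  # arbitrary start-of-frame marker (an assumption)

def build_frame(payload):
    """Wrap a list of sample bytes in [START, length, payload, checksum]."""
    body = bytes([len(payload)]) + bytes(payload)
    checksum = sum(body) % 256
    return bytes([START]) + body + bytes([checksum])

def parse_frame(frame):
    """Validate a frame and return the payload as a list of ints."""
    if frame[0] != START:
        raise ValueError("bad start byte")
    length = frame[1]
    if sum(frame[1:2 + length]) % 256 != frame[2 + length]:
        raise ValueError("checksum mismatch")
    return list(frame[2:2 + length])

samples = [12, 34, 56]
assert parse_frame(build_frame(samples)) == samples
```

The checksum lets the PC side detect bytes corrupted on the serial line and request retransmission, which matters at the modest baud rates typical of 8051-class parts.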

Software design: after initialization, the data channels are opened to sample the upstream and downstream signals, a correlation computation is performed to find the peak value R of the correlation function, and it is checked whether this value is a true peak, so that the delay τ is determined correctly and the correct flow rate is obtained.

Since a single correlation computation takes very little time, counter-based timing control is used.

When the PC and the microcontroller communicate, each side first initializes its own serial port, selects the port's operating mode, and sets the baud rate, data length, and so on; only then does data transfer begin. This work is done in software, so corresponding communication software must be written for both the PC and the microcontroller.

Under DOS, serial communication is generally implemented with interrupts, and the user has full control of the communication port.

Under Windows, however, the system forbids applications from operating on the hardware directly.

Windows instead provides a complete set of API functions through which programmers interface with the communication hardware.

The communication functions are interrupt-driven: when sending, data are first placed in a buffer and transmitted once the serial port is ready; incoming data immediately raise an interrupt, causing Windows to receive them and store them in a buffer for later reading.
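The buffering scheme just described can be sketched as a small receive buffer: one side (standing in for the interrupt handler) appends incoming bytes, and the application drains them later. This is an illustration of the idea only, not Windows API code; the bounded buffer here silently drops the oldest byte on overflow, which is one possible policy among several:

```python
from collections import deque

class RxBuffer:
    """Toy model of an interrupt-fed serial receive buffer."""

    def __init__(self, capacity=256):
        # deque with maxlen discards the oldest element when full.
        self._buf = deque(maxlen=capacity)

    def on_byte_received(self, byte):
        """Called from the 'interrupt' side for each incoming byte."""
        self._buf.append(byte)

    def read_all(self):
        """Called from the application side; drains the buffer."""
        data = bytes(self._buf)
        self._buf.clear()
        return data

rx = RxBuffer()
for b in b"hello":
    rx.on_byte_received(b)
print(rx.read_all())  # b'hello'
```

A real driver would also guard the buffer against concurrent access and might signal an overrun error instead of dropping data.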

OpenLAB Chromatography Data System (CDS) EZChrom User Guide


Agilent OpenLAB Chromatography Data System (CDS) EZChrom User Guide. Notices: © Agilent Technologies, Inc. 2010, 2011, 2012. No part of this manual may be reproduced in any form or by any means (including electronic storage, modification, or translation into a foreign language) without prior agreement and written consent from Agilent Technologies, Inc., as governed by United States and international copyright laws.

Manual part number: M8201-97012. Edition: Third edition, June 2012. Printed in the USA. Agilent Technologies, Inc., 5301 Stevens Creek Boulevard, Santa Clara, CA 95051. Warranty: The material in this manual is provided "as is" and is subject to change without notice in future editions.

Further, to the maximum extent permitted by applicable law, Agilent disclaims all warranties, either express or implied, with regard to this manual and any information contained herein, including but not limited to the implied warranties of merchantability and fitness for a particular purpose.

Agilent shall not be liable for errors in this manual, or for incidental or consequential damages arising from the modification, use, or performance of this manual or of any information contained herein.

Should Agilent and the user have a separate written agreement whose warranty terms covering the material in this manual conflict with these terms, the warranty terms of the separate agreement shall control.

Technology licenses: The hardware and/or software described in this manual are furnished under a license, and their use or copying must comply with the terms of that license.

Restricted rights legend: U.S. Government restricted rights.

Software and technical data rights granted to the federal government include only those rights customarily provided to end-user customers.

Agilent provides this customary commercial license in software and technical data pursuant to FAR 12.211 (Technical Data) and 12.212 (Computer Software) and, for the Department of Defense, DFARS 252.227-7015 (Technical Data - Commercial Items) and DFARS 227.7202-3 (Rights in Commercial Computer Software or Computer Software Documentation).

Safety notices: A CAUTION notice denotes a hazard.

It calls attention to an operating procedure, practice, or the like that, if not correctly performed or adhered to, could result in damage to the product or loss of important data.

Foreign-literature translation: Single-Chip Data Acquisition Interface


Appendix 2: Original text and translation

Single-Chip Data Acquisition Interface

Gintaras Paukstaitis

Abstract

This paper presents a single-chip data acquisition interface. It is intended for the input of from one to eight analog signals into the RAM of IBM PC or compatible computers. The maximal signal sampling rate is 80 kHz. The interface has programmable gain for the analog signals, as well as a programmable sampling rate and number of channels. Some functional units were designed using synthesis from VHDL with the help of Synopsys. The interface was based on the 1 μm CMOS process from ATMEL-ES2. It was verified using the kit for DFWII of Cadence. The Place & Route tools from Cadence have been used to obtain the circuit layout.

Table of contents
Abstract
1. Introduction
2. Steps of Designing
3. Analog Part
4. Digital Part
5. Interface Testing
6. Creation of Layout
7. Technical Data
8. Conclusions
9. Acknowledgements
10. References

1. Introduction

Nowadays units with VLSI are widely used in the world. This is really important for miniaturisation. Redesigning a circuit built from several ICs into VLSI reduces its area many times, and VLSI itself becomes relatively cheaper. Units with VLSI also suffer less damage and use less power. The use of CAD makes complicated IC design easier and faster. Cheaper computers give an opportunity to get servers not only to big companies and institutions of education but also to medium firms. This stride encouraged the creation of such complex circuit-design programs as Synopsys and Cadence. Using them, it is possible to design circuits suitable for fabrication or layout creation. Synopsys simulates functions described in VHDL and from this description synthesises circuits which can be built from the elements of the Cadence libraries. It then remains to transfer them to Cadence and to create the layout of the IC. The steps of Cadence design are illustrated in Fig. 1.

The single-chip data acquisition interface was designed according to the basic circuit of a data acquisition board.
The board was designed by the Department of Applied Electronics at Kaunas University of Technology and is used in medicine. The created single-chip interface has better electrical parameters, which is why it could be used more widely. The prototype board was designed in the TTL element base; the single-chip interface is designed in the CMOS element base. While converting the circuit there were no complicated problems. The delay of CMOS elements is less than that of TTL, so the signal delays did not grow and did not change the original operation of the circuit. The ISA bus signals of the IBM PC are at TTL logic levels, therefore the interface should be connected through buffers to reconcile the TTL and CMOS logic levels.

2. Steps of Designing

The circuit was designed according to a basic circuit, so the Semi-Custom Design method was used. The flow chart of the interface is shown in Fig. 2.

It was necessary to use 8 operational amplifiers (OA) to fit the eight analog signals to the A/D converter's limits. Each OA has a programmable gain, which in many cases lets an analog signal be fed to the A/D converters without any additional amplifiers. The gain of every OA is set separately by the Gain Control Block. Two converters change the analog signals to digital ones; each converter has 4 switchable inputs. The converters work by bit-by-bit comparison (successive approximation). The Channel Control Block establishes the order of signal switching.

The programmable interval timer establishes the frequency of signal switching as well as the data sampling rate. It has three counters, which work in frequency-dividing and one-shot modes. The dividing coefficients of the timer are set through the Internal Bus. The length of a dividing coefficient is 16 bits. The timer divides an 894 kHz signal, therefore the minimal interface sampling rate is F_MIN = 894 kHz / 2^16 ≈ 14 Hz. The maximal sampling rate is limited by the speed characteristic of the A/D converters and equals F_MAX = 80 kHz.
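The timer arithmetic above can be checked with a short sketch; the helper function and its name are illustrative, not part of the paper:

```python
# Sketch: choosing a 16-bit dividing coefficient for the interval timer.
CLOCK_HZ = 894_000   # timer input clock quoted in the paper
MAX_COEFF = 2 ** 16  # largest value of a 16-bit dividing coefficient

def divider_for(sample_rate_hz):
    """Dividing coefficient that best approximates the requested rate."""
    coeff = round(CLOCK_HZ / sample_rate_hz)
    if not 1 <= coeff <= MAX_COEFF:
        raise ValueError("rate outside the timer's range")
    return coeff

# The minimal rate uses the full 16-bit coefficient:
print(CLOCK_HZ / MAX_COEFF)  # about 13.6 Hz, the ~14 Hz quoted in the text
print(divider_for(1000))     # 894
```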
The gain of the OAs, the sampling rate and the number of switched channels are set by sending control words to the ports established by the Address Decoding Block. Data are fed to the PC in single Direct Memory Access (DMA) mode. The DMA controller handles the protocol on the PC side; the DMA Control Block is responsible on the interface side. The Clock Signal Block provides a clock frequency of 1.8 MHz for the converters and 0.9 MHz for the timer.

The control logic consists of simple gates and flip-flops, so gates and flip-flops from the ES2 1 μm CMOS element library were used to design it. The 1 μm CMOS ES2 technology library was chosen because of its wide choice of analog elements for Semi-Custom Design. But the ES2 library lacks some functional elements which were used in the circuit, for example the Intel 8253 programmable interval timer, a binary counter and an address decoder. Therefore these elements were described in VHDL. Using elements of the 1 μm CMOS ES2 technology library, the necessary circuits were synthesised with the assistance of Synopsys. The EDIF of the circuits was transported to Cadence. Connected with the remaining control logic and with the analog signal-converting part, they made a fully functioning interface. The stages of designing are shown.

3. Analog Part

Alternating analog voltage signals are changed to unipolar 0 to +5 V signals in the analog part of the interface. As the converter is made of CMOS elements and its power supply is 0 and +5 V, it can convert only signals within the 0 to +5 V limits. In order to reduce conversion error, the converters are given an analog signal which should be as close as possible to those limits. The programmable OA amplifies the analog signals. It has 16 possible gains, which are selected with the help of a four-bit code. It has a non-inverting input with a pad for external analog signal input.
To change the alternating voltage (~2.5 V) to a unipolar one (0 to +5 V), the "virtual ground" pad of the OA is connected to +2.5 V together with the signal source "ground" (whose "ground" voltage must be 2.5 V). The design of the interface was simulated with the Verilog-XL program, which simulates only digital signals; for that reason the analog signals were described as 8-bit digital vectors during simulation. Verilog HDL models of the analog elements are used for this simulation; the HDL models are changed into layout models for the creation of the layout. The A/D converter of the ES2 library is divided into two parts: the analog part consists of a D/A converter and a comparator, while the control logic and registers are in the digital part. That is why only the analog part of the converters is changed when the layout is being created.

4. Digital Part

The Control Block of the interface was designed by changing the discrete components of the board to the corresponding chip components of the ES2 library. Some changes were made because the analog elements of the ES2 library are controlled differently from those of the prototype board. The timer was described in VHDL for its design. Three models were created: two for clock-frequency division with 16-bit and 8-bit dividing coefficients, and one for the one-shot mode. The length of the control word is 8 bits. Standard packages of the IEEE library were used for the description of the models, which made operations on vector data easier. The VHDL models were simulated with the Synopsys VHDL Debugger. Functionally correct VHDL models of the timer counters were synthesised using elements of the ES2 library, with optimisation performed during synthesis. Because the circuit's signal delay (a few nanoseconds) is small compared with the clock period (about 1.2 μs), optimisation was worthwhile only for area. The set_max_area command was used for this goal. The area report summary of the 16-bit timer counter synthesis is shown in Table 1. It is clear that the number of counter elements becomes smaller by approximately 13%, but their area becomes smaller by only 1.5%.
The reason is that the number of elements was diminished by diminishing the combinational logic, and a combinational-logic element occupies much less area than a non-combinational one. Besides, some elements are often replaced by one with the same function but not much smaller area; for example, two OR elements and one AND element are changed to one OR-AND element.

While synthesising the binary counter, whose purpose is dividing the external clock signal for the converters and the timer, commands were used which put buffers on the output signal wires. This is done because the clock signal is delivered to many flip-flops (in the timer). The primary synthesised circuit, and the circuit with additional buffers and a diminished number of elements, are shown in Fig. 4. The EDIF of the synthesised functional elements was transported to Cadence and there connected with the control logic and the analog elements.

Table 1. Summary of the Counter's Area Optimisation

5. Interface Testing

The complete operation of the interface was simulated with Verilog-XL for verification. Test programs were written in STL: control words were fed to the OAs, the Channel Control Block and the timer, and data scanning was exercised. The single-chip interface works correctly and has the technical data shown in Table 2.

6. Creation of Layout

The analog elements used in the layout were changed from Verilog HDL models to physical ones. They are put on the periphery of the chip because they have pads which are connected to the pins of the IC package. The pads of the digital signals are placed separately from the analog elements, because the analog elements have two power supply rails while the digital pads have four. The corner elements which supply power for the periphery pads have four rails too. Therefore the analog elements are separated from the corner elements by special elements, which also deliver the analog power supply to the ADCs and OAs. A designer-guided automatic method was used for the creation of the layout, with the automatic standard-logic placement and routing tools of Cadence.
To reduce the influence of noise, the region for standard logic was created as far as possible from the analog elements. The analog elements are connected among themselves outside the IC. If the parameters of the on-chip OA are not sufficient, it is possible to use an externally placed OA. The layout of the chip is shown in Fig. 5. The chip has much empty area because its area is limited by the periphery pads. The total area required is 21.5 mm² (4.7×4.6 mm), with an active area of 1.6 mm² (1.39×1.17 mm).

7. Technical Data

8. Conclusions

In this paper I have presented a single-chip analog data acquisition interface. The complex functional blocks were described in VHDL. With the help of Synopsys, a fully functional unit was synthesised. The units had excess area, so optimisation for small area was performed. After being transported to Cadence, the synthesised units worked according to the specified function. All circuits of the interface, including the models of the analog elements, were verified with Verilog-XL. The chip layout was created based on the 1.0 µm CMOS process from ATMEL-ES2. My diploma thesis was based on this project.

9. Acknowledgements

Thanks to prof. R. Šeinauskas for his guidance, to dipl. eng. A. Mačiulis for giving me the basic circuit of the prototype board, and to assoc. prof. R. Benisevičiūtė for their valuable suggestions.

10. References

[1] Data Acquisition Boards Catalogue. Keithley MetraByte, 1996-1997, vol. 28.
[2] Zainalabedin Navabi. Beginning VHDL: An Introduction to Language Concepts. Boston, Massachusetts, 1994.
[3] User Guide for the ES2 0.7µm/1.0µm CMOS Library Design Kit on CADENCE DFWII Software (Design Kit/User Guide Version: 4.1e1), July 1996.

Single-Chip Data Acquisition Interface

Abstract

This paper presents a single-chip data acquisition interface.
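The programmable clock divider described in the Digital Part section above can be modelled behaviourally. The following Python sketch is illustrative only, not the article's VHDL: the class name, the down-counter reload scheme, and the toggling output are assumptions about how such a divide-by-N timer typically works.

```python
class ClockDivider:
    """Behavioural model of a programmable divide-by-N timer counter.

    An n-bit down-counter is reloaded from a coefficient register;
    the output toggles on each reload, so the output period is
    2 * coefficient input-clock cycles.
    """
    def __init__(self, coefficient, width=16):
        # The coefficient must fit the counter width (16 or 8 bits in the article).
        assert 0 < coefficient < 2 ** width, "coefficient must fit the counter width"
        self.coefficient = coefficient
        self.count = coefficient
        self.out = 0

    def tick(self):
        """Advance one input-clock cycle; return the divider output."""
        self.count -= 1
        if self.count == 0:
            self.count = self.coefficient   # reload from the control register
            self.out ^= 1                   # toggle output: divide-by-2N frequency
        return self.out

# Divide the input clock by 8 (coefficient 4; the toggle doubles the period):
div = ClockDivider(coefficient=4)
edges = [div.tick() for _ in range(16)]
```

Running the model for 16 input ticks shows the output completing two full periods, i.e. an eight-fold frequency division.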

Foreign Literature Translation: Data Acquisition: An Introduction


Appendix A: Original English Text

Data Acquisition: An Introduction
Bruxton Corporation

This is an informal introduction to digital data acquisition hardware. It is primarily directed towards assisting in the selection of appropriate hardware for recording with the Acquire program.

Overview

In principle, data acquisition hardware is quite simple. An A/D converter delivers a sequence of values representing an analog signal to an acquisition program. In practice, selecting and properly using data acquisition hardware is more complex. This document provides an informal introduction to the topic. Many of the examples are taken from patch-clamp recording. This technique requires accurate acquisition of low-level signals (picoamperes) with bandwidth in the audio range (up to 10kHz).

Background

A data acquisition system converts a signal derived from a sensor into a sequence of digital values. The sensor is connected to an amplifier, which converts the signal into a potential. The amplifier is in turn connected to a digitizer, which contains an A/D converter. The digitizer produces a sequence of values representing the signal.

Signal Source

The source of most signals to be digitized is a sensor, connected to an amplifier with appropriate signal conditioning. The amplifier delivers an electrical signal. This signal is then digitized using an A/D converter. For patch-clamp recording, the sensors are solution-filled pipettes. The pipette is connected to a patch-clamp amplifier that converts the voltage at the pipette, or the current through the pipette, to a high-level signal. By convention, the full-scale output range of a patch-clamp amplifier is ±10V, matching the range of common instrumentation-quality digitizers.

Digitizer

A digitizer converts one or more channels of analog signal to a sequence of corresponding digital values.
The heart of a digitizer is an A/D converter, a device that samples an analog signal and converts the sample to a digital value.

Contents
Background
From Sensors to Signals
From Signals to Samples
From Samples to Computer
Measurement Accuracy

(Figure: sensor, amplifier and digitizer in series, the digitizer producing a stream of sample values such as +3.250, +3.100, +2.500, +1.745, +0.985.)

For example, for recording from a single ion channel, the digitizer might determine the output of the patch-clamp amplifier once every 50µs and provide the resulting value to the computer.

Sampling Theorem

The purpose of data acquisition is to analyze an analog signal in digital form. For this to be possible, the sequence of values produced by a digitizer must represent the original analog signal. The sampling theorem states that this is the case: an analog signal can be reconstructed from a sequence of samples taken at a uniform interval, as long as the sampling frequency is no less than double the signal bandwidth. For example, assume a signal contains frequencies from DC (0Hz) to 10kHz. This signal must be sampled at a rate of at least 20kHz to be reconstructed properly. As a practical matter, the sampling rate should be several times the minimum sampling rate for the highest frequency of interest. For example, to resolve a 10kHz signal, a minimum sampling rate of 20kHz is required, but a sampling rate of 50kHz or more should be used in practice.

Control

Most of this discussion is about digitizing analog signals for a computer. In many cases, a computer also produces analog control signals. For example, in patch-clamp experiments involving voltage-gated ion channels, the computer is frequently used to produce an electrical stimulus to activate the channels. These control signals are produced using a D/A (digital to analog) converter.

From Sensors to Signals

Many signal sources consist of a sensor and an amplifier.
The amplifier converts the output of the sensor into the signal to be digitized.

Preamplifier

Many instrumentation systems are built with a preamplifier located as close to the sensor as possible. A separate amplifier converts the preamplifier output to a high-level signal. Placing the preamplifier close to the sensor reduces noise, by allowing the signal to be amplified before being sent over a cable. Since physical space near the sensor is limited, the preamplifier is as small as possible, with the bulk of the electronics being located in the amplifier. For example, in a patch-clamp setup, the sensor is a solution-filled pipette, the preamplifier is the head stage, and the amplifier is the patch-clamp amplifier itself.

Signal Conditioning

Many sensors deliver signals that must be transformed before they can be digitized. For example, a microelectrode pipette may be used to measure current, while the digitizer measures potential (voltage). The patch-clamp amplifier provides a current-to-voltage amplification, usually measured in mV of output per pA of input. This transformation of the sensor signal is called signal conditioning. Signal conditioning may be more complex. An input signal from a non-linear sensor may be converted to a voltage that is linear in the quantity being measured, compensation may be made for second-order effects such as temperature, or an indirect effect such as a frequency shift may be converted to a voltage.

Integrated Digitizer

As the cost of A/D converters declines, the digitizing function can be moved into the amplifier. For example, the HEKA elektronik EPC-9 patch-clamp amplifier contains a built-in digitizing unit (an Instrutech ITC-16). Integrating a digitizer into an amplifier can substantially reduce total noise in the digitized signal, since the analog signal is not carried over a cable from the amplifier to an external digitizer.
Be careful with instrument specifications when comparing an analog amplifier to one with a built-in digitizer. Including the digital electronics in the amplifier housing may increase noise, and the digitizer itself may add noise to the signal. However, the total noise in the digitized signal may be much less than if an external digitizer is used. Compare an amplifier with an integrated digitizer to the combination of an analog amplifier and an external digitizer. A major advantage of integrating a digitizer into an amplifier is that the amplifier designer can easily include features for computer control. A data acquisition program connected to such an amplifier can then offer an integrated user interface, simplifying operation. In addition, the acquisition program can record all amplifier settings, simplifying data analysis.

From Signals to Samples

A digitizer consists of an A/D (analog to digital) converter that samples an analog input signal and converts it to a sequence of digital values.

Aliasing

The sampling theorem states that, in order to be able to reconstruct a signal, the sampling rate must be at least twice the signal bandwidth. What happens if a signal contains components at a frequency higher than half the sampling frequency? The frequency components above half the sampling rate appear at a lower frequency in the sampled data. The apparent frequency of a sampled signal is the actual frequency modulo half of the sampling rate. For example, if a 26kHz signal is sampled at 50kHz, it appears to be a 1kHz signal in the sampled data. This effect is called aliasing.

Anti-Aliasing Filter

If a signal to be digitized has components at frequencies greater than half the sampling frequency, an anti-aliasing filter is required to reduce the signal bandwidth. The anti-aliasing filter must cut off signal components above one half the sampling rate. Most signal sources are inherently band-limited, so in practice, anti-aliasing filters are often not required.
However, some signal sources produce broadband noise that must be removed by an anti-aliasing filter. For example, patch-clamp amplifiers have built-in anti-aliasing filters. The pipette used for patch-clamp recording inherently filters signals above a low frequency in the range of 1kHz. The good high-frequency response of a patch-clamp amplifier is achieved only by boosting the high-frequency component of the signal to compensate for the frequency response of the pipette. This can produce significant high-frequency noise. A patch-clamp amplifier provides a filter to eliminate this noise.

Integrating Converters

The discussion of aliasing assumes instantaneous sampling. The output value produced by the A/D represents the instantaneous analog signal amplitude. Such sampling A/D converters are the most common for use in instrumentation. Some A/D converters employ an integrating conversion technique. The output value produced by such a digitizer represents the integral of the analog signal amplitude over the sampling interval. Such converters eliminate aliasing; they can be viewed as containing a built-in anti-aliasing filter. Integrating converters are rarely used in high-speed control applications. The most common techniques for implementing high-speed integrating converters result in a delay of many sample intervals between an analog sample and the corresponding digitizer output value. This delay can introduce considerable phase shift at high frequencies in the closed-loop response if the digitizer is used in a control system.

Resolution

Typically a digitizer provides the computer with fixed-length binary numbers. For example, the Axon Instruments Digidata 1200A produces 12-bit numbers, while the Instrutech Corporation ITC-16 produces 16-bit numbers. The length of each value is called the resolution of the device, measured in bits. The resolution can be translated to an absolute input level.
Most digitizers measure swings of up to approximately 10V from zero, for a total range of 20V. A 12-bit value has a resolution of 1 part in 4096, so the resolution of a 12-bit digitizer is 20V divided by 4096, or approximately 5mV. This is expressed by saying that a change of one count (or one least significant bit, or LSB) represents 5mV. Since analog instruments rarely have an accuracy significantly exceeding 0.1%, it might seem that 10- or 11-bit resolution would be sufficient in a digitizer. However, additional bits of resolution are needed because the input signal frequently does not use the entire input range. For example, even if the instrumentation amplifier gain has been adjusted to yield an input signal with a 20V range, small components of the signal with a 2V range might also be of interest. 0.1% resolution of a 2V signal within a 20V range requires at least 13 bits of resolution.

Digitizer Resolution (±10V range):

Resolution   Distinct Values   1 LSB (approximate)
8 bits       256               80mV
10 bits      1024              20mV
12 bits      4096              5mV
14 bits      16384             1.25mV
16 bits      65536             300μV
18 bits      262144            75μV

Accuracy

Several specifications are used to express the accuracy of a digitizer. The absolute accuracy expresses how precisely the digital values produced represent the analog inputs. For example, a digitizer might have an absolute accuracy of 1 part in 4096. This can also be expressed by saying that the digitizer has 12-bit absolute accuracy. The relative accuracy expresses how precisely the digitizer measures the difference between two analog input values. This is frequently of greater interest than the absolute accuracy. The noise specification expresses how much the digitizer output will vary with no change in the analog input. This is frequently expressed as a number of bits. For example, a 16-bit digitizer with two bits of noise will produce effectively the same results as a 14-bit digitizer. The accuracy of a digitizer varies strongly with its maximum sampling rate.
The more accurate the digitizer, the slower it is. Be careful when reading digitizer specifications. In some cases, manufacturers publish the specifications of the A/D converter used in a digitizer as the specifications for the entire digitizer. However, the accuracy of the digitizer may be significantly less. The digitizer may include necessary components such as amplifiers and voltage references that degrade the accuracy. In addition, the A/D specifications apply only under specific conditions described in the converter datasheet. In the digitizer, those conditions may not apply.

From Samples to Computer

Once data has been digitized, it must be transferred to a computer. Usually a digitizer is built as a computer plug-in board, so transfers take place over the computer bus. Digitizers used for high-speed measurement can feed data to the computer at a high and constant rate. For example, a digitizer running on one channel at 100k samples/second will typically produce 200k bytes/second of data continuously. This is a large stream of data. The continuous nature of much data acquisition requires some kind of buffering. For example, if the computer stops for 30ms to write data to disk or to update a display, 6000 bytes of data will accumulate. The data must be stored somewhere, or it will be lost.

Data Transfer: DMA

The Axon Instruments Digidata 1200 uses DMA (direct memory access) to transfer data to the memory of the host computer. DMA transfers proceed regardless of the activity in the host. DMA transfers encounter problems during continuous acquisition. The problem is that the DMA controller used on PC motherboards is only capable of transferring data to a contiguous block of memory. However, Microsoft Windows 95 and Windows NT allocate memory in 4K-byte pages. A data acquisition program might have a large buffer, but the buffer will be scattered across 4K-byte pages in physical memory. The DMA controller can transfer to only one page at a time.
When done with a page, it interrupts the host computer. The device driver for the digitizer must then reload the DMA controller for the next page. Normally these periodic interrupts are not a problem. For example, even at the full 330kHz rate of the Digidata 1200, a 4K page is filled only every 6ms. The interrupt handling in the driver might take 50µs on a fast processor, so less than 1% of the processor's time is taken servicing interrupts. However, a problem occurs under multitasking operating systems such as Microsoft Windows NT, because many other activities can take place simultaneously. If another device driver is performing processing and has locked out interrupts temporarily, the digitizer device driver may have to wait to service the DMA controller.

(Figure: the digitizer transferring data into a buffer scattered across 4K pages of computer memory.)

To deal with this problem, Axon Instruments has increased the buffer memory in the Digidata from 2K samples in the Digidata 1200 to 8K samples in the 1200A and 1200B. This increase allows the unit to buffer data for up to 24ms even at 330kHz, avoiding problems.

Data Transfer: Buffers

The Instrutech Corporation ITC-16 and ITC-18 do not use DMA. Instead, they use a large buffer to hold data until it can be processed by the host computer. The data is then transferred to the host computer by programmed I/O; that is, the device driver performs the transfer. On current computers, programmed I/O is about as efficient as DMA. These computers are generally limited in performance by the memory system. Therefore, even though a DMA transfer occurs without the intervention of the host computer, the transfer ties up the memory, which effectively stalls the processor. The Instrutech digitizers do not provide interrupts to the host computer. Instead, the host computer periodically polls the device to obtain data. This polling is performed periodically by the application program (e.g. HEKA Pulse or Bruxton Corporation Acquire).
Since the polling may be infrequent, the digitizer needs a large buffer. For example, if a program can poll the digitizer only once every 100ms, the digitizer must have a 20000-sample memory to operate at 200kHz. The Instrutech ITC-16 has a 16k-sample FIFO. The Instrutech ITC-18 is available with either a 256k-sample FIFO or a 1M-sample FIFO.

Data Transfer: PCI Bus Mastering

Some PCI bus data acquisition boards can write data directly into the memory of the host computer using bus mastering. Bus-master data transfers do not use the motherboard DMA controller, and therefore can potentially support writing directly to a buffer composed of discontiguous 4K pages. In the future, bus-master designs are likely to become popular. Those familiar with computer system design will notice that PCI bus-master transfers are in fact direct memory access (DMA) transfers. On PC systems, for historical reasons, the term DMA refers to the use of the DMA controller built in to the motherboard.

Data Transfer: Output

The discussion so far has concentrated on data transfer for acquired data. If the digitizer is used for synchronous stimulation or control, the same data transfer problem occurs as for acquiring data. In fact, the total data rate doubles. Consider, for example, a stimulus/response measurement on one channel with a 100kHz sampling rate. Acquired data is received by the computer at 100kHz. Simultaneously, the stimulus waveform must be delivered by the computer to the digitizer at 100kHz. The full data rate is 200kHz. The Axon Instruments and Instrutech digitizers have symmetric handling of inputs and outputs. The output buffers are the same size as the input buffers, and the same data transfer technique is used.

Measurement Accuracy

The following sections discuss the issues that influence the accuracy of dynamic measurements.

Crosstalk

Most digitizers record from multiple analog input channels, with 8 or 16 input channels being commonly supported.
An important specification is the crosstalk between input channels, that is, the amount of input signal from one channel that appears on another channel. Crosstalk is a problem because many digitizers use a single analog-to-digital converter, and a switch called a multiplexer to select between input channels. (Figure: channels A through D feeding a multiplexer, which feeds a single A/D converter.) The multiplexer itself is a source of crosstalk. Even when a switch is open, capacitive coupling between the input of the switch and the output of the multiplexer produces a frequency-dependent crosstalk. High-frequency input signals are coupled to the multiplexer output even when they are not selected. To measure such crosstalk, ground an analog input and sample from it. Meanwhile, connect a high-frequency signal to other input channels. Note the amplitude of the high-frequency signal that appears on the grounded input; this is the crosstalk. Vary the input frequency and note the change in the amount of crosstalk. Crosstalk may not be significant when a digitizer is used for patch-clamp data acquisition. Typically one analog input is used for the ion channel signal, while other analog inputs are used to measure very low-frequency signals. The low-frequency signals do not couple significantly to the ion channel signal. The ion channel signal does couple into the low-frequency channels, but this can generally be eliminated by averaging many input samples on those channels. If you measure on several channels containing high-frequency data, characterize the crosstalk of your data acquisition system before you do so. Otherwise you may find yourself measuring correlations in input data due to your digitizer instead of the system being measured. This problem will become less significant with time, as the cost of A/D converters drops.
Digitizer manufacturers can then afford to place one A/D converter on each input channel, avoiding the use of a multiplexer.

Settling Time

The settling time of the A/D converter input may limit the rate of multi-channel sampling. The input amplifiers on many A/D converters cannot follow very high-frequency input signals. When the multiplexer switches channels, this appears as a sudden jump in signal level at the input of the A/D converter. At low sampling rates, the A/D input will have considerable time to settle before converting the next sample. At high sampling rates, the input may not have time to settle, and the input signal on one channel affects the value measured on the next. To see this effect, ground all inputs of a digitizer except one. Connect this input to a variable DC level. Sample at a high rate on multiple channels. Notice whether changing the input level on one channel causes the value measured on one of the grounded channels to change. Frequently, digitizers achieve full bandwidth only when the multiplexer is not being used, and the digitizer is sampling from only a single input channel. The Axon Instruments Digidata 1200A/B and the Instrutech Corporation ITC-16 both use a single A/D converter and a multiplexer. The Instrutech Corporation ITC-18 uses a separate A/D converter per input channel. While this raises the cost of the device, it essentially eliminates crosstalk.

Grounding

The digitizer is electrically part of your instrumentation system. This can cause problems if you do not consider the digitizer when planning the grounding of your instrumentation. If your digitizer is used only for acquisition, you can take advantage of differential analog inputs to avoid connecting your digitizer directly to your measurement ground through signal cables.
However, if you use the analog outputs of your digitizer this may not be possible, since analog outputs are rarely differential. Analog outputs are particularly a problem if the digitizer ground is the same as the computer ground. Computer ground lines usually transmit high-frequency switching noise. The noise can be coupled through the common ground into your measurement system. This is a common failing of low-cost digitizer boards. The Instrutech ITC-16 and ITC-18 use optical isolation in the digital control path of the digitizer. This completely isolates the measurement system from the computer ground.

Input Impedance

The FET-based input amplifiers used in modern digitizers have a very high input impedance. If inputs are left unconnected, they can pick up unwanted signals and couple them into the digitizer. The Axon Instruments Digidata 1200A/B and the Instrutech ITC-16 have very high-impedance analog inputs. For best results, unused inputs on these devices should be grounded. The Instrutech ITC-18 has bleed resistors connected internally between the analog inputs and ground to reduce pickup of stray signals. Grounding of unused analog inputs is less critical with this device.

Phase

If you are sampling from multiple input channels, you may be interested in the phase relationship between the inputs. Digitizers that use a single multiplexed A/D converter inherently have a delay between measurements on different input channels. For example, if two channels are being sampled, each at interval T, most multiplexer-based digitizers will sample successive channels at interval T/2. Sample number N on channel A and sample number N on channel B will be separated in time by T/2. For most applications, this delay is not of concern. However, in some cases the phase relationship between signals is of interest. To limit the phase shift between channels, you can sample at a very high rate.
If you can sample quickly enough, you can minimize the delay between samples. An alternative solution is to sample from successive channels at high speed in a burst. Some digitizers provide sophisticated internal timers that allow you to sample a group of channels quickly, then delay until the next sample. For example, suppose your sampling rate is 1kHz on four channels. With most digitizers, you would sample at an interval of 250µs. However, if your digitizer has the capability, you could sample the four channels at an interval of only 10µs, then wait until a full 1000µs interval has elapsed before the next sample. You can also correct for the error in software: you may be able to adjust your calculations for the delay. For example, the HEKA Pulse program is aware of some of the delays in the Instrutech ITC-16, and adjusts for them. The best solution is to use a digitizer without a multiplexer. Some digitizers, such as the Instrutech ITC-18 and the Markenrich CL522, provide an A/D converter for each input channel. This allows all channels to be sampled simultaneously, with no delay. Using multiple A/D converters is by far the best solution, but it is also the most expensive.

Synchronization

Digitizers may provide analog outputs used for stimulation and control. The analog outputs are updated at the same rate the analog inputs are sampled, and have sufficient buffering to allow continuous stimulation while recording. When using a digitizer to measure the response of a system to a stimulus, be aware of the time relationship between stimulation and sampling. Two effects must be considered: the pipeline and the device timing. Digitizers generally have pipelines of input and output samples. For example, the A/D converter usually delivers a digitized data value while it converts the next value. Data values may be temporarily buffered in internal registers while being transferred.
This usually leads to a delay of three to five samples in the pipeline. To see the effect of this pipeline, suppose that a stimulus value appears on one of the digitizer outputs and, simultaneously, an analog input is sampled. Even if the system being measured has no delay, several sample times will pass before the analog input value resulting from the stimulus passes through the pipeline. When measuring the response of a system to a stimulus, this delay must be taken into account. Depending on the digitizer design, this delay may be a function of the number of channels being sampled or stimulated. Analog input sampling and analog output update may not be simultaneous. The designer of a digitizer usually tries to minimize analog input measurement noise. When analog outputs are updated, the transition may cause electrical disturbances that appear as noise on the analog inputs. Capacitive coupling from the outputs to the inputs can appear as noise on the inputs. Noise can also be a result of coupling through the power supply or ground. A simple technique to minimize this noise is to choose the phase relationship of sampling and update so that as much time as possible passes following an update before the next sample. For example, if the sampling interval is T, the analog inputs might be sampled at time 0 and the analog outputs might be updated at time T/2. If you are interested in measuring the response of a system to a stimulus precisely, you will have to obtain information from the vendor regarding the synchronization of stimulation and response.

Appendix B: Chinese Translation of "Data Acquisition: An Introduction"
Bruxton Corporation. This is an informal introduction to digital data acquisition hardware.
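The pipeline delay discussed under Synchronization above can be compensated in analysis software. The following is a minimal Python sketch, assuming a fixed, known delay; the helper name and the example delay of three samples are hypothetical, and a real delay must be taken from the digitizer vendor's documentation or measured.

```python
def align_response(stimulus, response, pipeline_delay):
    """Align a recorded response with its stimulus by discarding the first
    `pipeline_delay` response samples, which were still in the digitizer
    pipeline when the stimulus began, then trimming the stimulus to match.
    """
    if pipeline_delay < 0:
        raise ValueError("pipeline delay must be non-negative")
    aligned = response[pipeline_delay:]
    return aligned, stimulus[:len(aligned)]

# A 3-sample pipeline: the first three response values predate the stimulus.
stim = [0, 0, 1, 1, 1, 0, 0, 0]
resp = [9, 9, 9, 0, 0, 1, 1, 1]   # response echoes the stimulus, delayed by 3
aligned_resp, trimmed_stim = align_response(stim, resp, pipeline_delay=3)
```

After alignment the response samples line up with the stimulus values that caused them, so a zero-delay system yields identical sequences.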


Chinese-English Foreign Literature Translation (the document contains the English original and a Chinese translation)

Data Acquisition Systems

Data acquisition systems are used to acquire process operating data and store it on secondary storage devices for later analysis. Many of the data acquisition systems acquire this data at very high speeds, and very little computer time is left to carry out any necessary, or desirable, data manipulation or reduction. All the data are stored on secondary storage devices and manipulated subsequently to derive the variables of interest. It is very often necessary to design special-purpose data acquisition systems and interfaces to acquire the high-speed process data. This special-purpose design can be an expensive proposition. Powerful mini- and mainframe computers are used to combine the data acquisition with other functions, such as comparisons between the actual output and the desirable output values, and to then decide on the control action which must be taken to ensure that the output variables lie within preset limits. The computing power required will depend upon the type of process control system implemented. Software requirements for carrying out proportional, ratio or three-term control of process variables are relatively trivial, and microcomputers can be used to implement such process control systems. It would not be possible to use many of the currently available microcomputers for the implementation of high-speed adaptive control systems, which require the use of suitable process models and considerable online manipulation of data. Microcomputer-based data loggers are used to carry out intermediate functions such as data acquisition at comparatively low speeds, simple mathematical manipulation of raw data and some forms of data reduction. The first generation of data loggers, without any programmable computing facilities, was used simply for slow-speed data acquisition from up to one hundred channels. All the acquired data could be punched out on paper tape or printed for subsequent analysis.
Such hardwired data loggers are being replaced by the new generation of data loggers which incorporate microcomputers and can be programmed by the user. They offer an extremely good method of collecting the process data, using standardized interfaces, and subsequently performing the necessary manipulations to provide the information of interest to the process operator. The data acquired can be analyzed to establish correlations, if any, between process variables and to develop mathematical models necessary for adaptive and optimal process control. The data acquisition function carried out by data loggers varies from one system to another. Simple data logging systems acquire data from a few channels, while complex systems can receive data from hundreds, or even thousands, of input channels distributed around one or more processes. The rudimentary data loggers scan the selected number of channels, connected to sensors or transducers, in a sequential manner, and the data are recorded in a digital format. A data logger can be dedicated in the sense that it can only collect data from particular types of sensors and transducers. It is best to use a non-dedicated data logger, since any transducer or sensor can then be connected to the channels via suitable interface circuitry. This facility requires the use of appropriate signal conditioning modules. Microcomputer-controlled data acquisition facilitates the scanning of a large number of sensors. The scanning rate depends upon the signal dynamics, which means that some channels must be scanned at very high speeds in order to avoid aliasing errors, while there is very little loss of information by scanning other channels at slower speeds. In some data logging applications the faster channels require sampling at speeds of up to 100 times per second, while slow channels can be sampled once every five minutes.
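The mixed-rate scanning just described, with fast channels sampled up to 100 times per second and slow channels once every five minutes, can be sketched as a simple per-channel schedule. This Python model is illustrative only: the channel names and the millisecond polling loop are assumptions, and a real logger would drive scanning from a hardware timer.

```python
def due_channels(t_ms, intervals_ms):
    """Return the channels due for sampling at time t_ms, given each
    channel's sampling interval in milliseconds."""
    return [ch for ch, interval in intervals_ms.items() if t_ms % interval == 0]

# Fast channel sampled 100 times per second (every 10 ms);
# slow channel sampled once every five minutes (300 000 ms).
intervals = {"vibration": 10, "temperature": 300_000}

# Simulate five minutes of logging in 10 ms steps and count the samples taken.
samples = {ch: 0 for ch in intervals}
for t in range(0, 300_001, 10):
    for ch in due_channels(t, intervals):
        samples[ch] += 1
```

Over the simulated five minutes the fast channel is sampled tens of thousands of times while the slow channel is visited only at the start and end of the interval, which is exactly the saving a programmable logger offers over sampling every channel at the fastest rate.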
The conventional hardwired, non-programmable data loggers sample all the channels in a sequential manner, and the sampling frequency of all the channels must be the same. This procedure results in the accumulation of very large amounts of data, some of which is unnecessary, and also slows down the overall effective sampling frequency. Microcomputer-based data loggers can be used to scan some fast channels at a higher frequency than other slow-speed channels.

The vast majority of the user-programmable data loggers can be used to scan up to 1000 analog and 1000 digital input channels. A small number of data loggers, with a higher degree of sophistication, are suitable for acquiring data from up to 15,000 analog and digital channels. The data from digital channels can be in the form of Transistor-Transistor Logic or contact closure signals. Analog data must be converted into digital format before it is recorded and requires the use of suitable analog-to-digital converters (ADC).

The characteristics of the ADC will define the resolution that can be achieved and the rate at which the various channels can be sampled. An increase in the number of bits used in the ADC improves the resolution capability. Successive-approximation ADCs are faster than integrating ADCs. Many microcomputer-controlled data loggers include a facility to program the channel scanning rates. Typical scanning rates vary from 2 channels per second to 10,000 channels per second.

Most data loggers have a resolution capability of ±0.01% or better. It is also possible to achieve a resolution of 1 microvolt. The resolution capability, in absolute terms, also depends upon the range of input signals. Standard input signal ranges are 0-10 volt, 0-50 volt and 0-100 volt. The lowest measurable signal varies from 1 microvolt to 50 microvolt. A higher degree of recording accuracy can be achieved by using modules which accept data in small, selectable ranges.
An alternative is the auto-ranging facility available on some data loggers.

The accuracy with which the data are acquired and logged on the appropriate storage device is extremely important. It is therefore necessary that the data acquisition module should be able to reject common-mode noise and common-mode voltage. Typical common-mode noise rejection capabilities lie in the range 110 dB to 150 dB. A decibel (dB) is a term which defines the ratio of the power levels of two signals. Thus, if the reference and actual signals have power levels of Nr and Na respectively, they will have a ratio of n decibels, where

n = 10 log10(Na / Nr)

Protection against maximum common-mode voltages of 200 to 500 volt is available on typical microcomputer-based data loggers.

The voltage input to an individual data logger channel is measured, scaled and linearised before any further data manipulations or comparisons are carried out.

In many situations it becomes necessary to alter the frequency at which particular channels are sampled, depending upon the values of data signals received from a particular input sensor. Thus a channel might normally be sampled once every 10 minutes. If, however, the sensor signals approach the alarm limit, then it is obviously desirable to sample that channel once every minute or even faster, so that the operators can be informed, thereby avoiding any catastrophes. Microcomputer-controlled intelligent data loggers may be programmed to alter the sampling frequencies depending upon the values of process signals. Other data loggers include self-scanning modules which can initiate sampling.

The conventional hardwired data loggers, without any programming facilities, simply record the instantaneous values of transducer outputs at a regular sampling interval. This raw data often means very little to the typical user.
To be meaningful, this data must be linearised and scaled, using a calibration curve, in order to determine the real value of the variable in appropriate engineering units. Prior to the availability of programmable data loggers, this function was usually carried out in the off-line mode on a mini- or mainframe computer. The raw data values had to be punched out on paper tape, in binary or octal code, to be input subsequently to the computer used for analysis purposes and converted to the engineering units. Paper tape punches are slow-speed mechanical devices which reduce the speed at which channels can be scanned. An alternative was to print out the raw data values, which further reduced the data scanning rate. It was not possible to carry out any limit comparisons or provide any alarm information. Every single value acquired by the data logger had to be recorded even though it might not serve any useful purpose during subsequent analysis; many data values only need recording when they lie outside the pre-set low and high limits.

If the analog data must be transmitted over any distance, differences in ground potential between the signal source and final location can add noise in the interface design. In order to separate common-mode interference from the signal to be recorded or processed, devices designed for this purpose, such as instrumentation amplifiers, may be used. An instrumentation amplifier is characterized by good common-mode-rejection capability, a high input impedance, low drift, adjustable gain, and greater cost than operational amplifiers. They range from monolithic ICs to potted modules, and larger rack-mounted modules with manual scaling and null adjustments. When a very high common-mode voltage is present, or the need for extremely low common-mode leakage current exists (as in many medical-electronics applications), an isolation amplifier is required.
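Returning to the decibel relationship defined earlier, the quoted rejection figures can be turned into power ratios directly. The following is an illustrative calculation, not a figure from any particular data logger:

```python
import math

def power_ratio_db(p_actual, p_reference):
    # n = 10 * log10(Na / Nr), the decibel definition used in the text
    return 10.0 * math.log10(p_actual / p_reference)

def rejection_factor(db):
    # Inverting the definition: n dB of common-mode rejection corresponds
    # to attenuating the interfering signal's power by a factor of 10**(n/10)
    return 10.0 ** (db / 10.0)
```

For example, a power ratio of 1000:1 is 30 dB, and a common-mode rejection of 120 dB (within the 110-150 dB range cited above) means the interfering power is reduced by a factor of 10^12.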
Isolation amplifiers may use optical or transformer isolation.

Analog function circuits are special-purpose circuits that are used for a variety of signal conditioning operations on signals which are in analog form. When their accuracy is adequate, they can relieve the microprocessor of time-consuming software and computations. Among the typical operations performed are multiplication, division, powers, roots, nonlinear functions such as for linearizing transducers, rms measurements, computing vector sums, integration and differentiation, and current-to-voltage or voltage-to-current conversion. Many of these operations are available in purchasable devices such as multiplier/dividers, log/antilog amplifiers, and others.

When data from a number of independent signal sources must be processed by the same microcomputer or communications channel, a multiplexer is used to channel the input signals into the A/D converter. Multiplexers are also used in reverse, as when a converter must distribute analog information to many different channels. The multiplexer is fed by a D/A converter which continually refreshes the output channels with new information.

In many systems, the analog signal varies during the time that the converter takes to digitize an input signal. The changes in this signal level during the conversion process can result in errors, since the conversion period can be completed some time after the conversion command. The final value never represents the data at the instant when the conversion command is transmitted. Sample-hold circuits are used to make an acquisition of the varying analog signal and to hold this signal for the duration of the conversion process.
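The size of this conversion-time error can be bounded with a short calculation. The sketch below assumes a sinusoidal input, whose worst-case slew rate is 2*pi*f*A; the specific numbers in the usage note are illustrative, not from the text:

```python
import math

def max_conversion_error(amplitude, freq_hz, conversion_time_s):
    """Worst-case change of a sinusoidal input during one conversion.

    Without a sample-hold, a signal A*sin(2*pi*f*t) can move by as much
    as its maximum slew rate (2*pi*f*A) multiplied by the conversion
    time, so the converted value is uncertain by up to that amount.
    """
    max_slew = 2.0 * math.pi * freq_hz * amplitude
    return max_slew * conversion_time_s
```

For instance, a 10 V, 1 kHz signal digitized by a converter taking 10 microseconds can move by roughly 0.63 V during the conversion, which is why a sample-hold in front of the converter is needed.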
Sample-hold circuits are common in multichannel distribution systems, where they allow each channel to receive and hold the signal level.

In order to get the data in digital form as rapidly and as accurately as possible, we must use an analog/digital (A/D) converter, which might be a shaft encoder, a small module with digital outputs, or a high-resolution, high-speed panel instrument. These devices, which range from IC chips to rack-mounted instruments, convert analog input data, usually voltage, into an equivalent digital form. The characteristics of A/D converters include absolute and relative accuracy, linearity, monotonicity, resolution, conversion speed, and stability. A choice of input ranges, output codes, and other features is available. The successive-approximation technique is popular for a large number of applications, with the most popular alternatives being the counter-comparator types and dual-ramp approaches. The dual-ramp has been widely used in digital voltmeters.

D/A converters convert a digital format into an equivalent analog representation. The basic converter consists of a circuit of weighted resistance values or ratios, each controlled by a particular level or weight of digital input data, which develops the output voltage or current in accordance with the digital input code. A special class of D/A converter exists which has the capability of handling variable reference sources. These devices are the multiplying DACs. Their output value is the product of the number represented by the digital input code and the analog reference voltage, which may vary from full scale to zero, and in some cases, to negative values.

Component Selection Criteria

In the past decade, data-acquisition hardware has changed radically due to advances in semiconductors, and prices have come down too; what have not changed, however, are the fundamental system problems confronting the designer.
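The successive-approximation technique mentioned above can be sketched as a bit-by-bit binary search. This is a behavioural model of the algorithm only, not a circuit-accurate simulation:

```python
def sar_adc(v_in, v_ref, n_bits):
    """Successive-approximation conversion, one trial bit at a time.

    Starting from the MSB, each bit is tentatively set, the trial code is
    compared (via the converter's internal DAC) against the input, and
    the bit is kept only if the trial does not overshoot.  n_bits
    comparisons yield an n_bits result, which is why SAR converters are
    faster than integrating types at the same resolution; the LSB size
    v_ref / 2**n_bits is the resolution discussed earlier.
    """
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        if trial * v_ref / (1 << n_bits) <= v_in:  # internal DAC comparison
            code = trial
    return code
```

With an 8-bit converter and a 10 V reference, a 5 V input converts to code 128 (half scale), and the resolution is 10/256, about 39 mV per count.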
Signals may be obscured by noise, RFI, ground loops, power-line pickup, and transients coupled into signal lines from machinery. Separating the signals from these effects becomes a matter for concern.

Data-acquisition systems may be separated into two basic categories: (1) those suited to favorable environments like laboratories, and (2) those required for hostile environments such as factories, vehicles, and military installations. The latter group includes industrial process control systems where temperature information may be gathered by sensors on tanks, boilers, vats, or pipelines that may be spread over miles of facilities. That data may then be sent to a central processor to provide real-time process control. The digital control of steel mills, automated chemical production, and machine tools is carried out in this kind of hostile environment. The vulnerability of the data signals leads to the requirement for isolation and other techniques.

At the other end of the spectrum, in laboratory applications such as test systems for gathering information on gas chromatographs, mass spectrometers, and other sophisticated instruments, the designer's problems are concerned with the performing of sensitive measurements under favorable conditions rather than with the problem of protecting the integrity of collected data under hostile conditions.

Systems in hostile environments might require components rated for wide temperature ranges, shielding, common-mode noise reduction, conversion at an early stage, redundant circuits for critical measurements, and preprocessing of the digital data to test its reliability. Laboratory systems, on the other hand, will have narrower temperature ranges and less ambient noise. But the higher accuracies require sensitive devices, and a major effort may be necessary for the required signal/noise ratios.

The choice of configuration and components in data-acquisition design depends on consideration of a number of factors:

1. Resolution and accuracy required in final format.
2. Number of analog sensors to be monitored.
3. Sampling rate desired.
4. Signal-conditioning requirement due to environment and accuracy.
5. Cost trade-offs.

Some of the choices for a basic data-acquisition configuration include:

1. Single-channel techniques.
A. Direct conversion.
B. Preamplification and direct conversion.
C. Sample-hold and conversion.
D. Preamplification, sample-hold, and conversion.
E. Preamplification, signal-conditioning, and direct conversion.
F. Preamplification, signal-conditioning, sample-hold, and conversion.

2. Multichannel techniques.
A. Multiplexing the outputs of single-channel converters.
B. Multiplexing the outputs of sample-holds.
C. Multiplexing the inputs of sample-holds.
D. Multiplexing low-level data.
E. More than one tier of multiplexers.

Signal-conditioning may include:
1. Ratiometric conversion techniques.
2. Range biasing.
3. Logarithmic compression.
4. Analog filtering.
5. Integrating converters.
6. Digital data processing.

We shall consider these techniques later, but first we will examine some of the components used in these data-acquisition system configurations.

Multiplexers

When more than one channel requires analog-to-digital conversion, it is necessary to use time-division multiplexing in order to connect the analog inputs to a single converter, or to provide a converter for each input and then combine the converter outputs by digital multiplexing.

Analog Multiplexers

Analog multiplexer circuits allow the time-sharing of analog-to-digital converters between a number of analog information channels. An analog multiplexer consists of a group of switches arranged with inputs connected to the individual analog channels and outputs connected in common (as shown in Fig. 1). The switches may be addressed by a digital input code.

Many alternative analog switches are available in electromechanical and solid-state forms.
Electromechanical switch types include relays, stepper switches, crossbar switches, mercury-wetted switches, and dry-reed relay switches. The best switching speed is provided by reed relays (about 1 ms). The mechanical switches provide high dc isolation resistance, low contact resistance, and the capacity to handle voltages up to 1 kV, and they are usually inexpensive. Multiplexers using mechanical switches are suited to low-speed applications as well as those having high resolution requirements. They interface well with the slower A/D converters, like the integrating dual-slope types. Mechanical switches have a finite life, however, usually expressed in number of operations. A reed relay might have a life of 10^9 operations, which would allow a 3-year life at 10 operations/second.

Solid-state switch devices are capable of operation at 30 ns, and they have a life which exceeds most equipment requirements. Field-effect transistors (FETs) are used in most multiplexers. They have superseded bipolar transistors, which can introduce large voltage offsets when used as switches. FET devices have a leakage from drain to source in the off state, and a leakage from gate or substrate to drain and source in both the on and off states. Gate leakage in MOS devices is small compared to other sources of leakage. When the device has a Zener-diode-protected gate, an additional leakage path exists between the gate and source.

Enhancement-mode MOSFETs have the advantage that the switch turns off when power is removed from the MUX. Junction-FET multiplexers always turn on with the power off. A more recent development, the CMOS (complementary MOS) switch, has the advantage of being able to multiplex voltages up to and including the supply voltages. A ±10-V signal can be handled with a ±10-V supply.

Trade-off Considerations for the Designer

Analog multiplexing has been the favored technique for achieving lowest system cost.
The decreasing cost of A/D converters and the availability of low-cost digital integrated circuits specifically designed for multiplexing provide an alternative with advantages for some applications. A decision on the technique to use for a given system will hinge on trade-offs between the following factors:

1. Resolution. The cost of A/D converters rises steeply as the resolution increases, due to the cost of precision elements. At the 8-bit level, the per-channel cost of an analog multiplexer may be a considerable proportion of the cost of a converter. At resolutions above 12 bits, the reverse is true, and analog multiplexing tends to be more economical.

2. Number of channels. This controls the size of the multiplexer required and the amount of wiring and interconnections. Digital multiplexing onto a common data bus reduces wiring to a minimum in many cases. Analog multiplexing is suited for 8 to 256 channels; beyond this number, the technique is unwieldy and analog errors become difficult to minimize. Analog and digital multiplexing are often combined in very large systems.

3. Speed of measurement, or throughput. High-speed A/D converters can add a considerable cost to the system. If analog multiplexing demands a high-speed converter to achieve the desired sample rate, a slower converter for each channel with digital multiplexing can be less costly.

4. Signal level and conditioning. Wide dynamic ranges between channels can be difficult with analog multiplexing. Signals less than 1 V generally require differential low-level analog multiplexing, which is expensive, with programmable-gain amplifiers after the MUX operation. The alternative of fixed-gain converters on each channel, with signal-conditioning designed for the channel requirement and digital multiplexing, may be more efficient.

5. Physical location of measurement points.
Analog multiplexing is suited for making measurements at distances up to a few hundred feet from the converter, since analog lines may suffer from losses, transmission-line reflections, and interference. Lines may range from twisted wire pairs to multiconductor shielded cable, depending on signal levels, distance, and noise environments. Digital multiplexing is operable to thousands of miles, with the proper transmission equipment, for digital transmission systems can offer the powerful noise-rejection characteristics that are required for long-distance transmission.

Digital Multiplexing

For systems with small numbers of channels, medium-scale integrated digital multiplexers are available in TTL and MOS logic families. The 74151 is a typical example. Eight of these integrated circuits can be used to multiplex eight A/D converters of 8-bit resolution onto a common data bus.

This digital multiplexing example offers little advantage in wiring economy, but it is lowest in cost, and the high switching speed allows operation at sampling rates much faster than analog multiplexers. The A/D converters are required only to keep up with the channel sample rate, and not with the commutating rate. When large numbers of A/D converters are multiplexed, the data-bus technique reduces system interconnections. This alone may in many cases justify multiple A/D converters. Data can be bussed onto the lines in bit-parallel or bit-serial format, as many converters have both serial and parallel outputs. A variety of devices can be used to drive the bus, from open-collector and tristate TTL gates to line drivers and optoelectronic isolators. Channel-selection decoders can be built from 1-of-16 decoders to the required size. This technique also allows additional reliability, in that a failure of one A/D does not affect the other channels. An important requirement is that the multiplexer operate without introducing unacceptable errors at the sample-rate speed.
For a digital MUX system, one can determine the speed from propagation delays and the time required to charge the bus capacitance.

Analog multiplexers can be more difficult to characterize. Their speed is a function not only of internal parameters but also of external parameters such as channel source impedance, stray capacitance, the number of channels, and the circuit layout. The user must be aware of the limiting parameters in the system to judge their effect on performance.

The nonideal transmission and open-circuit characteristics of analog multiplexers can introduce static and dynamic errors into the signal path. These errors include leakage through switches, coupling of control signals into the analog path, and interactions with sources and following amplifiers. Moreover, the circuit layout can compound these effects.

Since analog multiplexers may be connected directly to sources which may have little overload capacity or poor settling after overloads, the switches should have a break-before-make action to prevent the possibility of shorting channels together. It may be necessary to avoid shorted channels when power is removed, and a channels-off with power-down characteristic is desirable. In addition to the channel-addressing lines, which are normally binary-coded, it is useful to have inhibit or enable lines to turn all switches off regardless of the channel being addressed. This simplifies the external logic necessary to cascade multiplexers and can also be useful in certain modes of channel addressing. Another requirement for both analog and digital multiplexers is the tolerance of line transients and overload conditions, and the ability to absorb the transient energy and recover without damage.
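The binary-coded channel addressing and the enable/inhibit facility described above can be sketched behaviourally as follows. This is an illustrative model only, not any device's API; real multiplexers also exhibit on-resistance, leakage, and settling behaviour that this ignores:

```python
class AnalogMux:
    """Behavioural model of an addressed analog multiplexer.

    Channel addressing is binary-coded, and a separate enable line turns
    every switch off regardless of the address, mirroring the
    inhibit/enable lines that simplify cascading multiplexers.
    """
    def __init__(self, n_channels):
        self.n_channels = n_channels
        self.address = 0
        self.enabled = False  # all switches off until enabled

    def select(self, address):
        if not 0 <= address < self.n_channels:
            raise ValueError("address out of range")
        self.address = address

    def output(self, channel_voltages):
        # With enable low, no channel is connected to the output bus.
        if not self.enabled:
            return None
        return channel_voltages[self.address]
```

Selecting channel 3 of an 8-channel MUX routes only that input to the common output, and driving enable low disconnects every channel, which is what allows several such devices to share one bus.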
