Foreign-Literature Translations on Data Acquisition Technology
A Survey of Translated Foreign References on Big Data
A Survey of Translated Foreign References on Big Data (the document contains both the English original and its Chinese translation)

Original text: Data Mining and Data Publishing

Data mining is the extraction of vast interesting patterns or knowledge from huge amounts of data. The initial idea of privacy-preserving data mining (PPDM) was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. Privacy-preserving data mining considers the problem of running data mining algorithms on confidential data that is not supposed to be revealed even to the party running the algorithm. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task may be unknown at the time of data publishing. PPDP studies how to transform raw data into a version that is immunized against privacy attacks but that still supports effective data mining tasks. Privacy preservation for both data mining (PPDM) and data publishing (PPDP) has become increasingly popular because it allows the sharing of privacy-sensitive data for analysis purposes. One well-studied approach is the k-anonymity model [1], which in turn led to other models such as confidence bounding, l-diversity, t-closeness, (α,k)-anonymity, etc. In particular, all known mechanisms try to minimize information loss, and such an attempt provides a loophole for attacks. The aim of this paper is to present a survey of the most common attack techniques against anonymization-based PPDM and PPDP and to explain their effects on data privacy.

Although data mining is potentially useful, many data holders are reluctant to provide their data for data mining for fear of violating individual privacy. In recent years, studies have been made to ensure that the sensitive information of individuals cannot be identified easily. Anonymity models and k-anonymization techniques have been the focus of intense research in the last few years. In order to ensure anonymization of data while at the same time minimizing the information loss resulting from data modifications, several extended models have been proposed, which are discussed as follows.

1. k-Anonymity
k-anonymity is one of the most classic models; it prevents linking attacks by generalizing and/or suppressing portions of the released microdata so that no individual can be uniquely distinguished from a group of size k. A data set is k-anonymous (k ≥ 1) if each record in the data set is indistinguishable from at least (k − 1) other records within the same data set. The larger the value of k, the better the privacy is protected. k-anonymity can ensure that individuals cannot be uniquely identified by linking attacks.

2. Extended Models
Since k-anonymity does not provide sufficient protection against attribute disclosure, the notion of l-diversity attempts to solve this problem by requiring that each equivalence class have at least l well-represented values for each sensitive attribute. l-diversity has some advantages over k-anonymity, because a k-anonymous data set still permits strong attacks when the sensitive attributes lack diversity. In this model, an equivalence class is said to have l-diversity if there are at least l well-represented values for the sensitive attribute. However, there are semantic relationships among the attribute values, and different values have very different levels of sensitivity; the (α,k)-anonymity model therefore additionally requires that, after anonymization, in any equivalence class the frequency (as a fraction) of a sensitive value is no more than α.
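To make the two definitions above concrete, the following is a minimal Python sketch (not taken from the surveyed papers) that checks k-anonymity and l-diversity over a toy table; the column names, the generalized values, and the records are invented for illustration.

```python
from collections import Counter

# Toy records: (age band, ZIP prefix) are the quasi-identifiers,
# "disease" is the sensitive attribute. Purely illustrative data.
records = [
    {"age": "20-30", "zip": "130**", "disease": "flu"},
    {"age": "20-30", "zip": "130**", "disease": "cancer"},
    {"age": "20-30", "zip": "130**", "disease": "flu"},
    {"age": "31-40", "zip": "148**", "disease": "hepatitis"},
    {"age": "31-40", "zip": "148**", "disease": "flu"},
    {"age": "31-40", "zip": "148**", "disease": "cancer"},
]

def is_k_anonymous(rows, quasi_ids, k):
    """True if every equivalence class over the quasi-identifiers has size >= k."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(size >= k for size in groups.values())

def is_l_diverse(rows, quasi_ids, sensitive, l):
    """True if every equivalence class contains at least l distinct sensitive values."""
    classes = {}
    for r in rows:
        classes.setdefault(tuple(r[q] for q in quasi_ids), set()).add(r[sensitive])
    return all(len(values) >= l for values in classes.values())

print(is_k_anonymous(records, ["age", "zip"], k=3))          # True
print(is_l_diverse(records, ["age", "zip"], "disease", l=2))  # True
```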
3. Related Research Areas
Several polls show that the public has an increased sense of privacy loss. Since data mining is often a key component of information systems, homeland security systems, and monitoring and surveillance systems, it gives the wrong impression that data mining is a technique for privacy intrusion. This lack of trust has become an obstacle to the benefits of the technology. For example, the potentially beneficial data mining research project Terrorism Information Awareness (TIA) was terminated by the US Congress due to its controversial procedures for collecting, sharing, and analyzing the trails left by individuals. Motivated by the privacy concerns about data mining tools, a research area called privacy-preserving data mining (PPDM) emerged in 2000. The initial idea of PPDM was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. The solutions were often tightly coupled with the data mining algorithms under consideration. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task is sometimes unknown at the time of data publishing. Furthermore, some PPDP solutions emphasize preserving data truthfulness at the record level, but PPDM solutions often do not preserve that property. PPDP differs from PPDM in several major ways:

1) PPDP focuses on techniques for publishing data, not techniques for data mining. In fact, it is expected that standard data mining techniques will be applied to the published data. In contrast, the data holder in PPDM needs to randomize the data in such a way that data mining results can be recovered from the randomized data. To do so, the data holder must understand the data mining tasks and algorithms involved. This level of involvement is not expected of the data holder in PPDP, who usually is not an expert in data mining.

2) Neither randomization nor encryption preserves the truthfulness of values at the record level; therefore, the released data are basically meaningless to the recipients. In such a case, the data holder in PPDM may consider releasing the data mining results rather than the scrambled data.

3) PPDP primarily "anonymizes" the data by hiding the identity of record owners, whereas PPDM seeks to directly hide the sensitive data. Excellent surveys and books on randomization and cryptographic techniques for PPDM can be found in the existing literature.

A family of research work called privacy-preserving distributed data mining (PPDDM) aims at performing some data mining task on a set of private databases owned by different parties. It follows the principle of Secure Multiparty Computation (SMC) and prohibits any data sharing other than the final data mining result. Clifton et al. present a suite of SMC operations, such as secure sum, secure set union, secure size of set intersection, and scalar product, that are useful for many data mining tasks. In contrast, PPDP does not perform the actual data mining task but is concerned with how to publish the data so that the anonymized data are useful for data mining. We can say that PPDP protects privacy at the data level while PPDDM protects privacy at the process level. They address different privacy models and data mining scenarios.
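The secure-sum operation mentioned above can be illustrated with a toy, single-process sketch. This is only a didactic simplification of the idea, not Clifton et al.'s actual protocol: the "parties" are just list elements, and the scheme assumes honest, non-colluding participants.

```python
import random

def secure_sum(private_values, modulus=10**9):
    """Toy ring-based secure sum: each party adds its value to a running total
    that is masked by a random offset known only to the initiator, so no party
    sees another party's individual value (assumes honest, non-colluding parties)."""
    mask = random.randrange(modulus)
    running = mask
    for v in private_values:          # each "party" adds its private value in turn
        running = (running + v) % modulus
    return (running - mask) % modulus # initiator removes the mask, revealing only the total

parties = [12, 7, 30, 5]              # private inputs held by four parties
print(secure_sum(parties))            # 54
```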
In the field of statistical disclosure control (SDC), the research focuses on privacy-preserving publishing methods for statistical tables. SDC considers three types of disclosure, namely identity disclosure, attribute disclosure, and inferential disclosure. Identity disclosure occurs if an adversary can identify a respondent from the published data. Revealing that an individual is a respondent of a data collection may or may not violate confidentiality requirements. Attribute disclosure occurs when confidential information about a respondent is revealed and can be attributed to the respondent. Attribute disclosure is the primary concern of most statistical agencies in deciding whether to publish tabular data. Inferential disclosure occurs when individual information can be inferred with high confidence from statistical information in the published data.

Some other works in SDC study the non-interactive query model, in which the data recipients can submit one query to the system. This type of non-interactive query model may not fully address the information needs of data recipients because, in some cases, it is very difficult for a data recipient to accurately construct a query for a data mining task in one shot. Consequently, there is a series of studies on the interactive query model, in which the data recipients, including adversaries, can submit a sequence of queries based on previously received query results. The database server is responsible for keeping track of all queries of each user and determining whether the currently received query has violated the privacy requirement with respect to all previous queries. One limitation of any interactive privacy-preserving query system is that it can only answer a sublinear number of queries in total; otherwise, an adversary (or a group of corrupted data recipients) will be able to reconstruct all but a 1 − o(1) fraction of the original data, which is a very strong violation of privacy. When the maximum number of queries is reached, the query service must be closed to avoid privacy leakage. In the case of the non-interactive query model, the adversary can issue only one query and, therefore, the non-interactive query model cannot achieve the same degree of privacy as the interactive model. One may consider privacy-preserving data publishing to be a special case of the non-interactive query model.

This paper presents a survey of the most common attack techniques against anonymization-based PPDM and PPDP and explains their effects on data privacy. k-anonymity is used to protect respondents' identity and reduces the risk of linking attacks; however, in the case of a homogeneity attack a simple k-anonymity model fails, and a concept is needed that prevents this attack: the solution is l-diversity. All tuples are arranged so that each equivalence class is well represented, and an adversary is diverted among l possible sensitive values. l-diversity is limited in the case of a background-knowledge attack, because no one can predict the knowledge level of an adversary. It is also observed that generalization and suppression are applied even to attributes that do not need this extent of privacy, which reduces the precision of the published table.
e-NSTAM (extended Sensitive Tuples Anonymity Method) is applied to sensitive tuples only and reduces information loss, but this method also fails in the case of multiple sensitive tuples. Generalization with suppression is also a cause of data loss, because suppression emphasizes not releasing values that do not fit the k factor. Future work on this front can include defining a new privacy measure, along with l-diversity, for multiple sensitive attributes, and focusing on generalizing attributes without suppression by using other techniques that achieve k-anonymity, because suppression reduces the precision of the published table.

Translated text: Data Mining and Data Publishing. Data mining is the extraction of a large number of interesting patterns or knowledge from huge amounts of data.
Thesis Chinese-English Translation (Translated Text)
No.: Guilin University of Electronic Technology, School of Information Science and Technology, graduation project (thesis) foreign-literature translation (translated text). Department: Electronic Engineering. Major: Electronic Information Engineering. Student: Wei Jun. Student ID: 0852100329. Supervisor's unit: School of Information Science and Technology, Guilin University of Electronic Technology. Supervisor: Liang Yong. Title: Lecturer. June 5, 2012.

Design and Implementation of an Embedded Linux System Based on the Modbus Protocol

Abstract: With the rapid development of embedded computer technology, a new generation of industrial-automation data acquisition and monitoring systems, built around high-performance embedded microprocessors, adapts well to its applications.
It satisfies strict requirements on functionality, such as reliability, cost, size, and power consumption.
In industrial-automation application systems, the Modbus communication protocol is an industrial standard that is widely used in large-scale industrial equipment systems, including DCS, programmable logic controllers, RTUs, and intelligent instruments.
To meet the requirements of industrial-automation application software for embedded data monitoring, this paper designs a Modbus-based data acquisition and monitoring system running under embedded Linux.
The serial-port Modbus protocol is implemented in master/slave mode and includes two transmission modes: ASCII and RTU.
Therefore, devices from different vendors that implement the protocol can communicate over serial Modbus.
The implementation of the Modbus protocol on the embedded platform is stable and reliable.
It has good prospects for new data acquisition and monitoring applications in embedded industrial automation.
Keywords: embedded system, embedded Linux, Modbus protocol, data acquisition, monitoring and control.
1. Introduction
Modbus is a communication protocol originally promoted by Modicon.
It is widely used in industrial automation and has become a de facto industrial standard.
Control devices or measuring instruments from different manufacturers can be linked into an industrial monitoring network using the Modbus protocol.
The Modbus communication protocol can serve as the communication standard for a large range of industrial equipment, including PLCs, DCS systems, RTUs, and smart instruments.
With the rapid development of embedded computer technology, embedded data acquisition and monitoring systems built around high-performance embedded microprocessors are an important development direction.
For embedded industrial-automation data applications in an embedded Linux environment, this paper designs and implements an acquisition and monitoring system based on a Modbus master protocol, as sketched in the example below.
Thus, communication devices that implement the protocol can meet the requirements of serial Modbus.
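As an illustration of the RTU mode mentioned above, here is a small Python sketch that builds a "read holding registers" request and its CRC-16 checksum. The slave address and register range are made up for the example, and this is not the thesis's actual implementation; a real deployment would typically use a maintained library such as pymodbus on top of a configured serial port.

```python
import struct

def modbus_crc16(frame: bytes) -> bytes:
    """CRC-16/Modbus (poly 0xA001, init 0xFFFF), returned low byte first as the wire order requires."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return struct.pack("<H", crc)

def read_holding_registers(slave_id: int, start_addr: int, count: int) -> bytes:
    """Build an RTU request for function code 0x03 (read holding registers)."""
    pdu = struct.pack(">BBHH", slave_id, 0x03, start_addr, count)
    return pdu + modbus_crc16(pdu)

# Example: ask slave 1 for 2 registers starting at address 0x0000.
print(read_holding_registers(1, 0x0000, 2).hex(" "))
```

Sending the resulting bytes over a serial link and validating the CRC of the reply would complete a minimal master-side exchange.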
Sample English Essay Template on Collecting Literature
Collecting Literature: A Necessary Skill for Academic Writing
Introduction
In the world of academia, writing is an essential skill that students and scholars must master. Whether it is for a research paper, a thesis, or a dissertation, the ability to gather and analyze literature is crucial for producing high-quality academic writing. In this article, we will explore the importance of collecting literature and provide a template for effectively organizing and synthesizing the information gathered.

Importance of Collecting Literature
Before delving into the specifics of how to collect literature, it is important to understand why this skill is so vital for academic writing. Literature, in this context, refers to the body of scholarly work and research that has been published on a particular topic. Collecting literature involves finding and gathering relevant articles, books, and other sources of information that will inform and support the arguments and ideas presented in a piece of academic writing. The primary reason for collecting literature is to build a strong foundation of knowledge on a given topic. By reviewing and synthesizing existing research, writers can gain a deeper understanding of the subject matter and identify gaps or areas for further exploration. Additionally, literature collection allows writers to situate their own work within the broader academic discourse and demonstrate a thorough understanding of the existing scholarship.

Template for Collecting Literature
English Literature on Information Technology (about 2,000 words)

Information Systems Outsourcing Life Cycle and Risks Analysis

1. Introduction
Information systems outsourcing has attracted tremendous attention in the information technology industry. Although there are a number of reasons for companies to pursue information systems (IS) outsourcing, the most prominent motivation for IS outsourcing revealed in the literature is cost saving. The cost factor has been a major decision factor for IS outsourcing. Other than the cost factor, there are other reasons for the outsourcing decision. The Outsourcing Institute surveyed outsourcing end-users from its membership in 1998 and found that the top 10 reasons companies outsource were: reduce and control operating costs, improve company focus, gain access to world-class capabilities, free internal resources for other purposes, resources not available internally, accelerate reengineering benefits, function difficult to manage or out of control, make capital funds available, share risks, and cash infusion. Within these top ten outsourcing reasons, three items relate to financial concerns: operating costs, capital funds available, and cash infusion. Since wage differences exist in the outsourced countries, it is obvious that outsourcing companies can save a remarkable amount of labor cost. According to Gartner, Inc.'s report, worldwide business outsourcing services would grow from $110 billion in 2002 to $173 billion in 2007, approximately a 9.5% annual growth rate.

In addition to the cost-saving concern, there are other factors that influence the outsourcing decision, including the awareness of success and risk factors, the identification and management of outsourcing risks, and project quality management. Outsourcing activities are substantially complicated, and an outsourcing project usually carries a huge array of risks. Unmanaged outsourcing risks will increase total project cost, devalue software quality, delay project completion time, and finally lower the success rate of the outsourcing project. Outsourcing risks have been discovered in areas such as unexpected transition and management costs, switching costs, costly contractual amendments, disputes and litigation, service debasement, cost escalation, loss of organizational competence, hidden service costs, and so on. Most published outsourcing studies focus on organizational and managerial issues. We believe that IS outsourcing projects embrace various risks and uncertainties that may inhibit the chance of outsourcing success. In addition to service- and management-related risk issues, we feel that the technical issues that restrain the degree of outsourcing success may have been overlooked. These technical issues are project management, software quality, and the quality assessment methods that can be used to implement IS outsourcing projects. Unmanaged risks generate loss. We intend to identify the technical risks during the outsourcing period, so that these technical risks can be properly managed and the cost of the outsourcing project can be further reduced. The main purpose of this paper is to identify the different phases of the IS outsourcing life cycle, and to discuss the implications of success and risk factors, software quality and project management, and their impacts on the success of IT outsourcing. Most outsourcing initiatives involve strategic planning and management participation; therefore, the decision process is obviously broad and lengthy.
In order to conduct a comprehensive study of outsourcing project risk analysis, we propose an IS outsourcing life cycle framework to serve as a yardstick. Each phase of the IS outsourcing life cycle is used to distinguish the related quality and risk management issues during outsourcing practice. The following sections start with the theoretical foundations needed for IS outsourcing, including economic theories, outsourcing contracting theories, and risk theories. The IS outsourcing life cycle framework is then introduced, followed by a discussion of the risk implications in the pre-contract, contract, and post-contract phases. ISO standards on quality systems and risk management are discussed and compared in the next section. A conclusion and directions for future study are provided in the last section.
Foreign Literature on Web Crawlers

As an important means of data acquisition, crawler technology plays an important role in Internet information mining, data analysis, and related fields.
This article recommends several foreign-language references on web crawlers for study and research; a small scraping sketch follows the list.
1."Web Scraping with Python: Collecting Data from the Modern Web"作者:Ryan Mitchell简介:本书详细介绍了如何使用Python进行网页爬取,从基础概念到实战案例,涵盖了许多常用的爬虫技术和工具。
通过阅读这本书,您可以了解到爬虫的基本原理、反爬虫策略以及如何高效地采集数据。
2."Scraping the Web: Strategies and Techniques for Data Mining"作者:Dmitry Zinoviev简介:本书讨论了多种爬虫策略和技术,包括分布式爬虫、增量式爬虫等。
同时,还介绍了数据挖掘和文本分析的相关内容,为读者提供了一个全面的爬虫技术学习指南。
3."Mining the Social Web: Data Mining Facebook, Twitter, LinkedIn, Instagram, Pinterest, and More"作者:Matthew A.Russell简介:本书主要关注如何从社交媒体平台(如Facebook、Twitter 等)中采集数据。
通过丰富的案例,展示了如何利用爬虫技术挖掘社交媒体中的有价值信息。
4."Crawling the Web: An Introduction to Web Scraping and Data Mining"作者:Michael H.Goldwasser, David Letscher简介:这本书为初学者提供了一个关于爬虫技术和数据挖掘的入门指南。
内容包括:爬虫的基本概念、HTTP协议、正则表达式、数据存储和数据分析等。
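In the spirit of the books listed above, the following is a minimal Python sketch of the request-and-parse pattern they teach, using the requests and BeautifulSoup libraries. The URL, header, and CSS selector are placeholders rather than anything taken from these books, and any real crawl should respect the target site's robots.txt and terms of service.

```python
import requests
from bs4 import BeautifulSoup

def fetch_titles(url: str, selector: str = "h2 a"):
    """Download a page and return the text of elements matched by a CSS selector."""
    response = requests.get(url, headers={"User-Agent": "literature-demo/0.1"}, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [node.get_text(strip=True) for node in soup.select(selector)]

if __name__ == "__main__":
    # Placeholder URL; replace with a page you are allowed to scrape.
    for title in fetch_titles("https://example.com", "h1"):
        print(title)
```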
Translated Foreign Literature on Big Data Mining
Reference information. Title: A Study of Data Mining with Big Data. Authors: V. H. Shastri, V. Sreeprada. Source: International Journal of Emerging Trends and Technology in Computer Science, 2016, 38(2): 99-103. Word count: 2,291 English words, 12,196 characters; 3,868 Chinese characters.

Foreign text: A Study of Data Mining with Big Data

Abstract: Data has become an important part of every economy, industry, organization, business, function and individual. Big Data is a term used to identify large data sets whose size is typically larger than that of a conventional database. Big Data introduces unique computational and statistical challenges and is at present expanding in most domains of engineering and science. Data mining helps to extract useful information from huge data sets characterized by their volume, variability and velocity. This article presents the HACE theorem that characterizes the features of the Big Data revolution, and proposes a Big Data processing model from the data mining perspective.
Keywords: Big Data, Data Mining, HACE theorem, structured and unstructured.

I. Introduction
Big Data refers to enormous amounts of structured and unstructured data that overflow the organization. If this data is properly used, it can lead to meaningful information. Big Data includes a large amount of data which requires a lot of processing in real time. It provides room to discover new values, to understand hidden values in depth, and to manage the data effectively. A database is an organized collection of logically related data which can be easily managed, updated and accessed. Data mining is a process of discovering interesting knowledge such as associations, patterns, changes, anomalies and significant structures from large amounts of data stored in databases or other repositories.

Big Data has three V's as its characteristics: volume, velocity and variety. Volume means the amount of data generated every second; the data is at rest, and volume is also known as the scale characteristic. Velocity is the speed with which the data is generated; the system should handle high-speed data, and data generated from social media is an example. Variety means different types of data can be taken, such as audio, video or documents; the data can be numerals, images, time series, arrays, etc.

Data mining analyses the data from different perspectives and summarizes it into useful information that can be used for business solutions and for predicting future trends. Data mining (DM), also called Knowledge Discovery in Databases (KDD) or Knowledge Discovery and Data Mining, is the process of automatically searching large volumes of data for patterns such as association rules. It applies many computational techniques from statistics, information retrieval, machine learning and pattern recognition. Data mining extracts only the required patterns from the database in a short time span. Based on the type of patterns to be mined, data mining tasks can be classified into summarization, classification, clustering, association and trend analysis.

Big Data is expanding in all domains, including science and engineering fields such as the physical, biological and biomedical sciences.

II. BIG DATA with DATA MINING
Generally, Big Data refers to a collection of large volumes of data generated from various sources such as the Internet, social media, business organizations, sensors, etc. We can extract useful information from this data with the help of data mining.
It is a technique for discovering patterns, as well as descriptive, understandable models, from large-scale data.

Volume is the size of the data, which can reach terabytes and petabytes. The scale and growth in size make it difficult to store and analyse the data using traditional tools, and Big Data should be mined within a predefined period of time. Traditional database systems were designed to address small amounts of structured, consistent data, whereas Big Data includes a wide variety of data such as geospatial data, audio, video, unstructured text and so on.

Big Data mining refers to the activity of going through big data sets to look for relevant information. To process large volumes of data from different sources quickly, Hadoop is used. Hadoop is a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. Its distributed file system supports fast data transfer rates among nodes and allows the system to continue operating uninterrupted when a node fails. It runs MapReduce for distributed data processing and works with structured and unstructured data.

III. BIG DATA characteristics: HACE THEOREM
We have a large volume of heterogeneous data, and there exist complex relationships among the data; we need to discover useful information from this voluminous data. Imagine a scenario in which blind people are asked to describe an elephant: each blind person may take the trunk for a wall, a leg for a tree, the body for a wall and the tail for a rope, and the blind men can exchange information with each other.

Figure 1: Blind men and the giant elephant

Some of the characteristics of Big Data include:
i. Vast data with heterogeneous and diverse sources: one of the fundamental characteristics of Big Data is the large volume of data represented by heterogeneous and diverse dimensions. For example, in the biomedical world a single human being is represented by name, age, gender, family history, etc., while X-ray and CT scan images and videos are also used. Heterogeneity refers to the different types of representations of the same individual, and diversity refers to the variety of features used to represent a single piece of information.
ii. Autonomous with distributed and decentralized control: the sources are autonomous, i.e., automatically generated; they generate information without any centralized control. This can be compared with the World Wide Web (WWW), where each server provides a certain amount of information without depending on other servers.
iii. Complex and evolving relationships: as the size of the data becomes infinitely large, the relationships within it also grow. In the early stages, when the data is small, there is no complexity in the relationships among the data; data generated from social media and other sources has complex relationships.

IV. TOOLS: OPEN SOURCE REVOLUTION
Large companies such as Facebook, Yahoo, Twitter and LinkedIn benefit from and contribute to open source projects. In Big Data mining there are many open source initiatives. The most popular of them are:
Apache Mahout: scalable machine learning and data mining open source software based mainly on Hadoop. It has implementations of a wide range of machine learning and data mining algorithms: clustering, classification, collaborative filtering and frequent pattern mining.
R: an open source programming language and software environment designed for statistical computing and visualization.
R was designed by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, beginning in 1993, and is used for statistical analysis of very large data sets.
MOA: stream data mining open source software to perform data mining in real time. It has implementations of classification, regression, clustering, and frequent item set and frequent graph mining. It started as a project of the Machine Learning group of the University of Waikato, New Zealand, famous for the WEKA software. The streams framework provides an environment for defining and running stream processes using simple XML-based definitions and is able to use MOA, Android and Storm.
SAMOA: a new upcoming software project for distributed stream mining that will combine S4 and Storm with MOA.
Vowpal Wabbit: an open source project started at Yahoo! Research and continuing at Microsoft Research to design a fast, scalable, useful learning algorithm. VW is able to learn from terafeature datasets and can exceed the throughput of any single machine's network interface when doing linear learning via parallel learning.

V. DATA MINING for BIG DATA
Data mining is the process by which data coming from different sources is analysed to discover useful information. Data mining comprises several algorithms which fall into four categories:
1. Association rules
2. Clustering
3. Classification
4. Regression
Association is used to search for relationships between variables; it is applied in searching for frequently visited items and, in short, establishes relationships among objects. Clustering discovers groups and structures in the data. Classification deals with associating an unknown structure with a known structure. Regression finds a function to model the data. The different data mining algorithms are summarized below.

Table 1. Classification of Algorithms

Data mining algorithms can be converted into big MapReduce algorithms on a parallel computing basis (a toy illustration appears after this section).

Table 2. Differences between Data Mining and Big Data

VI. Challenges in BIG DATA
Meeting the challenges of Big Data is difficult. The volume is increasing every day; the velocity is increased by Internet-connected devices; the variety is also expanding, and organizations' capability to capture and process the data is limited. The following are the challenges in the area of Big Data when it is handled:
1. Data capture and storage
2. Data transmission
3. Data curation
4. Data analysis
5. Data visualization
According to the literature, the challenges of Big Data mining are divided into three tiers. The first tier is the setup of data mining algorithms. The second tier includes: 1. Information sharing and data privacy; 2. Domain and application knowledge. The third tier includes: 1. Local learning and model fusion for multiple information sources; 2. Mining from sparse, uncertain and incomplete data; 3. Mining complex and dynamic data.

Figure 2: Phases of Big Data Challenges

Generally, mining data from different data sources is tedious because the size of the data is large. Big Data is stored in different places, so collecting that data is a tedious task, and applying basic data mining algorithms to it is an obstacle. Next we need to consider the privacy of the data. The third issue concerns the mining algorithms themselves.
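As a concrete illustration of converting a counting step (the first pass of association-rule mining mentioned above) into the map/reduce pattern, here is a toy, single-process Python sketch. In a real Hadoop job the map and reduce phases would run on separate nodes with a shuffle in between; the transactions and item names are invented.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Mapper: emit (item, 1) for every item in a transaction (toy frequent-item counting)."""
    return [(item, 1) for item in record]

def reduce_phase(pairs):
    """Reducer: sum counts per key, as a MapReduce job would after the shuffle step."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

transactions = [["milk", "bread"], ["milk", "beer"], ["bread", "milk", "eggs"]]
mapped = chain.from_iterable(map_phase(t) for t in transactions)
print(reduce_phase(mapped))   # {'milk': 3, 'bread': 2, 'beer': 1, 'eggs': 1}
```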
When we apply data mining algorithms to these subsets of the data, the result may not be very accurate.

VII. Forecast of the future
There are some challenges that researchers and practitioners will have to deal with during the next years:
Analytics architecture: it is not yet clear how an optimal architecture for analytics systems should deal with historic data and real-time data at the same time. An interesting proposal is the Lambda architecture of Nathan Marz. The Lambda architecture solves the problem of computing arbitrary functions on arbitrary data in real time by decomposing the problem into three layers: the batch layer, the serving layer, and the speed layer. It combines in the same system Hadoop for the batch layer and Storm for the speed layer. The properties of the system are: robust and fault-tolerant, scalable, general, extensible, allows ad hoc queries, requires minimal maintenance, and is debuggable.
Statistical significance: it is important to achieve significant statistical results and not be fooled by randomness. As Efron explains in his book on large-scale inference, it is easy to go wrong with huge data sets and thousands of questions to answer at once.
Distributed mining: many data mining techniques are not trivial to parallelize. To have distributed versions of some methods, a lot of research is needed, with practical and theoretical analysis, to provide new methods.
Time-evolving data: data may be evolving over time, so it is important that Big Data mining techniques are able to adapt and, in some cases, to detect change first. For example, the data stream mining field has very powerful techniques for this task.
Compression: when dealing with Big Data, the quantity of space needed to store it is very relevant. There are two main approaches: compression, where we do not lose anything, and sampling, where we choose the data that is more representative. Using compression we may take more time and less space, so we can consider it as a transformation from time to space. Using sampling we lose information, but the gains in space may be orders of magnitude. For example, Feldman et al. use core sets to reduce the complexity of Big Data problems; core sets are small sets that provably approximate the original data for a given problem. Using merge-reduce, the small sets can then be used for solving hard machine learning problems in parallel.
Visualization: a main task of Big Data analysis is how to visualize the results. As the data is so big, it is very difficult to find user-friendly visualizations. New techniques and frameworks to tell and show stories will be needed, as for example the photographs, infographics and essays in the beautiful book "The Human Face of Big Data".
Hidden Big Data: large quantities of useful data are getting lost since new data is largely untagged and unstructured. The 2012 IDC study on Big Data explains that in 2012, 23% (643 exabytes) of the digital universe would be useful for Big Data if tagged and analyzed; however, currently only 3% of the potentially useful data is tagged, and even less is analyzed.

VIII. CONCLUSION
The amount of data is growing exponentially due to social networking sites, search and retrieval engines, media sharing sites, stock trading sites, news sources and so on. Big Data is becoming the new area for scientific data research and for business applications. Data mining techniques can be applied to Big Data to acquire useful information from large data sets.
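The sampling approach mentioned under "Compression" above can be illustrated with reservoir sampling, a classic one-pass technique. The sketch below is a generic textbook version (Algorithm R), not something taken from the surveyed paper, and the stream and sample size are arbitrary.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Keep a uniform random sample of k items from a stream of unknown length
    using O(k) memory (Algorithm R); one way to realize the sampling trade-off above."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)      # replace an existing element with decreasing probability
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), k=5, seed=42))
```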
They can be used together to acquire a useful picture from the data. Big Data analysis tools like MapReduce over Hadoop and HDFS help organizations.

Chinese translation: A Study of Data Mining with Big Data. Abstract: data has become an important part of every economy, industry, organization, enterprise, function and individual.
Foreign Literature on Data Analysis, with Translation
Reference 1: "The Application of Data Analysis in Business Decision-Making". This reference discusses the importance and application of data analysis in business decision-making.
The study finds that data analysis can yield accurate business intelligence and help enterprises better understand market trends and consumer demand.
By analyzing large amounts of data, enterprises can discover hidden patterns and correlations and thus formulate more competitive product and service strategies.
Data analysis can also provide decision support, helping enterprises make sound decisions in uncertain environments.
Therefore, data analysis has become one of the key elements of modern enterprise success.
Reference 2: "The Application of Machine Learning in Data Analysis". This reference discusses the application of machine learning in data analysis.
The study finds that machine learning can help enterprises analyze large amounts of data more efficiently and discover valuable information in it.
Machine learning algorithms can learn and improve automatically, helping enterprises discover patterns and trends in the data.
Through the application of machine learning, enterprises can predict market demand more accurately, optimize business processes, and make more strategic decisions.
Therefore, the application of machine learning in data analysis is gradually gaining attention and adoption by enterprises.
Reference 3: "The Application of Data Visualization in Data Analysis". This reference discusses the importance and application of data visualization in data analysis.
The study finds that data visualization can present complex data relationships and trends more intuitively.
Visualization can help enterprises better understand their data and discover patterns and regularities in it.
Data visualization can also support data interaction and shared decision-making, improving the efficiency and accuracy of decisions.
Therefore, data visualization plays a very important role in data analysis.
Translated titles. Reference 1: The Application of Data Analysis in Business Decision-making. Reference 2: The Application of Machine Learning in Data Analysis. Reference 3: The Application of Data Visualization in Data Analysis. Translated abstract: these references study the application of data analysis in business decision-making, as well as the roles of machine learning and data visualization in data analysis.
Foreign-Literature Translation (Original and Translated Text): Design of an Electronic Scale Based on a 51-Series Microcontroller
Translated draft 1: Development of a High-Accuracy, Low-Capacity Electronic Scale Based on Resistance Strain-Gauge Load Cells. Baoxiang He, Guirong Lu, Kaibin Chu, Guoqiang Ma.
Abstract: In the optimized design of the strain gauges on the load cell, besides advanced stabilization techniques such as compensation for temperature effects, static overload and computer pattern recognition (CRT) techniques are also used for dynamic simulation and analysis.
This multi-oscillation stress-release method is applied creatively to the load cell during production; thanks to this technique, a load cell with a 30 g capacity can achieve high accuracy and high stability.
With this load cell, the electronic scale based on it achieves about 300,000 divisions and a resolution finer than 0.2 mg.
The performance and accuracy of this load cell far surpass those of similar products on the market, while its price is far lower than that of electromagnetic load cells.
Therefore, this load cell has very broad commercial prospects.
Keywords: design; resistance strain-gauge load cell; accuracy; electronic scale.
1. Introduction
As is well known, the accuracy of the load cell is the key factor determining the accuracy of an electronic scale.
At present, the sensors used for high-accuracy weighing are mainly electromagnetic force-balance load cells.
Low-cost resistance strain-gauge load cells can only be used for low-accuracy weighing.
The errors that mainly affect the accuracy of strain-gauge load cells are creep and temperature drift, especially for low-capacity cells.
Generally, the minimum capacity of a high-accuracy load cell is 300 grams.
The maximum number of divisions of such load cells is only about 50,000, and the minimum resolution is no finer than 0.01 grams.
In short, for ultra-low-capacity load cells, existing design and manufacturing techniques are difficult to apply to the machining and production of such sensitive devices.
It is therefore difficult to make a sufficiently good high-accuracy load cell for balances.
As a result, low-capacity, high-accuracy load cells have always been a hot topic worldwide.
This paper analyzes stress-release and compensation techniques and explores manufacturing technology for low-capacity, high-accuracy strain-gauge load cells.
2. Principles and Methods
A. Release of residual stress
The material of the main component of the load cell is an aluminum bar.
To obtain better overall performance, the aluminum bar is quenched after extrusion.
The residual stress from quenching cannot be fully released by natural aging, and machining and curing also introduce large residual stresses. For an ultra-low-capacity load cell in particular, if this stress is not released in time, it may be released while the load cell is being tested or during final use.
Foreign-Literature Translation on Data Acquisition (Chinese and English)
Foreign-literature translation on data acquisition (containing the English original and the Chinese translation). Source: Txomin Nieva. Data Acquisition Systems [J]. Computers in Industry, 2013, 4(2): 215-237.

English original: DATA ACQUISITION SYSTEMS, Txomin Nieva

Data acquisition systems, as the name implies, are products and/or processes used to collect information to document or analyze some phenomenon. In the simplest form, a technician logging the temperature of an oven on a piece of paper is performing data acquisition. As technology has progressed, this type of process has been simplified and made more accurate, versatile, and reliable through electronic equipment. Equipment ranges from simple recorders to sophisticated computer systems. Data acquisition products serve as a focal point in a system, tying together a wide variety of products, such as sensors that indicate temperature, flow, level, or pressure. Some common data acquisition terms are shown below.

Data collection technology has made great progress in the past 30 to 40 years. For example, 40 years ago, in a well-known college laboratory, the device used to track temperature rises in bronze made of helium was composed of thermocouples, relays, interrogators, a bundle of paper, and a pencil. Today's university students are likely to process and analyze data automatically on PCs. There are many methods you can choose from to collect data. The choice of method depends on many factors, including the complexity of the task, the speed and accuracy you need, the documentation you want, and more. Whether simple or complex, a data acquisition system can operate and play its role.

The old way of using pencil and paper is still feasible in some situations, and it is cheap, readily available, and quick and easy to start. All you need is a digital multimeter (DMM), and you can begin recording data by hand. Unfortunately, this method is prone to errors, collects data slowly, and requires too much manual analysis. In addition, it can only collect data on a single channel; when you use a multi-channel DMM, the system soon becomes very bulky and clumsy. Accuracy depends on the care of the person recording, and you may need to do the scaling yourself; for example, if the DMM is not equipped to handle a temperature sensor directly, you need to look up and apply the conversion proportion yourself. Given these limitations, it is an acceptable method only when you need to run a quick experiment.

Modern strip chart recorders allow you to record data from multiple inputs. They provide long-term paper records of the data, and because the data is in graphic format, they make it easy to collect data on site. Once a strip chart recorder has been set up, most recorders have enough internal intelligence to operate without an operator or computer. The disadvantages are the lack of flexibility and the relatively low precision, often limited to a percentage point; small changes can barely be discerned from the pen trace. For long-term, multi-channel monitoring the recorders can play a very good role, but beyond that their value is limited; for example, they cannot interact with other devices. Other concerns are the maintenance of pens and paper, the supply of paper, and the storage of data, the most important being the consumption and waste of paper. However, recorders are fairly easy to set up and operate, providing a permanent record of data for quick and easy analysis.

Some benchtop DMMs offer selectable scanning capabilities.
The back of the instrument has a slot that receives a scanner card which multiplexes additional inputs, typically 8 to 10 channels. This capability is inherently limited by the instrument's front panel, and its flexibility is also limited because it cannot exceed the number of available channels. An external PC usually handles the data acquisition and analysis.

PC plug-in cards are single-board measurement systems that use an ISA or PCI expansion slot in the PC. They often have reading rates of up to 1,000 per second; 8 to 16 channels are common, and the collected data is stored directly in the computer and then analyzed. Because the card is essentially part of the computer, it is easy to set up the test. PC plug-in cards are also relatively inexpensive, partly because they rely on the host PC to provide power, mechanical housing, and the user interface.

Data collection options
On the downside, PC plug-in cards often have only 12-bit resolution, so you cannot detect small changes in the input signal. In addition, the electronic environment within a PC is often susceptible to noise, high clock rates, and bus noise, and the electrical contacts limit the accuracy of the PC card. These plug-in cards also measure only a limited range of voltages; to measure other input signals, such as temperature and resistance, you may need external signal-conditioning devices. Other considerations include complex calibration and overall system cost, especially if you need to purchase additional signal-conditioning devices or adapt the PC to accept the card. Taking this into account, if your needs fall within the capabilities and limitations of the card, PC plug-in cards provide an attractive method of data collection.

Electronic data loggers are typical stand-alone instruments that, once configured, can measure, record, and display data without the involvement of an operator or computer. They can handle multiple signal inputs, sometimes up to 120 channels. Their accuracy rivals that of benchtop DMMs, operating in the 22-bit, 0.004 percent accuracy range. Some electronic data loggers can scale measurements, check results against user-defined limits, and output control signals.

One advantage of electronic data loggers is their built-in signal conditioning. Most can directly measure several different types of input signal without additional signal-conditioning devices; one channel can monitor thermocouples, RTDs, and voltages. Thermocouple inputs provide the compensation needed for accurate temperature measurements. Loggers are typically equipped with multi-channel cards, and their built-in intelligence helps you set the measurement period and specify the parameters for each channel. Once everything is set up, the electronic data logger can run unattended. The data is stored in internal memory, which can hold 500,000 or more readings.

Connecting to a PC makes it easy to transfer the data to a computer for further analysis. Most electronic data loggers are flexible and simple to configure and operate, and most provide options for operation at remote locations via battery packs or other methods. Because of the A/D conversion technology used, certain electronic data loggers have a lower reading rate, especially compared with PC plug-in cards; however, reading rates of 250 per second are relatively rare.
Keep in mind that many of the phenomena being measured are physical in nature, such as temperature, pressure, and flow, and generally change slowly. In addition, because of the measurement accuracy of electronic data loggers, heavy averaging of readings is not necessary, as it often is with PC plug-in cards.

Front-end data acquisition is often packaged as a module and is typically connected to a PC or controller. Front ends are used in automated tests to collect data and to control and cycle the stimulus signals sent to other test equipment and to the units under test. The efficiency of front-end operation is very high and can match the speed and accuracy of the best stand-alone instruments. Front-end data acquisition comes in many forms, including VXI versions, such as the Agilent E1419A multi-function measurement and control VXI module, as well as proprietary card cages. Although the cost of front-end units has come down, these systems can still be very expensive, and unless you need this level of capability their price can be prohibitive. On the other hand, they do provide considerable flexibility and measurement capability.

Good, low-cost electronic data loggers offer a suitable number of channels (20 to 60) and scan rates that, while relatively low, are sufficient for most engineering applications. Some of the key applications include:
• Product characterization
• Thermal testing of electronic products
• Environmental testing
• Environmental monitoring
• Component characterization
• Battery testing
• Building and computer-facility monitoring

A new system design
The conceptual model of a universal system can be applied to the analysis phase of a specific system to better understand the problem and to specify the best solution more easily based on the specific requirements of that particular system. The conceptual model of a universal system can also be used as a starting point for designing a specific system. Therefore, using a general-purpose conceptual model will save time and reduce the cost of specific system development. To test this hypothesis, we developed a DAS for railway equipment based on our generic DAS conceptual model. In this section, we summarize the main results and conclusions of this DAS development.

We analyzed the device model package. The result of this analysis is a partial conceptual model of a system consisting of a three-tier device model. We analyzed the equipment item package in the equipment context; based on this analysis, we listed a three-level item hierarchy in the conceptual model of the system, where equipment items are specialized for individual pieces of equipment. We analyzed the equipment model monitoring standard package in the equipment context. One of the requirements of this system is the ability to use a predefined set of data to record specific status monitoring reports. We also analyzed the equipment item monitoring standard package in the equipment context. The requirements of the system are: (i) the ability to record condition monitoring reports and event monitoring reports corresponding to the items, which can be triggered by time triggering conditions or event triggering conditions; (ii) the definition of private and public monitoring standards; and (iii) the ability to define custom and predefined train data sets.
Therefore, we introduced "equipment item monitoring standards", "public standards", "special standards", "equipment monitoring standards", "equipment condition monitoring standards", "equipment item condition monitoring standards" and "equipment item event monitoring standards" into the model. Train item triggering conditions, train item time triggering conditions and train item event triggering conditions are specializations of equipment item triggering conditions, equipment item time triggering conditions and equipment item event triggering conditions; and train item data sets, train custom data sets and train predefined data sets are specializations of equipment item data sets, custom data sets and predefined data sets.

Finally, we analyzed observations and monitoring reports in the equipment context. The system's requirement is to record measurements and category observations; in addition, status and event monitoring reports can be recorded. Therefore, we introduced the concepts of observation, measurement, classification observation and monitoring report into the conceptual model of the system.

Our generic DAS conceptual model played an important role in the design of the equipment DAS. We used this model to better organize the data used by the system components, and the conceptual model also made it easier to design certain components of the system. As a result, the implementation contains a large number of design classes that represent the concepts specified in our generic DAS conceptual model. Through an industrial example, the development of this particular DAS demonstrates the usefulness of a generic system conceptual model for developing a specific system.

Chinese translation: Data Acquisition Systems, Txomin Nieva. Data acquisition systems, as the name implies, are products or processes used to collect information in order to document or analyze some phenomenon.
Data Acquisition: Translated Foreign Literature
Appendix A: Foreign Source Material. Data Collection
At present, the management of student apartments in China's colleges and universities is developing toward standardization and marketization, and electricity accidents have occurred. Although some colleges and universities have installed apartment energy-metering and monitoring systems, these systems generally suffer from a low level of monitoring, low billing accuracy, evenly shared electricity charges and a low degree of networking. Therefore, improving the energy-metering monitoring device has become more urgent. This project studies the energy-metering monitoring system for student hostels in colleges and universities and designs an electricity data collector for university student apartments.
Data acquisition, also known as data capture, is the use of a device to collect data from outside a system and input it into an interface within the system. Data acquisition technology is widely applied in many fields; cameras and microphones, for example, are data collection tools. The data being collected are physical quantities, such as temperature, water level, wind speed and pressure, that have been converted into electrical signals; they can be analog or digital. Sampling generally means collecting data repeatedly at the same point at a certain time interval (called the sampling period). The data collected are mostly instantaneous values, but can also be characteristic values over a period of time. Accurate data measurement is the basis of data acquisition. Measurement methods may be contact or non-contact, and the detection elements are varied; whichever method and elements are used, the precondition is that the state of the measured object and the measurement environment are not affected, so as to guarantee the accuracy of the data. The meaning of data acquisition is very broad and includes the acquisition of continuous physical quantities. In computer-aided drafting, surveying and mapping, and design, the process of digitizing graphics or image data may also be called data acquisition; in this case the data collected are geometric quantities (or may include physical quantities such as gray level). [1] In today's fast-growing Internet industry, data acquisition is widely used within the Internet field, and the field of distributed data acquisition has undergone important changes. First, intelligent data acquisition systems for distributed control applications have made great progress at home and abroad. Second, the number of bus-compatible data acquisition plug-in cards is increasing, as is the number of data acquisition systems compatible with personal computers. Various data acquisition machines have appeared at home and abroad, and data acquisition has entered a new era.
With their high-speed data-processing ability and rich peripheral interfaces, digital signal processors (DSPs) are more and more widely used in the field of power-quality analysis to improve real-time performance and reliability. The system described here is centred on a DSP and a microcomputer and realizes the acquisition and analysis of power-system signals. This paper analyses power-system harmonics using an FFT algorithm with windowed interpolation, which improves the accuracy of the power-quality parameters. In the electrical-parameter acquisition circuit, high-accuracy transformers and an improved software synchronous sampling method are used to acquire the electrical parameters.
The system consists of two main components, which complete the data acquisition and logic control: the synchronous sampling and A/D conversion circuit, and
the DSP development board (SY-5402EVM), which completes the data processing. The signal passes through a transformer and an op-amp into the A/D converter; the DSP's multi-channel buffered serial port (McBSP) is connected to the A/D converter for data collection and computation. At the same time, a PLL circuit is used to implement synchronous sampling, which effectively prevents the measurement errors caused by loss of sampling synchronization. The A/D converter chosen for the overall system is the AD73360 from Analog Devices (AD). The chip has six analog input channels, each of which outputs 16-bit digital values. The six channels sample and convert simultaneously and transmit in a time-shared manner, which effectively reduces the phase errors that arise when sampling instants differ. The on-board DSP chip of the SY-5402EVM is TI's 16-bit fixed-point digital signal processor TMS320VC5402. It offers high cost-performance and provides high-speed, bidirectional, multi-channel buffered serial ports that can interface directly with the other serial devices in the system.
Realization of AC sampling: In the field of power-quality analysis, the fast Fourier transform (FFT) is commonly used to analyse power-system harmonics, and the FFT algorithm places strict synchronous-sampling requirements on the signal. The influence of imperfect synchronization is as follows: it is difficult in actual measurement to achieve synchronous sampling and truncation over an integer number of periods, so a spectrum-leakage problem arises that affects measurement accuracy. The signal to be processed is sampled and A/D converted into a digital sequence of finite length, which is equivalent to truncating the original signal by multiplying it by a rectangular window. Truncation in the time domain distorts the frequency domain, and spectrum leakage occurs. When sampling is not synchronous, each harmonic component of the actual signal does not fall exactly on a frequency-resolution point but lies between resolution points; the FFT spectrum, however, is discrete and exists only at those points, and the spectrum elsewhere is unavailable. The FFT therefore cannot yield every harmonic component directly; each component can only be approximated by the value at the neighbouring frequency-resolution point, which causes the picket-fence (fence-effect) error.
Realization of the synchronous sampling signal: According to how the sampling signal is provided, synchronous sampling methods are divided into two kinds, software synchronous sampling and hardware synchronous sampling. In the software method, a microcontroller (MCU) or DSP provides the synchronized sampling pulses: the period T of the measured signal is measured first, the sampling interval is ΔT = T/N (where N is the number of sampling points per cycle), the timer count value is determined from ΔT, and synchronous sampling is realized by means of timer interrupts.
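To make this concrete, the following is a minimal illustrative C sketch, not the firmware described in the paper: the timer clock frequency, the value of N and the use of a single-bin DFT correlation are assumptions. It shows how the timer reload value can be derived from the measured fundamental period, and how the amplitude of the k-th harmonic can be extracted from one synchronously sampled cycle.

    #include <math.h>
    #include <stdint.h>

    #define N  128                     /* assumed samples per fundamental cycle */
    #define PI 3.14159265358979323846

    /* Timer reload value for software synchronous sampling: the fundamental
       period T (seconds) is measured first, the sampling interval is
       dT = T / N, and the timer counts f_timer * dT ticks between interrupts. */
    uint32_t timer_reload(double T, double f_timer)
    {
        return (uint32_t)(f_timer * T / (double)N + 0.5);
    }

    /* Amplitude of the k-th harmonic from one synchronously sampled cycle x[N],
       computed by direct correlation with sine and cosine (one DFT bin).
       With exact synchronous sampling each harmonic lands on a resolution
       point, so no leakage or picket-fence correction is needed. */
    double harmonic_amplitude(const double x[N], int k)
    {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            double ang = 2.0 * PI * k * n / N;
            re += x[n] * cos(ang);
            im -= x[n] * sin(ang);
        }
        return 2.0 * sqrt(re * re + im * im) / N;  /* peak amplitude */
    }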
The advantage of this method is that no hardware synchronization circuit is required and the structure is simple. This project will ultimately be realized on an embedded system to implement electric-power measurement and monitoring, so that the monitoring system meets the networking and intelligence requirements of electricity management; it promotes the development of remote monitoring services and brings a certain degree of social and economic benefit.
For the detection of fundamental reactive current and harmonic currents there are two main approaches: the first is based on instantaneous reactive power theory, and the second on adaptive cancellation techniques. In addition there are other, less mainstream approaches, such as the fast Fourier transform method and the wavelet transform. The principle of the method based on instantaneous power theory is as follows: the three-phase load currents and the A-phase voltage are detected; a coordinate transformation gives the current values in the two-phase stationary coordinate system; the instantaneous active and reactive currents ip and iq are calculated; after a further coordinate transformation the three-phase fundamental active currents are obtained; finally, subtracting the fundamental currents from the load currents yields the fundamental reactive and harmonic currents iah, ibh and ich.
From: Principles of Data Acquisition
Chinese translation: Data Acquisition. At present, the management of apartments in China's colleges and universities is developing toward standardization and marketization. While electricity is being made ever more convenient for students, electricity accidents occur frequently; although some university apartments have installed energy-metering and monitoring systems, these systems generally suffer from a low degree of monitoring, poor billing accuracy, evenly split electricity charges, a low degree of networking and other drawbacks.
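As a supplement to the instantaneous-reactive-power detection method outlined in the excerpt above, here is a minimal C sketch of one computation step. It is illustrative only: sign conventions for the ip-iq transformation vary between references, and the low-pass filtering and inverse transformation that recover the fundamental currents are indicated only as comments.

    #include <math.h>

    /* ia, ib, ic : instantaneous three-phase load currents
       wt         : phase angle of the A-phase voltage (from a PLL or zero crossing) */
    void ipiq_step(double ia, double ib, double ic, double wt,
                   double *ip, double *iq)
    {
        /* Clarke transform: three-phase -> two-phase stationary (alpha, beta) */
        double ialpha = sqrt(2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic);
        double ibeta  = sqrt(2.0 / 3.0) * (sqrt(3.0) / 2.0) * (ib - ic);

        /* Rotate into the frame synchronized with the A-phase voltage; the DC
           parts of ip and iq correspond to the fundamental active and reactive
           currents (one common sign convention is used here). */
        *ip =  ialpha * sin(wt) - ibeta * cos(wt);
        *iq = -ialpha * cos(wt) - ibeta * sin(wt);

        /* In a complete detector, ip and iq would be low-pass filtered to keep
           their DC components, transformed back to the a, b, c frame to give
           the fundamental active currents, and subtracted from ia, ib, ic to
           obtain the fundamental reactive plus harmonic currents iah, ibh, ich. */
    }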
Design and Application of a Multi-channel Data Acquisition and Analysis System: Translated Foreign Literature
Appendix 5: Chinese and English Material
Multi-channel Data Acquisition and Analysis System: Design and Application
Abstract: This paper mainly introduces a multi-channel data acquisition and analysis system composed of one PC and one measuring instrument. The system can test eight products in parallel, which reduces test cost and improves work efficiency. The paper also gives the hardware structure and the software flow diagram of the system, and briefly introduces its application in gyro testing.
Key words: communication protocol; data acquisition; gyro; test
With the development of computer technology and digital measuring instruments, it is now usual for a computer and a measuring instrument to communicate with each other so that data are collected in real time and the computer's powerful computing capability is used to analyse and process the data. Particularly where data volumes are large and measurement times are long, as in gyro drift tests, using the computer to control the measuring instrument automatically and to acquire and analyse the data automatically is especially important: it can save a great deal of manpower and material resources, improve work efficiency and reduce costs. In the conventional test method a measuring instrument can test only one product at a time; that is, a test system composed of one computer and one measuring instrument can only test serially. To test several products at the same time, several such systems are needed; when products are tested in large volume this is inefficient, and building several sets of test systems increases cost. This paper describes an 8-channel data acquisition and analysis system consisting of one PC and one measuring instrument, which can test several products at once; on the basis of a single computer and at no additional cost, it gives full play to the advantages of automatic testing and improves work efficiency.
1 Principle
The system consists of hardware and software. A PC is connected to a measuring instrument through an RS232 port, and the PC's parallel port (LPT) is attached to an 8-way channel selector whose eight connectors are connected to the products under test. The working principle is shown in Figure 1. During testing, the computer controls the 8-way channel selection through the parallel port and opens the different channels in turn; the selected channel transmits its data to the measuring instrument, and the measuring instrument sends the data to the computer through the RS232 port to be saved. One complete cycle over all channels finishes one round of data acquisition, and several products have thereby been tested. (Figure 1: block diagram of system operation.) Throughout the test, all control operations are completed automatically by the software, without human intervention.
2 Hardware design
The system mainly uses the computer's on-board RS232 communication port to connect to the communication port of the digital measuring instrument, and uses the LPT parallel port to control the 8-way channel selector.
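Purely as an illustration of this parallel-port channel control (not code from the original paper), the sketch below writes a 3-bit channel code to the legacy LPT1 data register. The 0x378 address and the use of Linux ioperm()/outb() are assumptions, and the code-to-channel mapping follows Table 1 below.

    /* Illustrative only: select one of the 8 channels through the LPT data port.
       Assumes x86 Linux, sufficient privileges, and the legacy LPT1 base 0x378. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>

    #define LPT1_DATA 0x378   /* assumed parallel-port data register */

    /* channel = 1..8 maps to the 3-bit codes 000..111 of Table 1 */
    static void select_channel(int channel)
    {
        unsigned char code = (unsigned char)((channel - 1) & 0x07);
        outb(code, LPT1_DATA);
    }

    int main(void)
    {
        if (ioperm(LPT1_DATA, 1, 1) != 0) {   /* request access to the port */
            perror("ioperm");
            return EXIT_FAILURE;
        }
        for (int ch = 1; ch <= 8; ch++) {
            select_channel(ch);
            /* ...trigger the instrument over RS232 and read back the data... */
        }
        ioperm(LPT1_DATA, 1, 0);
        return 0;
    }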
The 8-way channel selector consists of a one-of-eight analog switch and its associated circuitry; the control signals are provided by the computer's parallel port according to the relationship shown in Table 1.
Table 1 The relation between port output, binary code and channel selection
Port output   Binary code   Selected channel
0             000           1
1             001           2
2             010           3
3             011           4
4             100           5
5             101           6
6             110           7
7             111           8
The 8-way channel selector can also be implemented with a single-chip microcomputer if additional control is required. The RS232 serial port is chosen for data transmission because it is a standard configuration on both the computer and the measuring instrument, so the two can communicate without additional hardware and it is easy to use. In addition, serial communication transfers only one bit at a time at a standard data voltage level, so data errors are less likely. A parallel port transfers 8 bits of data at a time, so transmission is fast, but the data are more vulnerable to interference. Where the transmission distance is short and the amount of data is large, a parallel interface (such as GPIB or LPT) may be used for communication. In addition, since the LPT parallel port can transmit signals, it is suitable as the control port for channel selection. While the system is working, good synchronization between the channel-control module and the data acquisition module is particularly important, because the data of different channels must be stored in the corresponding data buffer pools; this synchronization is controlled by the software.
3 Software design
The software design is the most important part of the whole system. From the bottom up, the software system can be divided into three tiers: the communication protocol, the functional modules and the user interface. The software design uses Windows multi-threading technology, which effectively shortens the response time of the data acquisition program and increases execution efficiency. The program uses a separate thread for data acquisition, so that real-time acquisition is guaranteed as far as possible, while another thread processes the data at the same time; this avoids the single-threaded limitation that only one function can execute at a time. Especially when the volume of collected data and the data-processing workload are large, multi-threading greatly improves the efficiency of the system as a whole.
3.1 Data acquisition module
The data acquisition module acquires the data of all eight channels to the computer in one cycle and saves each channel number and the corresponding data in a buffer. Its flow chart is shown in Figure 2. (Fig. 2: flow diagram of data acquisition.) At the start of the program, the channel-selection control and the data-storage buffer are switched to the same channel at the same time; the eight channels are acquired in a cycle and a command check is made at the end of each cycle; if no stop command has been received, acquisition continues cyclically.
During multi-channel acquisition the data are vulnerable to interference; in particular, during fast channel switching the data tend to fluctuate, as shown in Figure 3. If data are collected at that moment the wrong values will be recorded, so software algorithms must be added to prevent this. For example, an automatic data-tracking algorithm can track the data of each channel to determine whether the channel is in a stable state, acquiring only stable data and discarding fluctuating data. In addition, the software can include filtering algorithms (such as a limiting filter) to filter out data mutations caused by man-made interference or other factors.
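As a concrete illustration of the limiting filter mentioned above (formula (1) in the following paragraph), here is a minimal sketch; the threshold is an assumed, application-specific value.

    /* Limiting filter: if the new sample differs from the previous accepted
       sample by more than MAX_STEP, it is treated as invalid and the previous
       value is kept. MAX_STEP is an assumed, application-specific threshold. */
    #define MAX_STEP 0.5

    double limiting_filter(double new_sample)
    {
        static double last = 0.0;      /* previous accepted value */
        static int initialised = 0;

        if (!initialised) {
            initialised = 1;
            last = new_sample;
            return new_sample;
        }
        double diff = new_sample - last;
        if (diff > MAX_STEP || diff < -MAX_STEP)
            return last;               /* reject the jump, keep previous value */
        last = new_sample;
        return new_sample;
    }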
The limiting filter corresponds to formula (1): when the absolute value of the difference between a newly collected datum and the previous datum exceeds a preset value, the new datum is judged invalid and the previous datum is used in place of the current datum. (Figure 3: data fluctuation during channel switching.)
3.2 Data analysis module
Analysis algorithms, graphics display, print output and other useful functions can be added to the data analysis module. For example, a standard-deviation algorithm for gyro zero (bias) stability can calculate the zero stability in real time during the test and display it in a chart. The zero-stability calculation is given by formula (2), i.e. the standard deviation of the collected zero outputs. An algorithm function is written according to formula (2) and then called in the analysis module. The flow chart of the analysis module is shown in Figure 4. (Figure 4: flow chart of the data analysis process.) Because the system uses multi-threading, the cyclic operation of this module does not affect the operation of the acquisition module. The algorithm functions in this module can also be extended arbitrarily, forming analysis programs adapted to different algorithms. In addition, a friendly user interface is necessary in the software design: the functions are encapsulated and presented to users through a unified interface, which reduces operating difficulty and enhances efficiency.
4 System test results
Figure 5 shows the 8-channel data acquisition and analysis system simultaneously testing two three-axis gyros and one single-axis gyro, seven channels of test data in all. Data acquisition is accurate, data analysis can be carried out at the same time, and the results are shown in real-time charts; the system is user-friendly and easy to operate. (Figure 5: the 8-channel data acquisition and analysis system.)
5 Conclusion
The multi-channel data acquisition and analysis system has simple hardware requirements and is easy to set up; it can be applied to various test situations and can test several products at once, thereby reducing cost and enhancing efficiency. Because multi-threading is used, the acquisition speed of the system depends only on the response speed of the hardware (the instrument) and the communication speed, and has nothing to do with the algorithms in the acquisition and analysis software. PAD programming tools can be used to develop software with data acquisition, data analysis, graphics display, print output and other powerful functions and a friendly user interface. The software is modular and easy to extend: it can be upgraded with different data-analysis algorithms as required while the hardware remains unchanged. The system gives full play to the advantages of mutual communication between computer and measuring instrument and of automatic testing.
Chinese translation: Design and Application of a Multi-channel Data Acquisition and Analysis System. Abstract: This paper introduces an 8-channel data acquisition and analysis system composed of one PC and one measuring instrument.
Data Mining: Translated Foreign Reference Literature
数据挖掘外文翻译参考文献(文档含中英文对照即英文原文和中文翻译)外文:What is Data Mining?Simply stated, data mining refers to extracting or “mining” knowledge from large amounts of data. The term is actually a misnomer. Remember that the mining of gold from rocks or sand is referred to as gold mining rather than rock or sand mining. Thus, “data mining” should have been more appropriately named “knowledge mining from data”, which is unfortunately somewhat long. “Knowledge mining”, a shorter term, may not reflect the emphasis on mining from large amounts of data. Nevertheless, mining is a vivid term characterizing the processthat finds a small set of precious nuggets from a great deal of raw material. Thus, such a misnomer which carries both “data” and “mining” became a popular choice. There are many other terms carrying a similar or slightly different meaning to data mining, such as knowledge mining from databases, knowledge extraction, data / pattern analysis, data archaeology, and data dredging.Many people treat data mining as a synonym for another popularly used term, “Knowledge Discovery in Databases”, or KDD. Alternatively, others view data mining as simply an essential step in the process of knowledge discovery in databases. Knowledge discovery consists of an iterative sequence of the following steps:· data cleaning: to remove noise or irrelevant data,· data integration: where multiple data sources may be combined,· data selection : where data relevant to the analysis task are retrieved from the database,· data transformati on : where data are transformed or consolidated into forms appropriate for mining by performing summary or aggregation operations, for instance,· data mining: an essential process where intelligent methods are applied in order to extract data patterns,· pattern evaluation: to identify the truly interesting patterns representing knowledge based on some interestingness measures, and· knowledge presentation: where visualization and knowledge representation techniques are used to present the mined knowledge to the user .The data mining step may interact with the user or a knowledge base. The interesting patterns are presented to the user, and may be stored as new knowledge in the knowledge base. Note that according to this view, data mining is only one step in the entire process, albeit an essential one since it uncovers hidden patterns for evaluation.We agree that data mining is a knowledge discovery process. However, in industry, in media, and in the database research milieu, the term “data mining” is becoming more popular than the longer term of “knowledge discovery in databases”. Therefore, in this book, we choose to use the term “data mining”. We adopt a broad view of data mining functionality: data mining is the process of discovering interesting knowledgefrom large amounts of data stored either in databases, data warehouses, or other information repositories.Based on this view, the architecture of a typical data mining system may have the following major components:1. Database, data warehouse, or other information repository. This is one or a set of databases, data warehouses, spread sheets, or other kinds of information repositories. Data cleaning and data integration techniques may be performed on the data.2. Database or data warehouse server. The database or data warehouse server is responsible for fetching the relevant data, based on the user’s data mining request.3. Knowledge base. 
This is the domain knowledge that is used to guide the search, or evaluate the interestingness of resulting patterns. Such knowledge can include concept hierarchies, used to organize attributes or attribute values into different levels of abstraction. Knowledge such as user beliefs, which can be used to assess a pattern’s interestingness based on its unexpectedness, may also be included. Other examples of domain knowledge are additional interestingness constraints or thresholds, and metadata (e.g., describing data from multiple heterogeneous sources).4. Data mining engine. This is essential to the data mining system and ideally consists of a set of functional modules for tasks such as characterization, association analysis, classification, evolution and deviation analysis.5. Pattern evaluation module. This component typically employs interestingness measures and interacts with the data mining modules so as to focus the search towards interesting patterns. It may access interestingness thresholds stored in the knowledge base. Alternatively, the pattern evaluation module may be integrated with the mining module, depending on the implementation of the data mining method used. For efficient data mining, it is highly recommended to push the evaluation of pattern interestingness as deep as possible into the mining process so as to confine the search to only the interesting patterns.6. Graphical user interface. This module communicates between users and the data mining system, allowing the user to interact with the system by specifying a data mining query or task, providing information to help focus the search, and performing exploratory data mining based on the intermediate data mining results. In addition, this component allows the user to browse database and data warehouse schemas or datastructures, evaluate mined patterns, and visualize the patterns in different forms.From a data warehouse perspective, data mining can be viewed as an advanced stage of on-1ine analytical processing (OLAP). However, data mining goes far beyond the narrow scope of summarization-style analytical processing of data warehouse systems by incorporating more advanced techniques for data understanding.While there may be many “data mining systems” on the market, not all of them can perform true data mining. A data analysis system that does not handle large amounts of data can at most be categorized as a machine learning system, a statistical data analysis tool, or an experimental system prototype. A system that can only perform data or information retrieval, including finding aggregate values, or that performs deductive query answering in large databases should be more appropriately categorized as either a database system, an information retrieval system, or a deductive database system.Data mining involves an integration of techniques from mult1ple disciplines such as database technology, statistics, machine learning, high performance computing, pattern recognition, neural networks, data visualization, informationretrieval, image and signal processing, and spatial data analysis. We adopt a database perspective in our presentation of data mining in this book. That is, emphasis is placed on efficient and scalable data mining techniques for large databases. By performing data mining, interesting knowledge, regularities, or high-level information can be extracted from databases and viewed or browsed from different angles. 
The discovered knowledge can be applied to decision making, process control, information management, query processing, and so on. Therefore, data mining is considered as one of the most important frontiers in database systems and one of the most promising, new database applications in the information industry.A classification of data mining systemsData mining is an interdisciplinary field, the confluence of a set of disciplines, including database systems, statistics, machine learning, visualization, and information science. Moreover, depending on the data mining approach used, techniques from other disciplines may be applied, such as neural networks, fuzzy and or rough set theory, knowledge representation, inductive logic programming, or high performance computing. Depending on the kinds of data to bemined or on the given data mining application, the data mining system may also integrate techniques from spatial data analysis, Information retrieval, pattern recognition, image analysis, signal processing, computer graphics, Web technology, economics, or psychology.Because of the diversity of disciplines contributing to data mining, data mining research is expected to generate a large variety of data mining systems. Therefore, it is necessary to provide a clear classification of data mining systems. Such a classification may help potential users distinguish data mining systems and identify those that best match their needs. Data mining systems can be categorized according to various criteria, as follows.1) Classification according to the kinds of databases mined.A data mining system can be classified according to the kinds of databases mined. Database systems themselves can be classified according to different criteria (such as data models, or the types of data or applications involved), each of which may require its own data mining technique. Data mining systems can therefore be classified accordingly.For instance, if classifying according to data models, we may have a relational, transactional, object-oriented,object-relational, or data warehouse mining system. If classifying according to the special types of data handled, we may have a spatial, time -series, text, or multimedia data mining system , or a World-Wide Web mining system . Other system types include heterogeneous data mining systems, and legacy data mining systems.2) Classification according to the kinds of knowledge mined. Data mining systems can be categorized according to the kinds of knowledge they mine, i.e., based on data mining functionalities, such as characterization, discrimination, association, classification, clustering, trend and evolution analysis, deviation analysis , similarity analysis, etc. A comprehensive data mining system usually provides multiple and/or integrated data mining functionalities.Moreover, data mining systems can also be distinguished based on the granularity or levels of abstraction of the knowledge mined, including generalized knowledge(at a high level of abstraction), primitive-level knowledge(at a raw data level), or knowledge at multiple levels (considering several levels of abstraction). An advanced data mining system should facilitate the discovery of knowledge at multiple levels of abstraction.3) Classification according to the kinds of techniques utilized.Data mining systems can also be categorized according to the underlying data mining techniques employed. 
These techniques can be described according to the degree of user interaction involved (e.g., autonomous systems, interactive exploratory systems, query-driven systems), or the methods of data analysis employed (e.g., database-oriented or data-warehouse-oriented techniques, machine learning, statistics, visualization, pattern recognition, neural networks, and so on). A sophisticated data mining system will often adopt multiple data mining techniques or work out an effective, integrated technique which combines the merits of a few individual approaches.
Translation: What is data mining? Simply put, data mining is the extraction or "mining" of knowledge from large amounts of data.
Data Acquisition System Based on STM32: English Literature
Design of the Data Acquisition System Based on STM32
ABSTRACT
Early detection of failures in machinery equipment is one of the most important concerns in industry. In order to monitor rotating machinery effectively, we developed a signal acquisition system based on STM32 running the uC/OS-II micro-kernel, described in this paper. We give the overall design scheme of the system; the multi-channel vibration signals along the X, Y and Z axes of the rotary shaft can be acquired rapidly and displayed in real time. The system has the advantages of simple structure, low power consumption and miniaturization.
Keywords: STM32; data acquisition; embedded system; uC/OS-II
1. Introduction
Real-time acquisition of vibration in rotating machinery can effectively predict, assess and diagnose the operating state of equipment. By acquiring and analysing vibration data rapidly and in real time, industry can monitor the state of rotating machinery and guarantee the safe running of the equipment. In order to prevent failures, reduce maintenance time and improve economic efficiency, a fault-diagnosis system detects these devices through the acquisition of vibration signals from the rotating machinery, processes the acquired data and then makes a timely judgment of the running state of the equipment; the data acquisition module is the core part of such a fault-diagnosis system [1-4]. In practical industrial applications, the operating parameters of the equipment are acquired in order to monitor its operating state. In traditional data acquisition systems, the data from the acquisition card are generally sent to a computer, and dedicated software is developed for the acquisition. The main contribution of this paper is a platform designed around the STM32, built on ARM technology, which has become a mainstream technology in embedded systems. Data collection is moving toward high real-time performance, multiple parameters and high precision, data storage toward large capacity, miniaturization and portability, and data transmission toward multiple communication modes and long distances. Therefore, in order to meet the multitasking requirements of a practical acquisition system, the novelty of this article is a signal acquisition system based on the STM32 micro-controller running uC/OS-II.
2. Architecture of the data acquisition system
Data acquisition is a key technology for monitoring equipment, and recently a great deal of work has been done on it. An embedded parallel data acquisition system based on an FPGA has been optimally designed so that high-speed and low-speed A/D channels can be divided and allocated reasonably [5].
Another design instead uses a high-speed A/D converter and a Stratix II series FPGA for data collection and processing; its main contribution is the use of Compact Peripheral Component Interconnect, which gives the system the characteristics of modularization, sturdiness and scalability [6]. Where remote control is needed under special conditions, embedded operating system platforms based on Windows CE and uC/OS-II have been used to design remote acquisition and control systems with GPRS wireless technology [7-8]. In order to achieve data sharing among multiple users, an embedded dynamic website for data acquisition management and dissemination has been built on ARM9 and the Linux operating system [9]. A data collection terminal based on the ARM7 microprocessor LPC2290 and the embedded real-time operating system uC/OS-II has been designed to solve the real-time acquisition of multi-channel small signals and their multi-channel transmission [10]. On the other hand, two parallel DSP-based systems are dedicated to data acquisition on rotating machines, in which an internal signal conditioner adapts the sensor output to the input range of the acquisition and the signals are post-processed by the design software; the most frequently used structure is based on a DAS and an FPGA, and such schemes also depend on the cost of the DAS.
In order to meet the market requirements of low power consumption, low cost and mobility, Fig. 1 presents the overall structural design of the data acquisition system in this paper. Through the SPI interface, the system brings the data collected by a three-axis acceleration sensor into the STM32 controller's internal 12-bit A/D conversion module; this process is non-interfering parallel acquisition. The system uses a 240x400 LCD and touch-screen module to display the collected data in real time.
2.1. STM32 micro-controller
A 32-bit RISC STM32F103VET6 is used as the processor in our system. Compared with similar products, the STM32F103VET6 works at 72 MHz and has the characteristics of strong performance, low power consumption, real-time operation and low cost. The processor includes 512 KB of FLASH and 64 KB of SRAM, and communicates through five serial interfaces, including a CAN bus, a USB 2.0 slave-mode port and an Ethernet interface; two RS232 ports are also provided. The system extends an SST25VF016B serial memory through the SPI bus interface, which serves as temporary storage when large amounts of data are collected. Furthermore, the A/D converter has 12-bit resolution, a fastest conversion time of 1 us, and a 3.6 V full scale. In addition, the power-supply circuit, reset circuit, RTC circuit and GPIO ports are designed to ensure the system's needs and normal operation.
2.2. Data acquisition
Whether the machine state is normal or not depends mainly on the vibration signal. In this paper, to acquire the vibration data of the rotating machinery rotor, we used the MMA7455L vibration acceleration transducer from Freescale, which can collect data along the X, Y and Z axes. This kind of acceleration transducer has the advantages of low cost, small size, high sensitivity and large dynamic range with little interference. The MMA7455L mainly consists of a gravity sensing unit and a signal conditioning circuit, and the sensor amplifies the tiny signals before signal preprocessing.
In the data acquisition process of our system, the error in the sampling stage is mainly caused by quantization, and this error depends on the number of bits of the A/D converter: if the maximum voltage is Vmax and the A/D converter has n bits, the quantization step is Q = Vmax/2^n and the quantization error obeys a uniform distribution over [-Q/2, Q/2] [13]. The STM32 used here has up to three built-in 12-bit parallel ADCs, whose theoretical dynamic range is 72 dB; the actual dynamic range is between 54 and 60 dB because 2 or 3 bits are affected by noise, and a dynamic range of 60 dB corresponds to a measurement range of 1000:1. For the vast majority of vibration signals a maximum sampling rate of 10 kHz meets the actual demand, and 8-12-bit ADCs are generally used at higher acquisition frequencies; one contribution of this work is therefore to choose the built-in 12-bit A/D converter, which meets the accuracy requirement of vibration-signal acquisition at lower cost.
3. Software design
3.1. Transplantation of uC/OS-II
In order to guarantee the real-time and safety requirements of data collection, an RTOS whose source code is open and small is adopted in this system. It can easily be tailored, ported and solidified, and its basic functions include task management, resource management, storage management and system management. The embedded RTOS supports 64 tasks, of which at most 56 are user tasks; the four highest and four lowest priorities are reserved for the system. uC/OS-II assigns task priorities according to their importance: the operating system executes tasks in priority order and each task has an independent priority. The kernel is compact, its multitasking performance compares well with others, and it can be ported to processors from 8-bit to 64-bit.
Porting to this system requires modifying three files: OS_CPU_C.H, OS_CPU.C and OS_CPU_A.ASM. The main porting procedure is as follows.
A. OS_CPU_C.H
This file defines the data types and the length and growth direction of the stack for the processor. Because different microprocessors have different word lengths, the uC/OS-II port includes a series of type definitions to ensure portability; the revised code is as follows:
    typedef unsigned char  BOOLEAN;
    typedef unsigned char  INT8U;
    typedef signed   char  INT8S;
    typedef unsigned short INT16U;
    typedef signed   short INT16S;
    typedef unsigned int   INT32U;
    typedef signed   int   INT32S;
    typedef float          FP32;
    typedef double         FP64;
    typedef unsigned int   OS_STK;
    typedef unsigned int   OS_CPU_SR;
For the Cortex-M3 processor, OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL() are defined to disable and enable interrupts, and the stack type OS_STK and the CPU status-register length must be set to 32 bits. In addition, OS_STK_GROWTH is defined so that the stack grows from high addresses toward low addresses.
B. OS_CPU.C
The function OSTaskStkInit() is modified according to the processor; the nine remaining user interface (hook) functions can be left empty if there are no special requirements, and code for them is generated only when OS_CPU_HOOKS_EN is set to 1 in OS_CFG.H. The stack initialization function OSTaskStkInit() returns the new top-of-stack pointer.
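Before turning to the assembly-language port file, it may help to see how application code sits on top of such a port. The sketch below is not taken from the paper: the task names, stack sizes and priorities are assumptions, and it only illustrates how tasks such as the acquisition and display tasks of Section 3.2 are typically created with the standard uC/OS-II API.

    #include "includes.h"            /* uC/OS-II master include used in typical projects (assumption) */

    #define ACQ_TASK_PRIO   5        /* assumed priorities: lower number = higher priority */
    #define DISP_TASK_PRIO  8
    #define TASK_STK_SIZE   256      /* assumed stack size, in OS_STK units */

    static OS_STK AcqTaskStk[TASK_STK_SIZE];
    static OS_STK DispTaskStk[TASK_STK_SIZE];

    static void AcqTask(void *pdata)     /* vibration-signal acquisition task */
    {
        (void)pdata;
        for (;;) {
            /* start an A/D conversion, read X/Y/Z samples, put them in a buffer */
            OSTimeDly(1);                /* yield until the next system tick */
        }
    }

    static void DispTask(void *pdata)    /* LCD display task (lower priority) */
    {
        (void)pdata;
        for (;;) {
            /* take samples from the buffer and refresh the uC/GUI display */
            OSTimeDly(OS_TICKS_PER_SEC / 10);
        }
    }

    int main(void)
    {
        OSInit();                                          /* initialise the kernel */
        OSTaskCreate(AcqTask,  (void *)0,
                     &AcqTaskStk[TASK_STK_SIZE - 1],  ACQ_TASK_PRIO);
        OSTaskCreate(DispTask, (void *)0,
                     &DispTaskStk[TASK_STK_SIZE - 1], DISP_TASK_PRIO);
        OSStart();                                         /* start multitasking, never returns */
        return 0;
    }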
C. OS_CPU_A.ASM
Most of the porting work is completed in this file, where the following functions are modified. OSStartHighRdy() runs the highest-priority ready task: it loads the stack pointer SP from the TCB of the highest-priority task, restores the CPU registers, and hands control to the task function created by the user. OSCtxSw() performs task switching: when a higher-priority task appears in the ready queue, the CPU calls OSCtxSw() to switch to the higher-priority task, and the current task's context is saved on its task stack. OSIntCtxSw() has a similar function to OSCtxSw(); to guarantee the real-time performance of the system it runs the higher-priority task directly when an interrupt occurs, without saving the interrupted context a second time. OSTickISR() handles the clock-tick interrupt, which is needed to schedule a higher-priority task that is waiting for the clock signal. OS_CPU_SR_Save() and OS_CPU_SR_Restore() switch interrupts off and on when entering and leaving critical code; both are used by the critical-section protection macros OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL(). After the above work is completed, uC/OS-II can run on the processor.
3.2. Software architecture
Fig. 2 shows the system software architecture. In order to display the data visually, uC/GUI 3.90 and uC/OS-II are ported to the system; the software contains six tasks: data acquisition, data transmission, LCD display, touch-screen driver, key-press management and the uC/GUI interface. First of all, the task priorities are set and task scheduling is based on priority. The required drivers, such as the A/D driver and the touch-panel driver, and the system initialization must be completed before data acquisition; the initialization includes hardware-platform initialization, system-clock initialization, interrupt-source configuration, GPIO port configuration, serial-port initialization and parameter configuration, and LCD initialization. The acquisition process is as follows: the channel module sends a sampling command to the A/D channel and informs the receiver module that the sampling start command has been sent; the receiver module prepares to receive, and large amounts of data are stored in the storage module; after the first sampling is completed, the channel module sends a sampling-complete command to the receiver module, the receiver sends an interrupt request to the storage module to stop storing data, and the data are then displayed on the LCD touch screen. The data acquisition process is shown in Fig. 3.
4. Experiments
The experiment on the embedded system has been carried out, with the acquired data coming from the MMA7455L accelerometer installed on the test bench of a rotating machine. The acquired data are displayed as shown in Fig. 4 and Fig. 5; the system can select three channels to collect the vibration signals in the three directions of the X, Y and Z axes. In this paper the sampling frequency is 5 kHz, and the vibration signals of the normal state and of an unbalanced state were collected on the same channel. The results show that the system can display the acquired data in real time and rapidly give a preliminary diagnosis.
5. Conclusion
This paper has designed an embedded real-time signal acquisition system for the mechanical failures that occur with high frequency in rotating machines.
The system is based on a low cost microcontroller, Vibration signals is picked by the three axis acceleration sensor which has the performance of low cost and high sensitivity, and the acquisition data from axis x, y, and z. We have designed the system hardware structure, and analyses the working principle of data acquisition module. The proposed system of uC/OS-II realize the data task management and scheduling, and it is compacted with structure and low cost, what's more the system collects the vibration signal and analysis in real-time of the rotating machines, and then quickly gives diagnostic results. AcknowledgementsThis work was supported by The National Natural Science Foundation of China (51175169); China National Key Technology R&D Program(2012BAF02B01); Planned Science and Technology Project of Hunan Province(2009FJ4055);Scientific Research Fund of Hunan Provincial Education Department(10K023).REFERENCES[1] Cheng, L., Yu, H., Research on intelligent maintenance unit of rotary machine, Computer Integrated Manufacturing Systems, vol. 10, Issue: 10, page 1196-1198, 2004.[2] Yu, C., Zhong, Ou., Zhen, D., Wei, F., .Design and Implementation of Monitoring and Management Platform in Embedded Fault Diagnosis System, Computer Engineering, vol. 34 , Issue: 8, page 264-266, 2008.[3]Bi, D., Gui, T., Jun, S., Dynam . Behavior of a High-speed Hybrid Gas Bearing-rotor System for a Rotating ramjet, Journal of Vibration and Shock, vol. 28, Issue: 9, page 79-80, 2009.[4] Hai, L., Jun, S., Research of Driver Based on Fault Diagnosis System Data Acquisition Module, Machine Tool& Hydraulics, vol. 38 , Issue: 13, page 166-168, 2011.[5] Hao, W., Qin, W., Xiao, S., Optimized. Design of Embedded Parallel Data Acquisition System, Computer Engineering and Design, vol. 32, Issue: 5, page 1622-1625, 2011.[6] Lei, S., Ming, N., Design and Implementation of High Speed Data Acquisition System Based on FPGA, Computer Engineering, vol. 37, Issue: 19, page 221-223, 2011.[7] Chao, T., Jun, Z., Ru, G., Design of remote data acquisition and control system based on Window CE, Microcomputer& Its Applications , vol. 30, Issue: 14, page 21-27, 2011.[8]Xiao, W., Bin, W., SMS controlled information collection system based on uC/OS-II, Computer Application, vol. 12, Issue: 31, page 29-31, 2011.[9]Ting,Y., Zhong, C., Construction of Data Collection& Release in Embedded System, Computer Engineering, vol. 33, Issue: 19, page 270-272, 2007.[10]Yong, W., Hao, Z., Peng,D., Design and Realization of Multi-function Data Acquisition System Based on ARM, Process Automation Instrumentation, vol. 32, Issue: 1, page: 13-16, 2010.[11] Betta, G., Liguori, C., Paolillo, A., A DSP-Based FFT Analyzer for the Fault Diagnosis of Rotating Machine Based on Vibration Analysis, IEEE Transaction on Instrumentation and Measurement, vol. 51, Issue: 6, 2002.[12] Contreras-Medina LM., Romero Troncoso RJ., Millan Almarez JR., FPGA Based Multiple-Channel Vibration Analyzer Embedded System for Industrial Application in Automatic Failure Detection, IEEE transactions on International and measurement, vol. 59, Issue: 1, page 63-67, 2008.[13]Chon, W., Shuang, C., Design and implementation of signal detection system based on ARM for ship borne equipment, Computer Engineering and Design, vol. 32, Issue: 4, page: 1300-1301, 2011.[14]Miao, L., Tian, W., Hong, W., Real-time Analysis of Embedded CNC System Based on uC/OS-II, Computer Engineering, vol. 32, Issue: 22, page 222-223, 2006.。
Data Acquisition Systems: Translated Foreign Literature (Chinese-English)
中英文对照外文翻译(文档含英文原文和中文翻译)Data Acquisition SystemsData acquisition systems are used to acquire process operating data and store it on,secondary storage devices for later analysis. Many or the data acquisition systems acquire this data at very high speeds and very little computer time is left to carry out any necessary, or desirable, data manipulations or reduction. All the data are stored on secondary storage devices and manipulated subsequently to derive the variables ofin-terest. It is very often necessary to design special purpose data acquisition systems and interfaces to acquire the high speed process data. This special purpose design can be an expensive proposition.Powerful mini- and mainframe computers are used to combine the data acquisition with other functions such as comparisons between the actual output and the desirable output values, and to then decide on the control action which must be taken to ensure that the output variables lie within preset limits. The computing power required will depend upon the type of process control system implemented. Software requirements for carrying out proportional, ratio or three term control of process variables are relatively trivial, and microcomputers can be used to implement such process control systems. It would not be possible to use many of the currently available microcomputers for the implementation of high speed adaptive control systems which require the use of suitable process models and considerable online manipulation of data.Microcomputer based data loggers are used to carry out intermediate functions such as data acquisition at comparatively low speeds, simple mathematical manipulations of raw data and some forms of data reduction. The first generation of data loggers, without any programmable computing facilities, was used simply for slow speed data acquisition from up to one hundred channels. All the acquired data could be punched out on paper tape or printed for subsequent analysis. Such hardwired data loggers are being replaced by the new generation of data loggers which incorporate microcomputers and can be programmed by the user. They offer an extremely good method of collecting the process data, using standardized interfaces, and subsequently performing the necessary manipulations to provide the information of interest to the process operator. The data acquired can be analyzed to establish correlations, if any, between process variables and to develop mathematical models necessary for adaptive and optimal process control.The data acquisition function carried out by data loggers varies from one to 9 in system to another. Simple data logging systems acquire data from a few channels while complex systems can receive data from hundreds, or even thousands, of input channels distributed around one or more processes. The rudimentary data loggers scan the selected number of channels, connected to sensors or transducers, in a sequential manner and the data are recorded in a digital format. A data logger can be dedicated in the sense that it can only collect data from particular types of sensors and transducers. It is best to use a nondedicated data logger since any transducer or sensor can be connected to the channels via suitable interface circuitry. This facility requires the use of appropriate signal conditioning modules.Microcomputer controlled data acquisition facilitates the scanning of a large number of sensors. 
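As an illustration of how a programmable logger can give each channel its own scan rate (a generic sketch, not part of the original article; the channel list and periods are assumptions), the logger can keep an individual sampling period per channel and sample only the channels that are due on each scheduler tick:

    #include <stdio.h>

    #define NUM_CHANNELS 4

    struct channel {
        const char *name;
        unsigned    period_ticks;   /* sampling period in scheduler ticks    */
        unsigned    countdown;      /* ticks remaining until the next sample */
    };

    /* assumed example: one fast channel and three slower ones */
    static struct channel channels[NUM_CHANNELS] = {
        { "vibration",     1,   1 },   /* sampled every tick                 */
        { "pressure",     10,  10 },
        { "flow",         10,  10 },
        { "temperature", 300, 300 }    /* slowly changing physical quantity  */
    };

    static double read_sensor(int ch) { (void)ch; return 0.0; }  /* placeholder */

    void scheduler_tick(void)
    {
        for (int i = 0; i < NUM_CHANNELS; i++) {
            if (--channels[i].countdown == 0) {
                double value = read_sensor(i);
                printf("%s = %f\n", channels[i].name, value);
                channels[i].countdown = channels[i].period_ticks;
            }
        }
    }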
The scanning rate depends upon the signal dynamics which means that some channels must be scanned at very high speeds in order to avoid aliasing errors while there is very little loss of information by scanning other channels at slower speeds. In some data logging applications the faster channels require sampling at speeds of up to 100 times per second while slow channels can be sampled once every five minutes. The conventional hardwired, non-programmable data loggers sample all the channels in a sequential manner and the sampling frequency of all the channels must be the same. This procedure results in the accumulation of very large amounts of data, some of which is unnecessary, and also slows down the overall effective sampling frequency. Microcomputer based data loggers can be used to scan some fast channels at a higher frequency than other slow speed channels.The vast majority of the user programmable data loggers can be used to scan up to 1000 analog and 1000 digital input channels. A small number of data loggers, with a higher degree of sophistication, are suitable for acquiring data from up to 15, 000 analog and digital channels. The data from digital channels can be in the form of Transistor- Transistor Logic or contact closure signals. Analog data must be converted into digital format before it is recorded and requires the use of suitable analog to digital converters (ADC).The characteristics of the ADC will define the resolution that can be achieved and the rate at which the various channels can be sampled. An in-crease in the number of bits used in the ADC improves the resolution capability. Successive approximation ADC's arefaster than integrating ADC's. Many microcomputer controlled data loggers include a facility to program the channel scanning rates. Typical scanning rates vary from 2 channels per second to 10, 000 channels per second.Most data loggers have a resolution capability of ±0.01% or better, It is also pos-sible to achieve a resolution of 1 micro-volt. The resolution capability, in absolute terms, also depends upon the range of input signals, Standard input signal ranges are 0-10 volt, 0-50 volt and 0-100 volt. The lowest measurable signal varies form 1 t, volt to 50, volt. A higher degree of recording accuracy can be achieved by using modules which accept data in small, selectable ranges. An alternative is the auto ranging facil-ity available on some data loggers.The accuracy with which the data are acquired and logged-on the appropriate storage device is extremely important. It is therefore necessary that the data acquisi-tion module should be able to reject common mode noise and common mode voltage. Typical common mode noise rejection capabilities lie in the range 110 dB to 150 dB. A decibel (dB) is a tern which defines the ratio of the power levels of two signals. Thus if the reference and actual signals have power levels of N, and Na respectively, they will have a ratio of n decibels, wheren=10 Log10(Na /Nr)Protection against maximum common mode voltages of 200 to 500 volt is available on typical microcomputer based data loggers.The voltage input to an individual data logger channel is measured, scaled and linearised before any further data manipulations or comparisons are carried out.In many situations, it becomes necessary to alter the frequency at which particu-lar channels are sampled depending upon the values of data signals received from a particular input sensor. Thus a channel might normally be sampled once every 10 minutes. 
If, however, the sensor signals approach the alarm limit, then it is obviously desirable to sample that channel once every minute or even faster so that the operators can be informed, thereby avoiding any catastrophes. Microcomputer controlledintel-ligent data loggers may be programmed to alter the sampling frequencies depending upon the values of process signals. Other data loggers include self-scanning modules which can initiate sampling.The conventional hardwired data loggers, without any programming facilities, simply record the instantaneous values of transducer outputs at a regular samplingin-terval. This raw data often means very little to the typical user. To be meaningful, this data must be linearised and scaled, using a calibration curve, in order to determine the real value of the variable in appropriate engineering units. Prior to the availability of programmable data loggers, this function was usually carried out in the off-line mode on a mini- or mainframe computer. The raw data values had to be punched out on pa-per tape, in binary or octal code, to be input subsequently to the computer used for analysis purposes and converted to the engineering units. Paper tape punches are slow speed mechanical devices which reduce the speed at which channels can be scanned. An alternative was to print out the raw data values which further reduced the data scanning rate. It was not possible to carry out any limit comparisons or provide any alarm information. Every single value acquired by the data logger had to be recorded eventhough it might not serve any useful purpose during subsequent analysis; many data values only need recording when they lie outside the pre-set low and high limits.If the analog data must be transmitted over any distance, differences in ground potential between the signal source and final location can add noise in the interface design. In order to separate common-mode interference form the signal to be recorded or processed, devices designed for this purpose, such as instrumentation amplifiers, may be used. An instrumentation amplifier is characterized by good common-mode- rejection capability, a high input impedance, low drift, adjustable gain, and greater cost than operational amplifiers. They range from monolithic ICs to potted modules, and larger rack-mounted modules with manual scaling and null adjustments. When a very high common-mode voltage is present or the need for extremely-lowcom-mon-mode leakage current exists(as in many medical-electronics applications),an isolation amplifier is required. Isolation amplifiers may use optical or transformer isolation.Analog function circuits are special-purpose circuits that are used for a variety of signal conditioning operations on signals which are in analog form. When their accu-racy is adequate, they can relieve the microprocessor of time-consuming software and computations. Among the typical operations performed are multiplications, division, powers, roots, nonlinear functions such as for linearizing transducers, rimsmeasure-ments, computing vector sums, integration and differentiation, andcurrent-to-voltage or voltage- to-current conversion. 
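Returning for a moment to the scaling, linearisation and limit-checking that programmable loggers perform in software (described a little earlier in this section), the following generic sketch shows the idea; the calibration points and limits are assumptions, not values from the text:

    #include <stddef.h>

    /* Two-column calibration curve: raw counts -> engineering units (assumed values). */
    static const double cal_raw[]  = { 0.0, 1024.0, 2048.0, 3072.0, 4095.0 };
    static const double cal_eng[]  = { 0.0,   25.0,   52.0,   81.0,  100.0 };
    static const size_t cal_points = sizeof(cal_raw) / sizeof(cal_raw[0]);

    /* Piecewise-linear interpolation over the calibration curve. */
    double to_engineering_units(double raw)
    {
        if (raw <= cal_raw[0])              return cal_eng[0];
        if (raw >= cal_raw[cal_points - 1]) return cal_eng[cal_points - 1];
        for (size_t i = 1; i < cal_points; i++) {
            if (raw <= cal_raw[i]) {
                double f = (raw - cal_raw[i - 1]) / (cal_raw[i] - cal_raw[i - 1]);
                return cal_eng[i - 1] + f * (cal_eng[i] - cal_eng[i - 1]);
            }
        }
        return cal_eng[cal_points - 1];     /* not reached */
    }

    /* Record only values outside the preset limits and flag an alarm condition. */
    int check_limits(double value, double low, double high)
    {
        return (value < low || value > high);   /* nonzero -> log it and raise an alarm */
    }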
Many of these operations can be purchased in available devices as multiplier/dividers, log/antilog amplifiers, and others.When data from a number of independent signal sources must be processed by the same microcomputer or communications channel, a multiplexer is used to channel the input signals into the A/D converter.Multiplexers are also used in reverse, as when a converter must distribute analog information to many different channels. The multiplexer is fed by a D/A converter which continually refreshes the output channels with new information.In many systems, the analog signal varies during the time that the converter takes to digitize an input signal. The changes in this signal level during the conversion process can result in errors since the conversion period can be completed some time after the conversion command. The final value never represents the data at the instant when the conversion command is transmitted. Sample-hold circuits are used to make an acquisition of the varying analog signal and to hold this signal for the duration of the conversion process. Sample-hold circuits are common in multichannel distribution systems where they allow each channel to receive and hold the signal level.In order to get the data in digital form as rapidly and as accurately as possible, we must use an analog/digital (A/D) converter, which might be a shaft encoder, a small module with digital outputs, or a high-resolution, high-speed panel instrument. These devices, which range form IC chips to rack-mounted instruments, convert ana-log input data, usually voltage, into an equivalent digital form. The characteristics of A/D converters include absolute and relative accuracy, linearity, monotonic, resolu-tion, conversion speed, and stability. A choice of input ranges, output codes, and other features are available. The successive-approximation technique is popular for a large number ofapplications, with the most popular alternatives being the counter-comparator types, and dual-ramp approaches. The dual-ramp has been widely-used in digital voltmeters.D/A converters convert a digital format into an equivalent analog representation. The basic converter consists of a circuit of weighted resistance values or ratios, each controlled by a particular level or weight of digital input data, which develops the output voltage or current in accordance with the digital input code. A special class of D/A converter exists which have the capability of handling variable reference sources. These devices are the multiplying DACs. Their output value is the product of the number represented by the digital input code and the analog reference voltage, which may vary form full scale to zero, and in some cases, to negative values.Component Selection CriteriaIn the past decade, data-acquisition hardware has changed radically due to ad-vances in semiconductors, and prices have come down too; what have not changed, however, are the fundamental system problems confronting the designer. Signals may be obscured by noise, rfi,ground loops, power-line pickup, and transients coupled into signal lines from machinery. Separating the signals from these effects becomes a matter for concern.Data-acquisition systems may be separated into two basic categories:(1)those suited to favorable environments like laboratories -and(2)those required for hostile environments such as factories, vehicles, and military installations. 
The latter group, the hostile environments, includes industrial process control systems where temperature information may be gathered by sensors on tanks, boilers, vats, or pipelines that may be spread over miles of facilities. That data may then be sent to a central processor to provide real-time process control. The digital control of steel mills, automated chemical production, and machine tools is carried out in this kind of hostile environment. The vulnerability of the data signals leads to the requirement for isolation and other techniques.
At the other end of the spectrum are laboratory applications, such as test systems for gathering information from gas chromatographs, mass spectrometers, and other sophisticated instruments. Here the designer's problems are concerned with performing sensitive measurements under favorable conditions rather than with protecting the integrity of collected data under hostile conditions.
Systems in hostile environments might require components rated for wide temperature ranges, shielding, common-mode noise reduction, conversion at an early stage, redundant circuits for critical measurements, and preprocessing of the digital data to test its reliability. Laboratory systems, on the other hand, will have narrower temperature ranges and less ambient noise, but their higher accuracies require sensitive devices, and a major effort may be necessary to achieve the required signal-to-noise ratios.
The choice of configuration and components in data-acquisition design depends on a number of factors:
1. Resolution and accuracy required in the final format.
2. Number of analog sensors to be monitored.
3. Sampling rate desired.
4. Signal-conditioning requirements due to environment and accuracy.
5. Cost trade-offs.
Some of the choices for a basic data-acquisition configuration include:
1. Single-channel techniques:
A. Direct conversion.
B. Preamplification and direct conversion.
C. Sample-hold and conversion.
D. Preamplification, sample-hold, and conversion.
E. Preamplification, signal-conditioning, and direct conversion.
F. Preamplification, signal-conditioning, sample-hold, and conversion.
2. Multichannel techniques:
A. Multiplexing the outputs of single-channel converters.
B. Multiplexing the outputs of sample-holds.
C. Multiplexing the inputs of sample-holds.
D. Multiplexing low-level data.
E. More than one tier of multiplexers.
Signal-conditioning may include:
1. Ratiometric conversion techniques.
2. Range biasing.
3. Logarithmic compression.
4. Analog filtering.
5. Integrating converters.
6. Digital data processing.
We shall consider these techniques later, but first we will examine some of the components used in these data-acquisition system configurations.
Multiplexers
When more than one channel requires analog-to-digital conversion, it is necessary to use time-division multiplexing in order to connect the analog inputs to a single converter, or to provide a converter for each input and then combine the converter outputs by digital multiplexing.
Analog Multiplexers
Analog multiplexer circuits allow the time-sharing of analog-to-digital converters between a number of analog information channels. An analog multiplexer consists of a group of switches arranged with their inputs connected to the individual analog channels and their outputs connected in common (as shown in Fig. 1). The switches may be addressed by a digital input code; a simple scan loop over such a multiplexer is sketched below. Many alternative analog switches are available in electromechanical and solid-state forms.
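Before looking at the individual switch types, the channel-addressing idea can be made concrete with the loop below, which scans such a multiplexer from software: it writes a binary channel address, waits for the switch and source to settle, and triggers one conversion per channel. The three hardware helpers are stand-ins for whatever register accesses a particular board would need (they are simulated here so the sketch runs on its own), and the channel count and returned values are invented.

#include <stdio.h>
#include <stdint.h>

#define NUM_CHANNELS 8

static uint8_t current_channel;

/* Hypothetical hardware-access helpers, simulated for illustration only. */
static void     select_channel(uint8_t address) { current_channel = address; }  /* drive mux address lines  */
static void     wait_settling(void)             { /* settling delay would go here */ }
static uint16_t read_adc(void)                  { return (uint16_t)(100u * current_channel); } /* fake data */

int main(void)
{
    uint16_t sample[NUM_CHANNELS];

    for (uint8_t ch = 0; ch < NUM_CHANNELS; ++ch) {
        select_channel(ch);   /* binary channel address to the switch bank  */
        wait_settling();      /* covers break-before-make and settling time */
        sample[ch] = read_adc();
        printf("channel %u: %u counts\n", ch, sample[ch]);
    }
    return 0;
}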
Electromechanical switch types include relays, stepper switches, crossbar switches, mercury-wetted switches, and dry-reed relay switches. The best switching speed is provided by reed relays (about 1 ms). The mechanical switches provide high dc isolation resistance, low contact resistance, and the capacity to handle voltages up to 1 kV, and they are usually inexpensive. Multiplexers using mechanical switches are suited to low-speed applications as well as those having high resolution requirements. They interface well with the slower A/D converters, such as the integrating dual-slope types. Mechanical switches have a finite life, however, usually expressed in number of operations. A reed relay might have a life of 10^9 operations, which would allow a 3-year life at 10 operations per second.
Solid-state switch devices are capable of operating in about 30 ns, and they have a life which exceeds most equipment requirements. Field-effect transistors (FETs) are used in most multiplexers; they have superseded bipolar transistors, which can introduce large voltage offsets when used as switches. FET devices have a leakage from drain to source in the off state and a leakage from gate or substrate to drain and source in both the on and off states. Gate leakage in MOS devices is small compared with other sources of leakage. When the device has a Zener-diode-protected gate, an additional leakage path exists between the gate and source. Enhancement-mode MOSFETs have the advantage that the switch turns off when power is removed from the MUX; junction-FET multiplexers always turn on with the power off. A more recent development, the CMOS (complementary MOS) switch, has the advantage of being able to multiplex voltages up to and including the supply voltages: a ±10 V signal can be handled with a ±10 V supply.
Trade-off Considerations for the Designer
Analog multiplexing has been the favored technique for achieving the lowest system cost. The decreasing cost of A/D converters and the availability of low-cost digital integrated circuits specifically designed for multiplexing provide an alternative with advantages for some applications. A decision on the technique to use for a given system will hinge on trade-offs between the following factors:
1. Resolution. The cost of A/D converters rises steeply as the resolution increases, due to the cost of precision elements. At the 8-bit level, the per-channel cost of an analog multiplexer may be a considerable proportion of the cost of a converter. At resolutions above 12 bits, the reverse is true, and analog multiplexing tends to be more economical.
2. Number of channels. This controls the size of the multiplexer required and the amount of wiring and interconnections. Digital multiplexing onto a common data bus reduces wiring to a minimum in many cases. Analog multiplexing is suited to 8 to 256 channels; beyond this number, the technique is unwieldy and analog errors become difficult to minimize. Analog and digital multiplexing are often combined in very large systems.
3. Speed of measurement, or throughput. High-speed A/D converters can add considerable cost to the system. If analog multiplexing demands a high-speed converter to achieve the desired sample rate, a slower converter for each channel with digital multiplexing can be less costly.
4. Signal level and conditioning. Wide dynamic ranges between channels can be difficult to handle with analog multiplexing.
Signals of less than 1 V generally require differential low-level analog multiplexing, which is expensive, with programmable-gain amplifiers after the MUX operation. The alternative of a fixed-gain converter on each channel, with signal conditioning designed for the channel requirement and with digital multiplexing, may be more efficient.
5. Physical location of measurement points. Analog multiplexing is suited to making measurements at distances up to a few hundred feet from the converter, since analog lines may suffer from losses, transmission-line reflections, and interference. Lines may range from twisted wire pairs to multiconductor shielded cable, depending on signal levels, distance, and noise environment. Digital multiplexing is operable to thousands of miles, with the proper transmission equipment, since digital transmission systems can offer the powerful noise-rejection characteristics required for long-distance transmission.
Digital Multiplexing
For systems with small numbers of channels, medium-scale integrated digital multiplexers are available in TTL and MOS logic families; the 74151 is a typical example. Eight of these integrated circuits can be used to multiplex eight A/D converters of 8-bit resolution onto a common data bus. This digital multiplexing example offers little advantage in wiring economy, but it is lowest in cost, and the high switching speed allows operation at sampling rates much faster than analog multiplexers. The A/D converters are required only to keep up with the channel sample rate, not with the commutating rate. When large numbers of A/D converters are multiplexed, the data-bus technique reduces system interconnections; this alone may in many cases justify multiple A/D converters. Data can be bussed onto the lines in bit-parallel or bit-serial format, as many converters have both serial and parallel outputs. A variety of devices can be used to drive the bus, from open-collector and tri-state TTL gates to line drivers and optoelectronic isolators. Channel-selection decoders can be built up from 1-of-16 decoders to the required size. This technique also provides additional reliability, in that a failure of one A/D converter does not affect the other channels. An important requirement is that the multiplexer operate without introducing unacceptable errors at the sample-rate speed. For a digital MUX system, one can determine the speed from the propagation delays and the time required to charge the bus capacitance; a rough estimate of this kind is sketched below.
Analog multiplexers can be more difficult to characterize. Their speed is a function not only of internal parameters but also of external parameters such as channel source impedance, stray capacitance, the number of channels, and the circuit layout. The user must be aware of the limiting parameters in the system to judge their effect on performance. The non-ideal transmission and open-circuit characteristics of analog multiplexers can introduce static and dynamic errors into the signal path. These errors include leakage through switches, coupling of control signals into the analog path, and interactions with sources and following amplifiers; moreover, the circuit layout can compound these effects. Since analog multiplexers may be connected directly to sources which may have little overload capacity or poor settling after overloads, the switches should have a break-before-make action to prevent the possibility of shorting channels together.
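Returning briefly to the digital MUX case, the speed estimate mentioned above can be pictured with a short calculation: the time per transfer is taken as the gate propagation delay plus the time to charge the bus capacitance through the driver's output resistance to within half an LSB. All of the component values below are assumed, chosen only to show the arithmetic.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double t_prop = 30e-9;    /* decoder and gate propagation delay, s (assumed) */
    double r_out  = 100.0;    /* bus driver output resistance, ohms (assumed)    */
    double c_bus  = 150e-12;  /* total bus capacitance, farads (assumed)         */
    int    nbits  = 8;        /* resolution of the words on the bus              */

    /* Exponential RC settling to within 1/2 LSB of an n-bit word. */
    double t_settle = r_out * c_bus * log(pow(2.0, nbits + 1));
    double t_total  = t_prop + t_settle;

    printf("settling %.1f ns, total %.1f ns -> about %.1f million words/s\n",
           t_settle * 1e9, t_total * 1e9, 1e-6 / t_total);
    return 0;
}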
For the same shorting reason, it may also be necessary to avoid shorted channels when power is removed, so a channels-off-at-power-down characteristic is desirable. In addition to the channel-addressing lines, which are normally binary-coded, it is useful to have an inhibit or enable line to turn all switches off regardless of the channel being addressed. This simplifies the external logic necessary to cascade multiplexers and can also be useful in certain modes of channel addressing. Another requirement for both analog and digital multiplexers is tolerance of line transients and overload conditions, and the ability to absorb the transient energy and recover without damage.
数据采集系统
数据采集系统是用来采集数据、进行处理并存储到二级存储设备中以供后续分析的系统。
RFID技术外文翻译文献
RFID技术外文翻译文献(文档含中英文对照即英文原文和中文翻译)
原文:Current RFID Technology
This section describes which parts RFID tags consist of, how they work in principle, and what types of tags exist. It focuses on how tags are powered and what frequency ranges are used. The section concludes by covering a few important standards.
RFID transponders (tags) in general consist of a microchip, an antenna, a case, and a battery (for active tags only). The size of the tag depends mostly on the antenna, whose size and form depend on the frequency the tag is using. The size of a tag also depends on its area of use: it can range from less than a millimeter for implants to the size of a book in container logistics. In addition to the microchip, some tags also have rewritable memory attached, in which the tag can store updates between reading cycles or new data such as serial numbers.
An RFID tag is shown in figure 1. The antenna is clearly visible; as said before, the antenna has the largest impact on the size of the tag. The microchip is visible in the center of the tag, and since this is a passive tag it does not have an internal power source.
In principle an RFID tag works as follows: the reading unit generates an electromagnetic field which induces a current in the tag's antenna. The current is used to power the chip. In passive tags the current also charges a capacitor which assures uninterrupted power for the chip; in active tags a battery replaces the capacitor. The difference between active and passive tags is explained shortly. Once activated, the tag receives commands from the reading unit and replies by sending its serial number or the requested information. In general the tag does not have enough energy to create its own electromagnetic field; instead it uses backscattering to modulate (reflect/absorb) the field sent by the reading unit. Because most fluids absorb electromagnetic fields and most metals reflect those fields, the reading of tags in the presence of those materials is complicated.
During a reading cycle, the reader has to continuously power the tag. The created field is called the continuous wave, and because the strength of the field decreases with the square of the distance, the readers have to use rather large power. That field overpowers any response a tag could give, so tags reply on side channels which are located directly below and above the frequency of the continuous wave.
1. Energy Sources
We distinguish three types of RFID tags in relation to power or energy: passive, semi-passive, and active.
Passive tags do not have an internal power source and therefore rely on the power induced by the reader. This means that the reader has to keep up its field until the transaction is completed. Because of the lack of a battery, these tags are the smallest and cheapest tags available; however, this also restricts the reading range to between 2 mm and a few meters. As an added benefit, those tags are also suitable for production by printing. Furthermore, their lifespan is unlimited since they do not depend on an internal power source.
The second type of tags is semi-passive tags. Those tags have an internal power source that keeps the microchip powered at all times. This has many advantages: because the chip is always powered, it can respond faster to requests, thereby increasing the number of tags that can be queried per second, which is important for some applications.
Furthermore, since the antenna is not required for collecting power, it can be optimized for backscattering, thereby increasing the reading range. Last but not least, since the tag does not draw any energy from the field, the backscattered signal is stronger, increasing the range even further. Because of the last two reasons, a semi-passive tag usually has a larger range than a passive tag.
The third type of tags is active tags. Like semi-passive tags they contain an internal power source, but they use the supplied energy both to power the microchip and to generate a signal on the antenna. Active tags that send signals without being queried are called beacons. An active tag's range can be tens of meters, making it ideal for locating objects or serving as a landmark point. The lifetime is up to 5 years.
2. Frequency Bands
RFID tags fall into three regions with respect to frequency: low frequency (LF, 30-500 kHz), high frequency (HF, 10-15 MHz), and ultra-high frequency (UHF, 850-950 MHz, 2.4-2.5 GHz, 5.8 GHz).
Low frequency tags are cheaper than any of the higher frequency tags. They are fast enough for most applications; however, for larger amounts of data the time a tag has to stay in a reader's range will increase. Another advantage is that low frequency tags are least affected by the presence of fluids or metal. The disadvantage of such tags is their short reading range. The most common frequencies used for low frequency tags are 125-134.2 kHz and 140-148.5 kHz.
High frequency tags have higher transmission rates and ranges, but also cost more than LF tags. Smart tags are the most common members of this group, and they work at 13.56 MHz.
UHF tags have the highest range of all tags: from 3-6 meters for passive tags to 30+ meters for active tags. In addition, the transmission rate is very high, which allows a single tag to be read in a very short time. This feature is important where tagged entities are moving at high speed and remain only for a short time in a reader's range. UHF tags are also more expensive than any other tag and are severely affected by fluids and metal. Those properties make UHF mostly useful in automated toll collection systems. Typical frequencies are 868 MHz (Europe), 915 MHz (USA), 950 MHz (Japan), and 2.45 GHz.
Frequencies for LF and HF tags are license-exempt and can be used worldwide; however, frequencies for UHF tags differ from country to country and require a permit.
3. Standards
The wide range of possible applications requires many different types of tags, often with conflicting goals (e.g. low cost vs. security). That is reflected in the number of standards. A short list of RFID standards follows: ISO 11784, ISO 11785, ISO 14223, ISO 10536, ISO 14443, ISO 15693, ISO 18000. Note that this list is not exhaustive. Since RFID technology is not directly Internet related, it is not surprising that there are no RFCs available. The recent hype around RFID technology has resulted in an explosion of patents. Currently there are over 1800 RFID-related patents issued (from 1976 to 2001) and over 5700 patents describing RFID systems or applications are backlogged.
4. RFID Systems
An RFID reader and a few tags are in general of little use. The retrieval of a serial number does not provide much information to the user, nor does it help to keep track of items in a production chain. The real power of RFID comes in combination with a backend that stores additional information, such as descriptions of products and where and when a certain tag was scanned; a minimal sketch of such a scan-event store follows.
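As a rough illustration of that backend role, and not of any particular RFID middleware, the sketch below records which tag was seen by which reader and when, in a small in-memory table; a real backend would use a persistent database behind a defined application interface. All identifiers are invented.

#include <stdio.h>
#include <string.h>
#include <time.h>

typedef struct {
    char   tag_id[17];     /* serial number reported by the tag */
    char   reader_id[9];   /* which reader saw it               */
    time_t when;           /* when it was seen                  */
} ScanEvent;

#define MAX_EVENTS 1000
static ScanEvent db[MAX_EVENTS];
static int       db_count;

/* Called by the reader-facing layer whenever a tag replies. */
static void record_scan(const char *tag_id, const char *reader_id)
{
    if (db_count >= MAX_EVENTS) return;     /* a real backend would persist instead */
    ScanEvent *e = &db[db_count++];
    snprintf(e->tag_id, sizeof e->tag_id, "%s", tag_id);
    snprintf(e->reader_id, sizeof e->reader_id, "%s", reader_id);
    e->when = time(NULL);
}

int main(void)
{
    record_scan("04A1B2C3D4E5F607", "DOCK-01");   /* invented tag and reader IDs */
    record_scan("04A1B2C3D4E5F607", "SHELF-12");
    for (int i = 0; i < db_count; ++i)
        printf("tag %s seen by %s at %s",
               db[i].tag_id, db[i].reader_id, ctime(&db[i].when));
    return 0;
}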
In general, an RFID system has the structure depicted in figure 2. RFID readers scan tags and then forward the information to the backend. The backend in general consists of a database and a well-defined application interface. When the backend receives new information, it adds it to the database and, if needed, performs some computation on related fields. The application retrieves data from the backend. In many cases the application is collocated with the reader itself. An example is the checkout point in a supermarket (note that this example uses barcodes instead of RFID tags since they are more common; however, the system would behave in exactly the same way if tags were used). When the reader scans the barcode, the application uses the derived identifier to look up the current price. In addition, the backend provides discount information for qualifying products. The backend also decreases the number of available products of that kind and notifies the manager if the amount falls below a certain threshold.
This section described how RFID tags work in general, what types of tags exist, and how they differ. The three frequency ranges that RFID tags typically use are LF, HF, and UHF. The difference between passive, semi-passive, and active tags was also explained, and their advantages and disadvantages were compared. The section concluded by looking at different standards and showed the great interest of the industry by counting the number of issued and backlogged patents [US Patent Office].
翻译:当前的RFID技术
该节描述的是RFID标签由哪些部分组成、其工作原理以及存在哪些类型的标签,并关注标签的供电方式和所使用的频率范围。
传感器技术外文文献及中文翻译
Sensor technology
A sensor is a device which produces a signal in response to its detecting or measuring a property, such as position, force, torque, pressure, temperature, humidity, speed, acceleration, or vibration.
Traditionally, sensors (such as actuators and switches) have been used to set limits on the performance of machines. Common examples are (a) stops on machine tools to restrict work-table movements, (b) pressure and temperature gages with automatic shut-off features, and (c) governors on engines to prevent excessive speed of operation. Sensor technology has become an important aspect of manufacturing processes and systems.
It is essential for proper data acquisition and for the monitoring, communication, and computer control of machines and systems.
Because they convert one quantity to another, sensors are often referred to as transducers. Analog sensors produce a signal, such as a voltage, which is proportional to the measured quantity; digital sensors have numeric or digital outputs that can be transferred to computers directly. A brief scaling sketch for the analog case follows.
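As a minimal illustration of the analog case, the sketch below converts one ADC code back to a voltage and then scales it into engineering units using a sensitivity and an offset; these parameters describe a hypothetical pressure transducer, not a real part, and the converter resolution and reference are likewise assumed.

#include <stdio.h>

#define ADC_BITS 10          /* assumed converter resolution           */
#define VREF     5.0         /* assumed reference voltage, volts       */

#define SENS_V_PER_KPA 0.02  /* assumed sensitivity: 20 mV per kPa     */
#define OFFSET_V       0.5   /* assumed sensor output at zero pressure */

static double code_to_pressure_kpa(unsigned code)
{
    double volts = VREF * code / ((1 << ADC_BITS) - 1);  /* ADC code -> volts */
    return (volts - OFFSET_V) / SENS_V_PER_KPA;          /* volts -> kPa      */
}

int main(void)
{
    unsigned code = 614;                                 /* one raw conversion result */
    printf("code %u -> %.1f kPa\n", code, code_to_pressure_kpa(code));
    return 0;
}

A digital sensor would skip this step entirely and deliver the numeric value directly.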
外文翻译-----单片机数据采集接口
附录二 外文原文及翻译
Single-Chip Data Acquisition Interface
Gintaras Paukstaitis
Abstract
This paper presents a single-chip data acquisition interface. It is intended to input from one to eight analog signals into the RAM of an IBM PC or compatible computer. The maximal signal sampling rate is 80 kHz. The interface has programmable gain for the analog signals as well as a programmable sampling rate and number of channels. Some functional units were designed by synthesis from VHDL with the help of Synopsys. The interface is based on the 1 µm CMOS process from ATMEL-ES2. It was verified using the design kit for Cadence DFWII. The Place & Route tools from Cadence were used to obtain the circuit layout.
Table of Contents
Abstract
1. Introduction
2. Steps of Designing
3. Analog Part
4. Digital Part
5. Interface Testing
6. Creation of Layout
7. Technical Data
8. Conclusions
9. Acknowledgements
10. References
1. Introduction
Nowadays VLSI units are widely used throughout the world. They are really important for miniaturisation: redesigning a circuit built from several ICs into a VLSI chip reduces its area many times, and VLSI itself is becoming relatively cheaper. A unit built with VLSI also suffers less damage and uses less power. The use of CAD makes the design of complicated ICs easier and faster. Cheaper computers give not only big companies and educational institutions but also medium-sized firms the opportunity to obtain servers. This progress encouraged the creation of such complex circuit-design programs as Synopsys and Cadence. Using them, it is possible to design circuits suitable for fabrication or layout creation. Synopsys simulates functions described in VHDL and synthesises from that description circuits which can be built from elements of the Cadence libraries. It then suffices to transfer them to Cadence and to create the layout of the IC. The steps of Cadence designing are illustrated in Fig. 1.
The single-chip data acquisition interface was designed according to the basic circuit of a data acquisition board designed by the Department of Applied Electronics at Kaunas University of Technology and used in medicine. The created single-chip interface has better electrical parameters, which is why it could be used more widely. The prototype board was designed with TTL elements; the single-chip interface is designed with CMOS elements. No complicated problems arose while converting the circuit. The delay of CMOS elements is smaller than that of TTL, so the signal delays did not grow and did not change the original operation of the circuit. The ISA bus signals of the IBM PC use TTL logic levels, so the interface should be connected through buffers to reconcile the TTL and CMOS logic levels.
2. Steps of Designing
The circuit was designed according to a basic circuit, which is why the semi-custom design method was used. The flow chart of the interface is shown in Fig. 2.
It was necessary to use eight operational amplifiers (OAs) to fit the eight analog signals to the A/D converters' limits. Each OA has a programmable gain; in many cases this lets the analog signal be fed to the A/D converters without any additional amplifiers. The gain of every OA is set separately by the Gain Control Block. Two converters change the analog signals into digital form; each converter has four switchable inputs and works by successive comparison of every bit. The Channel Control Block establishes the order of signal switching.
The programmable interval timer establishes the frequency of signal switching as well as the data sampling rate. It has three counters, which work in frequency-dividing and one-shot modes.
The dividing coefficients of the timer are set through the internal bus. The length of a dividing coefficient is 16 bits. The timer divides an 894 kHz clock signal, so the minimal interface sampling rate is FMIN = 894 kHz / 2^16 ≈ 14 Hz (the short calculation sketched below illustrates the relation). The maximal sampling rate is limited by the speed of the A/D converters and equals FMAX = 80 kHz. The gain of the OAs, the sampling rate, and the number of switched channels are set by sending control words to ports established by the Address Decoding Block. Data is fed to the PC in single Direct Memory Access (DMA) mode. The DMA controller handles the protocol on the PC side; the DMA Control Block is responsible on the interface side. The Clock Signal Block sets a clock frequency of 1.8 MHz for the converters and 0.9 MHz for the timer.
The control logic consists of simple gates and flip-flops, so the gates and flip-flops of the ES2 1 µm CMOS element library were used to design it. The 1 µm CMOS ES2 technology library was chosen because of its wide choice of analog elements for semi-custom design. However, the ES2 library lacks some functional elements which were used in the circuit, for example the Intel 8253 programmable interval timer, a binary counter, and an address decoder. These elements were therefore described in VHDL, and the necessary circuits were synthesised from elements of the 1 µm CMOS ES2 technology library with the assistance of Synopsys. The EDIF of the circuits was transferred to Cadence. Connected with the remaining control logic and with the analog signal-converting part, they formed a fully functioning interface. The stages of the design are shown.
3. Analog Part
Bipolar analog voltage signals are shifted into a unipolar 0 to +5 V range in the analog part of the interface. As the converter is made of CMOS elements and its power supply is 0 and +5 V, it can only convert signals within the 0 to +5 V limits. In order to reduce the conversion error, the converters are given analog signals which should be as close as possible to those limits. The programmable OAs amplify the analog signals. Each has 16 possible gains, selected with a four-bit code. They have a non-inverting input with a pad for the external analog signal. To change the alternating voltage (about ±2.5 V) into a 0 to +5 V signal, the "virtual ground" pad of the OA is connected to +2.5 V and to the signal source "ground" (whose voltage must therefore be 2.5 V). The design of the interface was simulated with the Verilog-XL program, which simulates only digital signals; therefore, during simulation the analog signals were described as 8-bit digital vectors, and Verilog HDL models of the analog elements were used. The HDL models are exchanged for layout models when the layout is created. The A/D converter of the ES2 library is divided into two parts: the analog part consists of a D/A converter and a comparator, while the control logic and registers are in the digital part. That is why only the analog part of the converters is changed when the layout is being created.
4. Digital Part
The control block of the interface was designed by replacing the discrete components of the board with the corresponding chip components of the ES2 library. Some changes were made because the analog elements of the ES2 library and of the prototype board are controlled differently. The timer was described in VHDL for its design. Three models were created: two for clock-frequency division by coefficients of 16-bit and 8-bit length, and one for the one-shot mode. The length of the control word is 8 bits. Standard packages of the IEEE library were used for the description of the models.
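The following short calculation illustrates the relation just given between the 16-bit dividing coefficient and the sampling rate. The helper that picks a divisor for a requested rate is only an illustration of the arithmetic; it is not part of the interface's actual driver or register map.

#include <stdio.h>

#define TIMER_CLOCK_HZ 894000.0
#define MAX_DIVISOR    65536UL     /* largest 16-bit dividing coefficient */

static unsigned long divisor_for_rate(double rate_hz)
{
    unsigned long div = (unsigned long)(TIMER_CLOCK_HZ / rate_hz + 0.5);
    if (div < 1) div = 1;
    if (div > MAX_DIVISOR) div = MAX_DIVISOR;
    return div;
}

int main(void)
{
    double wanted = 10000.0;                       /* requested sampling rate, Hz */
    unsigned long div = divisor_for_rate(wanted);

    printf("divisor %lu -> actual rate %.1f Hz\n", div, TIMER_CLOCK_HZ / div);
    printf("minimum rate: %.1f Hz\n", TIMER_CLOCK_HZ / MAX_DIVISOR);   /* about 14 Hz */
    return 0;
}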
Using these standard packages made operations on the vector data easier. The VHDL models were simulated with the Synopsys VHDL Debugger. Functionally correct VHDL models of the timer counters were then synthesised using elements of the ES2 library. Optimisation was performed during synthesis. Because the signal delay of the circuits (a few nanoseconds) is small compared with the clock period (about 1.2 µs), optimisation was only worthwhile for small area, and the set_max_area command was used for this goal. The area report summary of the 16-bit timer counter synthesis is shown in Table 1. It is clear that the number of counter elements becomes smaller by approximately 13%, but their area becomes smaller by only 1.5%. The reason is that the number of elements was reduced by reducing the combinational logic, and a combinational logic element takes much less area than a non-combinational one. Besides, some elements are often replaced by a single element with the same function but not much smaller area; for example, two OR gates and one AND gate are replaced by one OR-AND element.
While synthesising the binary counter whose purpose is to divide the external clock signal for the converters and timer, commands were used which place buffers on the output signal wires. This is done because the clock signal is delivered to many flip-flops (in the timer). The initially synthesised circuit, the circuit with additional buffers, and the number of elements removed are shown in Fig. 4. The EDIF of the synthesised functional elements was transferred to Cadence, where it was connected with the control logic and the analog elements.
Table 1. Summary of the Counter's Area Optimisation
5. Interface Testing
Full operation was simulated with Verilog-XL for the verification of the interface. Test programs were written in STL: control words are fed to the OAs, the Channel Control Block and the timer, and the data are scanned. The single-chip interface works correctly and has the technical data shown in Table 2.
6. Creation of Layout
The analog elements used in the layout were changed from Verilog HDL models to physical ones. They are placed on the periphery of the chip because they have pads which are connected to the IC package's pins. The pads of the digital signals are placed separately from the analog elements. The reason is that the analog elements have two power-supply rails, while the digital pads have four rails. The corner elements which supply power to the periphery pads also have four rails. Therefore the analog elements are separated from the corner elements by special elements, which also provide the analog power supply for the ADCs and OAs. A designer-guided automatic method was used for the creation of the layout, using the automatic standard-logic placement and routing tools of Cadence. To reduce the influence of noise, the region for standard logic was placed as far as possible from the analog elements. The analog elements are connected among themselves outside the IC; if the on-chip OA parameters are not sufficient, it is possible to use externally placed OAs. The layout of the chip is shown in Fig. 5. The chip has much empty area because its size is limited by the periphery pads. The total area required is 21.5 mm2 (4.7 × 4.6 mm), with an active area of 1.6 mm2 (1.39 × 1.17 mm).
7. Technical Data
8. Conclusions
In this paper I have presented a single-chip analog data acquisition interface. Complex functional blocks were described in VHDL, and with the help of Synopsys fully functional units were synthesised. Since speed was not critical, the optimisation was done for small area. After being transferred to Cadence, the synthesised units worked according to the set function.
All circuits of the interface, including the models of the analog elements, were verified with Verilog-XL. The chip layout, based on the 1.0 µm CMOS process from ATMEL-ES2, was created. My diploma thesis was based on this project.
9. Acknowledgements
Thanks to prof. R. Šeinauskas for his guidance, to dipl. eng. A. Mačiulis for giving me the basic circuit of the prototype board, and to assoc. prof. R. Benisevičiūtė for her valuable suggestions.
10. References
[1] Data Acquisition Boards Catalogue. Keithley MetraByte, 1996-1997, vol. 28.
[2] Zainalabedin Navabi. Beginning VHDL: An Introduction Language Concept, Boston, Massachusetts, 1994.
[3] User Guide for the ES2 0.7 µm/1.0 µm CMOS Library Design Kit on CADENCE DFWII Software (Design Kit/User Guide Version: 4.1e1), July 1996.
单片机数据采集接口
摘要
本文提出了一种单芯片的数据采集接口。
数据采集技术外文翻译文献(文档含中英文对照即英文原文和中文翻译)
译文:数据采集系统
数据采集系统,正如其名字所暗示的,是用来采集信息以记录或分析某种现象的产品或过程。
在最简单的形式中,技术人员将烤箱的温度记录在一张纸上就是数据采集。
随着技术的发展,通过电子设备,这个过程已经得到简化和变得比较精确、多用途和可靠。
设备从简单的存储器发展到复杂的电脑系统。
数据采集产品是系统中的汇聚点,它把各种各样的产品联系在一起,例如指示温度、流量、液位或压力的传感器。
数据采集技术在过去30到40年以来已经取得了很大的飞跃。
举例来说,在40年以前,在一个典型的大学实验室里,用来跟踪钠钨青铜坩埚中温度上升情况的装置是由热电偶、电桥、查找表、一叠纸和一支铅笔组成的。
今天的大学学生很可能在PC机上自动处理和分析数据,有很多种可供你选择的方法去采集数据。
至于选择哪一种方法,取决于多种因素,包括任务的复杂度、你所需要的速度和精度,以及你想要的记录文档等。
无论是简单的还是复杂的,数据采集系统都能够运行并发挥它的作用。
用铅笔和纸的旧方式对于一些情形仍然是可行的,而且它便宜、易获得、快速和容易开始。
你所需要做的只是接上一台数字万用表(DMM),然后开始用手记录数据。
不幸的是这种方法容易发生错误、采集数据变慢和需要太多的人工分析。
此外,它只能采集单通道数据;虽然你可以使用多台DMM,但系统很快就会变得庞大而笨拙。
精度取决于记录者的细心程度,而且你可能需要手动对输入进行换算。
举例来说,如果DMM没有设置成能处理温度传感器,就需要进行手动换算。
考虑到这些限制,只有当你需要实行一个快速实验时,它才是一个可接受的方法。
现代多种版本的长条图表记录仪允许你从多个输入取得数据。
它们提供数据的永久纸质记录,而且由于数据是图形格式,便于你发现变化趋势。
一旦建立了长条图表记录仪,在没有操作员或计算机的情况下,大多数记录仪具有足够的内部智能运行。
缺点是缺乏灵活性且精度相对较低,通常只有百分之几。
在笔绘曲线上,你通常只能分辨出不大的变化。
当长时间监控少数几个通道时,记录仪表现良好;除此之外,它们的价值就比较有限。
举例来说,他们不能够与另外的装置轮流作用。
其他的顾虑就是笔和纸的维护,纸的供给和数据的存储,最重要的是纸的滥用和浪费。
然而,记录仪相当容易建立和操作,为数据快速而简单的分析提供永久的记录。
一些 benchtop DMMs 提供可选择的扫描能力。
仪器背面有一个插槽,可以插入一块能在多路输入之间进行多路转换的扫描卡,通常为8到10个通道。
DMM的精度以及仪器前面板固有的功能得以保留。
它的柔韧性也受到限制,因为它不能超过可用通道数。
外部的PC机通常处理数据采集和分析。
PC插卡是单板测量系统,它利用PC机内的ISA或PCI总线扩展插槽。
它们的读数速率通常可高达每秒10万次。
8到16通道是普遍的,采集的数据直接存储在电脑里,然后进行分析。
因为卡片本质上是计算机的一部分,建立测试是容易的。
PC插卡也相对便宜,部分原因是它们依赖主机PC提供电源、机械外壳和用户界面。
数据采集的选择
不足之处是,PC插卡通常只有12位的分辨率,因此你无法察觉输入信号的微小变化。
此外,PC机内部的电气环境往往噪声很大,到处都是高速时钟和总线噪声;这种电气干扰常常使PC插卡的精度只能达到手持式DMM的水平。
这些插卡还只能测量范围相当有限的直流电压。
为了测量其他输入信号,如交流电压、温度和电阻,你可能需要某种外部信号调理装置。
其它需要关注的问题包括麻烦的校准和整个系统的成本,尤其是当你需要购买额外的信号调理附件或购置PC机来容纳这些插卡时。
把这些考虑进去,如果你的需要在卡片的能力和限制范围内变动,PC机插件卡片给数据采集提供吸引人的方法。
数据电子自动记录仪是典型的单机仪器,一旦配备它们,就能测量、记录和显示数据而不需要操作员或计算机参与。
它们能够处理多信号输入,有时可达120通道。
其精度可与独立式台式DMM相媲美,可达到22位、0.004%的精度水平。
一些数据电子自动记录仪能够对测量值进行换算,将结果与用户定义的限值进行比较,并输出用于控制的信号。
使用数据电子自动记录仪的一个好处是它们内置的信号调理功能。
大部分记录仪能够直接测量若干种不同的输入信号,而不需要额外的信号调理附件。
一个通道可以监测热电偶,另一个通道监测热电阻(RTD),还有一个通道监测电压。
用于精确温度测量的热电偶参考端补偿通常已内置在多路转换卡中。
数据电子自动记录仪的内置智能可帮助你设定测试流程并指定每个通道的参数。
一旦你全部设定好,数据电子自动记录仪就可以像独立设备一样运行,与记录仪很相似。
它们将数据存储在内部存储器中,可容纳5万个或更多的读数。
与PC机连接容易将数据传送到电脑进行进一步的分析。
大多数数据电子自动记录仪的设计都具有灵活性、易于组态和操作,而且许多还可通过电池组或其它方式提供远程现场操作的选项。
取决于所采用的A/D转换技术,某些数据电子自动记录仪的读数速率相对较低,尤其是与许多PC插卡相比。
不过,每秒250个读数的速率也并不少见。
要记住,所监测的许多现象本质上是物理量,如温度、压力和流量,它们的变化通常比较缓慢。
此外,由于数据电子自动记录仪的测量精度较高,不必像PC插卡方案中经常做的那样进行多次读数并取平均。
前端数据采集经常做成模块而且是典型地与PC机或控制器连接。
它们被用于自动化测试中,用来采集数据,并对测试装置其它部分的信号进行控制和路由。
前端的性能可以非常高,其速度和精度可与最好的独立仪器相媲美。
数据采集前端有多种实现形式,包括VXI版本,如Agilent E1419A多功能测量与控制VXI模块,以及专有的卡笼式机箱。
虽然前端的成本一直在下降,但这些系统仍可能相当昂贵;除非你确实需要它们提供的高性能,否则可能会觉得其价格令人望而却步。
另一方面,它们确实能提供相当大的灵活性和测量能力。
一台通道数适中(20~60通道)、扫描速率相对较低的优质低成本数据电子自动记录仪,已足以满足工程师们常见的许多应用。
一些关键的应用包括:
• 产品特性测试
• 电子产品的热特性分析
• 环境试验;环境监测
• 元器件特性测试
• 电池测试
• 建筑物和计算机机房监测
DATA ACQUISITION SYSTEMS
Data acquisition systems, as the name implies, are products and/or processes used to collect information to document or analyze some phenomenon. In the simplest form, a technician logging the temperature of an oven on a piece of paper is performing data acquisition. As technology has progressed, this type of process has been simplified and made more accurate, versatile, and reliable through electronic equipment. Equipment ranges from simple recorders to sophisticated computer systems. Data acquisition products serve as a focal point in a system, tying together a wide variety of products, such as sensors that indicate temperature, flow, level, or pressure. Some common data acquisition terms are shown below.
Data acquisition technology has taken giant leaps forward over the last 30 to 40 years. For example, 40 years ago, in a typical college lab, the apparatus for tracking the temperature rise in a crucible of sodium tungsten-bronze consisted of a thermocouple, a bridge, a lookup table, a pad of paper and a pencil. Today's college students are much more likely to use an automated process and analyze the data on a PC. Today, numerous options are available for gathering data. The optimal choice depends on several factors, including the complexity of the task, the speed and accuracy you require, and the documentation you want. Data acquisition systems range from the simple to the complex, with a range of performance and functionality.
The old pencil and paper approach is still viable for some situations, and it is inexpensive, readily available, quick and easy to get started. All you need to do is hook up a digital multimeter (DMM) and begin recording data by hand. Unfortunately, this method is error-prone, tends to be slow and requires extensive manual analysis. In addition, it works only for a single channel of data; while you can use multiple DMMs, the system will quickly become bulky and awkward. Accuracy is dependent on the transcriber's level of fastidiousness and you may need to scale inputs manually. For example, if the DMM is not set up to handle temperature sensors, manual scaling will be required. Taking these limitations into account, this is often an acceptable method when you need to perform a quick experiment.
Modern versions of the venerable strip chart recorder allow you to capture data from several inputs. They provide a permanent paper record of the data, and because this data is in graphical format, they allow you to easily spot trends. Once set up, most recorders have sufficient internal intelligence to run unattended, without the aid of either an operator or a computer. Drawbacks include a lack of flexibility and relatively low accuracy, which is often constrained to a few percentage points. You can typically perceive only small changes in the pen plots. While recorders perform well when monitoring a few channels over a long period of time, their value can be limited. For example, they are unable to turn another device on or off. Other concerns include pen and paper maintenance, paper supply and data storage, all of which translate into paper overuse and waste. Still, recorders are fairly easy to set up and operate, and offer a permanent record of the data for quick and simple analysis.
Some benchtop DMMs offer an optional scanning capability. A slot in the rear of the instrument accepts a scanner card that can multiplex between multiple inputs, with 8 to 10 channels of mux being fairly common.
DMM accuracy and the functionality inherent in the instrument's front panel are retained. Flexibility is limited in that it is not possible to expand beyond the number of channels available in the expansion slot. An external PC usually handles data acquisition and analysis.
PC plug-in cards are single-board measurement systems that take advantage of the ISA or PCI-bus expansion slots in a PC. They often have reading rates as high as 100,000 readings per second. Counts of 8 to 16 channels are common, and acquired data is stored directly in the computer, where it can then be analyzed. Because the card is essentially part of the computer, it is easy to set up tests. PC cards also are relatively inexpensive, in part because they rely on the host PC to provide power, the mechanical enclosure and the user interface.
On the downside, PC plug-in cards often have only 12 bits of resolution, so you can't perceive small variations in the input signal. Furthermore, the electrical environment inside a PC tends to be noisy, with high-speed clocks and bus noise radiated throughout. Often, this electrical interference limits the accuracy of the PC plug-in card to that of a handheld DMM. These cards also measure a fairly limited range of dc voltage. To measure other input signals, such as ac voltage, temperature or resistance, you may need some sort of external signal conditioning. Additional concerns include problematic calibration and overall system cost, especially if you need to purchase additional signal conditioning accessories or a PC to accommodate the cards. Taking that into consideration, PC plug-in cards offer an attractive approach to data acquisition if your requirements fall within the capabilities and limitations of the card.
Data loggers are typically stand-alone instruments that, once they are set up, can measure, record and display data without operator or computer intervention. They can handle multiple inputs, in some instances up to 120 channels. Accuracy rivals that found in standalone bench DMMs, with performance in the 22-bit, 0.004-percent accuracy range. Some data loggers have the ability to scale measurements, check results against user-defined limits, and output signals for control; a small sketch of this limit-checking step follows below.
One advantage of using data loggers is their built-in signal conditioning. Most are able to directly measure a number of different inputs without the need for additional signal conditioning accessories. One channel could be monitoring a thermocouple, another a resistive temperature device (RTD) and still another could be looking at voltage. Thermocouple reference compensation for accurate temperature measurement is typically built into the multiplexer cards. A data logger's built-in intelligence helps you set up the test routine and specify the parameters of each channel. Once you have completed the setup, data loggers can run as standalone devices, much like a recorder. They store data locally in internal memory, which can accommodate 50,000 readings or more.
PC connectivity makes it easy to transfer data to your computer for in-depth analysis. Most data loggers are designed for flexibility and simple configuration and operation, and many provide the option of remote site operation via battery packs or other methods. Depending on the A/D converter technique used, certain data loggers take readings at a relatively slow rate, especially compared to many PC plug-in cards. Still, reading speeds of 250 readings/second are not uncommon.
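The limit-checking behaviour mentioned above can be pictured with the small sketch below: each reading is scaled and then compared against user-defined low and high limits, and flagged when it falls outside them. The channel settings and readings are invented for illustration and do not correspond to any particular logger.

#include <stdio.h>

typedef struct {
    const char *name;
    double scale;      /* engineering units per volt */
    double low, high;  /* user-defined alarm limits  */
} Channel;

/* Scale one reading and report whether it is outside the limits. */
static int check_reading(const Channel *ch, double volts, double *value)
{
    *value = volts * ch->scale;
    return (*value < ch->low || *value > ch->high);   /* 1 = alarm */
}

int main(void)
{
    Channel oven = { "oven temperature", 100.0, 20.0, 250.0 };  /* assumed channel setup */
    double readings[] = { 0.35, 1.80, 2.70 };                   /* volts */

    for (int i = 0; i < 3; ++i) {
        double value;
        int alarm = check_reading(&oven, readings[i], &value);
        printf("%s: %.1f %s\n", oven.name, value, alarm ? "ALARM" : "ok");
    }
    return 0;
}

On a real logger the alarm flag would drive an output signal for control, as described above.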
Keep in mind that many of the phenomena being monitored are physical in nature, such as temperature, pressure and flow, and change at a fairly slow rate. Additionally, because of a data logger's superior measurement accuracy, multiple readings and averaging are not necessary, as they often are in PC plug-in solutions.
Data acquisition front ends are often modular and are typically connected to a PC or controller. They are used in automated test applications for gathering data and for controlling and routing signals in other parts of the test setup. Front-end performance can be very high, with speed and accuracy rivaling the best standalone instruments. Data acquisition front ends are implemented in a number of formats, including VXI versions, such as the Agilent E1419A multifunction measurement and control VXI module, and proprietary card cages. Although front-end cost has been decreasing, these systems can be fairly expensive, and unless you require the high performance they provide, you may find their price to be prohibitive. On the plus side, they do offer considerable flexibility and measurement capability.
A good, low-cost data logger with a moderate channel count (20-60 channels) and a relatively slow scan rate is more than sufficient for many of the applications engineers commonly face. Some key applications include:
• Product characterization
• Thermal profiling of electronic products
• Environmental testing; environmental monitoring
• Component characterization
• Battery testing
• Building and computer room monitoring