Computer Science - Foreign Literature Translation - Foreign Literature - English Literature: Topology-Based Power Control in Distributed Wireless Sensor Networks
Foreign literature translation for a thesis on computer network architecture
Appendix A

With the continuous and rapid development of new network technologies and applications, the use of computer networks is becoming ever more widespread and the role they play ever more important; computer networks and human life have become increasingly inseparable, and society's reliance on them will keep growing.

In order for computers to communicate, they must speak the same language, or protocol. In the early days of networking, networks were disorganized in many ways. Companies developed proprietary network technologies that had great difficulty exchanging information with other or existing technologies, so network interconnections were very hard to build. To solve this problem, the International Organization for Standardization (ISO) created a network model that helps vendors create networks compatible with each other.

Finding the best software is not easy. A better understanding of what you need and asking the right questions makes it easier. The software should be capable of handling challenges specific to your company. If you operate multiple distribution centers, it may be beneficial to create routes with product originating from more than one depot. Few software providers, though, are capable of optimizing routes using multiple depots. The provider should be able to support installation of its product. Make sure to clearly understand what training and software maintenance is offered.

Obviously, selecting the right routing/scheduling software is critically important. Unfortunately, some companies are using software that may not be best suited to their operation. Logistics executives with responsibility for approving the software ought to be comfortable that they have made the right decision. It is important to realize that not all routing/scheduling software is alike! The questions to ask are: Which operating system is used? How easy is the software to use? Here is a good way to tell: ask if its graphical user interface (GUI) is flexible. Find out about installation speed - how long does it take? Is the software able to route third-party customers together with your core business? When was the software originally released, and when was it last upgraded?

In 1984, ISO released the Open Systems Interconnection (OSI) reference model, a well-defined set of specifications that ensures greater compatibility among various technologies. In fact, OSI is the description of network communication that everyone refers to. It is not the only network model, but it has become the primary model for network communication. You will see further in this chapter that the TCP/IP model is only a reduced version of the OSI model. The OSI model consists of seven layers, each illustrating a particular network function.

Meanwhile, ASP continues to evolve. With the arrival of the millennium came the arrival of ASP version 3.0. Version 3.0 was released along with Internet Information Server (IIS) version 5.0 as part of the highly anticipated Microsoft Windows 2000. By far the most important new feature of version 3.0 is the addition of a seventh intrinsic object called ASPError, which should greatly simplify error handling. Other new features include the addition of three new methods to the Server object, and two new methods to both the Application object and the Session object.

When programmers design an image editor, for example, they don't have to think about adding OSI Layer 7 capabilities to that software, because it has no need to communicate with other computers.
On the other hand, when creating an FTP client, they must add communication capabilities to that software. At Layer 7 we usually find Telnet, FTP, HTTP, SMTP, SNMP, or SSH. When we speak of, for example, Layer 7 filtering, we mean filtering application data, regardless of which port or computer it comes from.

OSI divides a computer network architecture into the following seven layers:

The first layer, the physical layer, provides the mechanical, electrical, functional, and procedural characteristics of communications equipment for establishing, maintaining, and releasing physical link connections. Specifically, the mechanical characteristics specify the dimensions of the connectors required for network connection and the number and arrangement of pins; the electrical characteristics specify the signal levels, impedance matching, and transfer-rate constraints of the bit stream transmitted on the line; the functional characteristics assign an exact meaning to each signal, that is, they define the functions of the various lines between the DTE and the DCE; and the procedural characteristics define the rules by which the signal lines carry the bit stream, that is, the sequence of actions taken by the DTE and DCE on the circuits to establish and maintain the physical connection and exchange information. At this layer the data unit is the bit. Typical physical-layer specifications include EIA/TIA RS-232, EIA/TIA RS-449, V.35, and RJ-45.

The second layer, the data link layer, builds data links between adjacent nodes on top of the bit-stream service provided by the physical layer, and through error control provides error-free transmission of data frames over the channel. The data link layer thus provides reliable transmission over unreliable physical media. Its functions include physical addressing, data framing, flow control, and error control with retransmission. At this layer the data unit is the frame. Representative data link layer protocols include SDLC, HDLC, PPP, STP, and frame relay.

The third layer is the network layer. For two computers to communicate in a computer network, the path between them may cross many data links and many communication subnets. The task of the network layer is to choose suitable inter-network routes and switching nodes to ensure timely delivery of data. The network layer encapsulates the frames provided by the data link layer into packets, adding a network-layer header that contains logical address information: the network addresses of the source and destination stations. If you are talking about an IP address, you are dealing with a Layer 3 problem, that is, with "packets" rather than Layer 2 "frames". IP belongs to Layer 3, along with a number of routing protocols and the Address Resolution Protocol (ARP). Everything related to routing is handled at Layer 3; address resolution and routing are important goals of this layer. The network layer can also implement congestion control and internetworking functions. At this layer the data unit is the packet. Representative network layer protocols include IP, IPX, RIP, and OSPF.

The fourth layer, the transport layer, handles end-to-end information transfer. At the fourth layer the data unit is also called the packet.
However, when speaking of specific protocols such as TCP, specific names are used: the TCP data unit is called a segment, and the UDP data unit a datagram. This layer is responsible for delivering all of the information, so it must track data fragments, packets arriving out of order, and other risks that may occur in transit. Layer 4 provides the upper layers with transparent, reliable end-to-end (end-user to end-user) data transmission. Transparent transmission means that the details of the underlying communication systems are shielded from the upper layers during communication. Representative transport protocols include TCP, UDP, and SPX.

The fifth layer is the session layer. This layer may also be called the dialogue layer: at the session layer and above, the transmitted data is no longer given a separate unit name and is collectively called the message. The session layer does not take part in the transmission itself; it provides mechanisms for establishing and maintaining communication between applications, including access authentication and session management. For example, verifying a user's server login is completed by the session layer.

The sixth layer is the presentation layer. This layer mainly solves the problem of the syntax of the information being exchanged. It converts data to be exchanged from the abstract syntax suitable for a particular user into a transfer syntax suitable for use within the OSI system, providing data formatting and conversion services. Data compression and decompression, and encryption and decryption, are also the responsibility of this layer.

The seventh layer, the application layer, provides the interface through which operating systems, applications, and network services access the network. Representative application layer protocols include Telnet, FTP, HTTP, and SNMP.

Through the OSI layers, information is transferred from a software application on one computer to an application on another. For example, if an application on computer A sends information to an application on computer B, the application on A passes the information to the application layer (layer 7), which passes it to the presentation layer (layer 6), which passes it to the session layer (layer 5), and so on down to the physical layer (layer 1). At the physical layer, the data is placed on the physical network medium and sent to computer B. The physical layer of computer B receives the data from the physical medium and passes it up to the data link layer (layer 2), then to the network layer, and so on until the information arrives at the application layer of computer B. Finally, the application layer of computer B passes the information to the receiving application, completing the communication process. The figure below illustrates this process.

Each of OSI's seven layers uses a variety of control information to communicate with the corresponding layer of other computer systems. This control information consists of special requests and instructions exchanged between corresponding OSI layers, and it takes two basic forms: headers and trailers attached to the data at each layer. For data passed down from the layer above, control information attached at the front is called a header, and control information attached at the end is called a trailer.
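The header-and-trailer wrapping just described can be shown in miniature. The sketch below is illustrative only: the layer names and the header and trailer strings are invented placeholders, not real protocol formats.

```python
# Minimal OSI-style encapsulation sketch: each layer wraps the unit handed
# down by the layer above; here only the data link layer adds a trailer.
def encapsulate(app_data: bytes) -> bytes:
    segment = b"L4-HDR|" + app_data                # transport layer: segment
    packet = b"L3-HDR|" + segment                  # network layer: packet
    frame = b"L2-HDR|" + packet + b"|L2-TRAILER"   # data link layer: frame
    return frame                                   # physical layer sends the bits

def decapsulate(frame: bytes) -> bytes:
    # Each receiving layer reads and strips only its own header/trailer,
    # then passes the remainder up, mirroring the sender in reverse order.
    packet = frame.removeprefix(b"L2-HDR|").removesuffix(b"|L2-TRAILER")
    segment = packet.removeprefix(b"L3-HDR|")
    return segment.removeprefix(b"L4-HDR|")

assert decapsulate(encapsulate(b"hello")) == b"hello"
```

The real stack has the same wrap-then-unwrap symmetry, only with binary header fields instead of marker strings.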
However, a given OSI layer is not required to add both a header and a trailer to the data it receives from above. As data passes down between the layers, each layer may add its own header and trailer to data that already contains the headers and trailers added by the layers above. The header carries the information exchanged between peer layers. Header, trailer, and data are relative concepts: they depend on which protocol layer is analyzing the information unit. For example, the transport-layer header contains information that only the transport layer can interpret; to the layers below, that header is simply part of the data being carried. For the network layer, an information unit consists of a Layer 3 header plus data; for the data link layer, everything passed down by the network layer, that is, the Layer 3 header together with the data, is treated as data. In other words, at a given OSI layer, the information unit containing the headers and trailers of all the layers above, together with the data itself, is referred to as the encapsulation. For example, if computer A sends application data to computer B, the data is first passed to the application layer. The application layer of A adds the application-layer header used to communicate with the application layer of computer B. The resulting information unit, consisting of a header, data, and possibly a trailer, is passed down to the next layer, which adds its own header containing the control information that the corresponding layer on computer B will interpret. The size of the information unit grows at each layer as headers and trailers are added; these contain the control information to be used by the corresponding layers on computer B. At the physical layer, the entire information unit is transmitted over the network medium.

The physical layer of computer B receives the information unit and passes it to the data link layer. B's data link layer reads the control information in the header added by computer A's data link layer, removes the header and trailer, and passes the remainder to the network layer. Every layer performs the same actions: it reads the header and trailer from its corresponding peer layer, removes them, and passes the remainder up to the next layer. After the application layer performs these actions, the data is delivered to the receiving application on computer B in exactly the same form as it was sent by the application on computer A.

One OSI layer communicates with another by using the services of the layer beneath it. The services provided by adjacent layers help an OSI layer communicate with the corresponding layer of another computer system. A particular layer of the OSI model is usually in contact with three other layers: the layers directly above and below it, and the corresponding layer of the target networked computer system. For example, computer A's data link layer communicates with its network layer, its physical layer, and the data link layer of computer B.

Appendix B

In order for computers to communicate, they must speak the same language, or protocol.
English scientific literature: original text and translation (1)
On the deployment of VoIP in Ethernet networks: methodology and case study

Abstract

Deploying IP telephony or voice over IP (VoIP) is a major and challenging task for data network researchers and designers. This paper outlines guidelines and a step-by-step methodology on how VoIP can be deployed successfully. The methodology can be used to assess the support and readiness of an existing network. Prior to the purchase and deployment of VoIP equipment, the methodology predicts the number of VoIP calls that can be sustained by an existing network while satisfying QoS requirements of all network services and leaving adequate capacity for future growth. As a case study, we apply the methodology steps on a typical network of a small enterprise. We utilize both analysis and simulation to investigate throughput and delay bounds. Our analysis is based on queuing theory, and OPNET is used for simulation. Results obtained from analysis and simulation are in line and give a close match. In addition, the paper discusses many design and engineering issues. These issues include characteristics of VoIP traffic and QoS requirements, VoIP flow and call distribution, defining future growth capacity, and measurement and impact of background traffic.

Keywords: Network Design, Network Management, VoIP, Performance Evaluation, Analysis, Simulation, OPNET

1 Introduction

These days a massive deployment of VoIP is taking place over data networks. Most of these networks are Ethernet based and run the IP protocol. Many network managers are finding it very attractive and cost effective to merge and unify voice and data networks into one. It is easier to run, manage, and maintain. However, one has to keep in mind that IP networks are best-effort networks that were designed for non-real-time applications. On the other hand, VoIP requires timely packet delivery with low latency, jitter, packet loss, and sufficient bandwidth. To achieve this goal, an efficient deployment of VoIP must ensure these real-time traffic requirements can be guaranteed over new or existing IP networks. When deploying a new network service such as VoIP over an existing network, many network architects, managers, planners, designers, and engineers are faced with common strategic, and sometimes challenging, questions. What are the QoS requirements for VoIP? How will the new VoIP load impact the QoS for currently running network services and applications? Will my existing network support VoIP and satisfy the standardized QoS requirements? If so, how many VoIP calls can the network support before upgrading prematurely any part of the existing network hardware? These challenging questions have led to the development of some commercial tools for testing the performance of multimedia applications in data networks. A list of the available commercial tools that support VoIP is given in [1,2]. For the most part, these tools use two common approaches to assessing the deployment of VoIP in an existing network. One approach is based on first performing network measurements and then predicting the network readiness for supporting VoIP. The prediction of the network readiness is based on assessing the health of network elements. The second approach is based on injecting real VoIP traffic into the existing network and measuring the resulting delay, jitter, and loss. Other than the cost associated with the commercial tools, none of them offers a comprehensive approach to successful VoIP deployment.
In particular, none gives any prediction of the total number of calls that can be supported by the network, taking into account important design and engineering factors. These factors include VoIP flow and call distribution, future growth capacity, performance thresholds, the impact of VoIP on existing network services and applications, and the impact of background traffic on VoIP. This paper attempts to address those important factors and lay out a comprehensive methodology for a successful deployment of any multimedia application such as VoIP and video conferencing. However, the paper focuses on VoIP as the new service of interest to be deployed. The paper also contains many useful engineering and design guidelines, and discusses many practical issues pertaining to the deployment of VoIP. These issues include characteristics of VoIP traffic and QoS requirements, VoIP flow and call distribution, defining future growth capacity, and measurement and impact of background traffic. As a case study, we illustrate how our approach and guidelines can be applied to a typical network of a small enterprise.

The rest of the paper is organized as follows. Section 2 presents a typical network topology of a small enterprise to be used as a case study for deploying VoIP. Section 3 outlines a practical eight-step methodology to successfully deploy VoIP in data networks; each step is described in considerable detail. Section 4 describes important design and engineering decisions to be made based on the analytic and simulation studies. Section 5 concludes the study and identifies future work.

2 Existing network

3 Step-by-step methodology

Fig. 2 shows a flowchart of the eight-step methodology for a successful VoIP deployment. The first four steps are independent and can be performed in parallel. Before embarking on the analysis and simulation study in Steps 6 and 7, Step 5 must be carried out, which requires any early and necessary redimensioning or modifications to the existing network. As shown, Steps 6 and 7 can be done in parallel. The final step is pilot deployment.

3.1. VoIP traffic characteristics, requirements, and assumptions

To introduce a new network service such as VoIP, one first has to characterize the nature of its traffic, its QoS requirements, and any additional components or devices. For simplicity, we assume a point-to-point conversation for all VoIP calls with no call conferencing. For deploying VoIP, a gatekeeper or Call Manager node has to be added to the network [3,4,5]. The gatekeeper node handles signaling for establishing, terminating, and authorizing connections of all VoIP calls. A VoIP gateway is also required to handle external calls; it is responsible for converting VoIP calls to/from the Public Switched Telephone Network (PSTN). As an engineering and design issue, the placement of these nodes in the network becomes crucial. We will tackle this issue in design Step 5. Other hardware requirements include a VoIP client terminal, which can be a separate VoIP device, i.e. an IP phone, or a typical PC or workstation that is VoIP-enabled. A VoIP-enabled workstation runs VoIP software such as an IP soft phone.

Fig. 3 identifies the end-to-end VoIP components from sender to receiver [9]. The first component is the encoder, which periodically samples the original voice signal and assigns a fixed number of bits to each sample, creating a constant bit rate stream.
The traditional sample-based encoder G.711 uses Pulse Code Modulation (PCM) to generate 8-bit samples every 0.125 ms, leading to a data rate of 64 kbps. The packetizer follows the encoder; it encapsulates a certain number of speech samples into packets and adds the RTP, UDP, IP, and Ethernet headers. The voice packets then travel through the data network. An important component at the receiving end is the playback buffer, whose purpose is to absorb variations or jitter in delay and provide a smooth playout. Packets are then delivered to the depacketizer and eventually to the decoder, which reconstructs the original voice signal. We will follow the widely adopted recommendations of the H.323, G.711, and G.714 standards for VoIP QoS requirements.

Table 1 compares some commonly used ITU-T standard codecs and the amount of one-way delay that they impose. To account for upper limits and to meet the desirable quality requirement according to ITU recommendation P.800, we will adopt the G.711u codec standard for the required delay and bandwidth. G.711u yields around a 4.4 MOS rating. MOS, the Mean Opinion Score, is a commonly used VoIP performance metric given on a scale of 1-5, with 5 being the best. However, with little compromise to quality, it is possible to implement different ITU-T codecs that require much less bandwidth per call at a relatively higher, but acceptable, end-to-end delay. This can be accomplished by applying compression, silence suppression, packet loss concealment, and queue management techniques, and by encapsulating more than one voice packet into a single Ethernet frame.

3.1.1. End-to-end delay for a single voice packet

Fig. 3 illustrates the sources of delay for a typical voice packet. The end-to-end delay is sometimes referred to as M2E or Mouth-to-Ear delay. G.714 imposes a maximum total one-way packet delay of 150 ms end-to-end for VoIP applications. In [22], a delay of up to 200 ms was considered acceptable. We can break this delay down into at least three contributing components, as follows: (i) encoding, compression, and packetization delay at the sender; (ii) propagation, transmission, and queuing delay in the network; and (iii) buffering, decompression, depacketization, decoding, and playback delay at the receiver.

3.1.2. Bandwidth for a single call

The required bandwidth for a single call, in one direction, is 64 kbps. The G.711 codec carries 20 ms of voice per packet, so 50 such packets need to be transmitted per second. Each packet contains 160 voice samples in order to give 8000 samples per second. Each packet is sent in one Ethernet frame. To every packet of 160 bytes, the headers of the additional protocol layers are added. These headers include RTP+UDP+IP+Ethernet with preamble, of sizes 12+8+20+26 bytes, respectively. Therefore, a total of 226 bytes, or 1808 bits, needs to be transmitted 50 times per second, or 90.4 kbps, in one direction. For both directions, the required bandwidth for a single call is 100 pps or 180.8 kbps, assuming a symmetric flow.

3.1.3. Other assumptions

Throughout our analysis and work, we assume voice calls are symmetric and no voice conferencing is implemented. We also ignore the signaling traffic generated by the gatekeeper, and we base our analysis and design on the worst-case scenario for VoIP call traffic. The signaling traffic involving the gatekeeper is mostly generated prior to the establishment of the voice call and when the call is finished; it is relatively small compared to the actual voice call traffic.
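The per-call figures in Section 3.1.2 can be reproduced with a few lines of code. This is simply a restatement of the arithmetic above; all constants (G.711 sampling, 20 ms packetization, header sizes) come from the text.

```python
# Reproduce the Section 3.1.2 bandwidth calculation for one G.711 call.
SAMPLES_PER_SEC = 8000                   # G.711: 8-bit samples -> 64 kbps payload
PACKET_MS = 20                           # voice carried per packet
PACKETS_PER_SEC = 1000 // PACKET_MS      # 50 packets per second, one direction
VOICE_BYTES = SAMPLES_PER_SEC * PACKET_MS // 1000   # 160 voice samples = 160 bytes
HEADER_BYTES = 12 + 8 + 20 + 26          # RTP + UDP + IP + Ethernet with preamble

frame_bytes = VOICE_BYTES + HEADER_BYTES             # 226 bytes on the wire
one_way_kbps = frame_bytes * 8 * PACKETS_PER_SEC / 1000
print(one_way_kbps)      # 90.4 kbps in one direction
print(2 * one_way_kbps)  # 180.8 kbps for a symmetric two-way call
```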
In general, the gatekeeper generates no or very limited signaling traffic during an already established, ongoing VoIP call. In this paper, we implement no QoS mechanisms that could enhance the quality of packet delivery in IP networks. A myriad of QoS standards are available and can be enabled on network elements; they include IEEE 802.1p/Q, the IETF's RSVP, and DiffServ. The implementation cost, complexity, management, and benefit must be weighed carefully before adopting such QoS standards. These standards can be recommended when the cost of upgrading some network elements is high and the network resources are scarce and heavily loaded.

3.2. VoIP traffic flow and call distribution

Knowing the current telephone call usage or volume of the enterprise is an important step for a successful VoIP deployment. Before embarking on further analysis or planning phases, collecting statistics about the present call volume and profiles is essential. Sources of such information are the organization's PBX, telephone records, and bills. Key characteristics of existing calls can include the number of calls, the number of concurrent calls, their time, their duration, etc. It is important to determine the locations of the call endpoints, i.e. the sources and destinations, as well as their corresponding paths or flows. This will aid in identifying the call distribution and the calls made internally or externally. The call distribution must include the percentage of calls within and outside of a floor, building, department, or organization. As a good capacity planning measure, it is recommended to base the VoIP call distribution on the busy-hour phone traffic of the busiest day of a week or a month. This will ensure support of the calls at all times with high QoS for all VoIP calls. When such current statistics are combined with the projected extra calls, we can predict the worst-case VoIP traffic load to be introduced to the existing network.

Fig. 4 describes the call distribution for the enterprise under study based on the worst busy hour and the projected future growth of VoIP calls. In the figure, the call distribution is described as a probability tree; it is also possible to describe it as a probability matrix. Some important observations can be made about the voice traffic flow for inter-floor and external calls. For all these types of calls, the voice traffic always has to be routed through the router, because Switches 1 and 2 are Layer 2 switches with VLAN configuration. One can observe that the traffic flow for inter-floor calls between Floors 1 and 2 imposes twice the load on Switch 1, as the traffic has to pass through the switch to the router and back through the switch again. Similarly, Switch 2 experiences twice the load for external calls from/to Floor 3.

3.3. Define performance thresholds and growth capacity

In this step, we define the network performance thresholds or operational points for a number of important key network elements. These thresholds are to be considered when deploying the new service. The benefit is twofold: first, the requirements of the new service to be deployed are satisfied; second, adding the new service leaves the network healthy and ready for future growth. Two important performance criteria are to be taken into account. First is the maximum tolerable end-to-end delay; second is the utilization bounds or thresholds of network resources.
The maximum tolerable end-to-end delay is determined by the most delay-sensitive application to run on the network; in our case, it is 150 ms end-to-end for VoIP. It is imperative to note that if the network has other delay-sensitive applications, the delay for these applications should be monitored when introducing VoIP traffic, such that they do not exceed their required maximum values. As for the utilization bounds for network resources, such bounds or thresholds are determined by factors such as current utilization, future plans, and the foreseen growth of the network. Proper resource and capacity planning is crucial. Savvy network engineers must deploy new services with scalability in mind, and ascertain that the network will yield acceptable performance under heavy and peak loads, with no packet loss. VoIP tolerates almost no packet loss: in the literature, 0.1-5% packet loss was generally asserted; in [24] the required VoIP packet loss was conservatively suggested to be less than 10^-5; and a more practical packet loss, based on experimentation, of below 1% was required in [22]. Hence, it is extremely important not to utilize the network resources fully. As a rule-of-thumb guideline, for switched fast full-duplex Ethernet the average utilization limit of links should be 90%, and for switched shared fast Ethernet the average limit of links should be 85% [25]. The projected growth in users, network services, business, etc. must all be taken into consideration to extrapolate the required growth capacity, or future growth factor. In our study, we will ascertain that 25% of the available network capacity is reserved for future growth and expansion. For simplicity, we will apply this evenly to all network resources of the router, switches, and switched-Ethernet links. However, keep in mind that in practice this percentage can vary for each network resource and may depend on the current utilization and the required growth capacity. In our methodology, this reservation of network resources is made upfront, before deploying the new service, and only the left-over capacity is used for investigating the network's support of the new service to be deployed.

3.4. Perform network measurements

In order to characterize the existing network traffic load, utilization, and flow, network measurements have to be performed. This is a crucial step, as it can potentially affect the results to be used in the analytical study and simulation. A number of tools are available commercially and non-commercially to perform network measurements. Popular open-source measurement tools include MRTG, STG, SNMPUtil, and GetIF [26]. A few examples of popular commercial measurement tools include HP OpenView, Cisco NetFlow, Lucent VitalSuite, Patrol DashBoard, Omegon NetAlly, Avaya ExamiNet, NetIQ Vivinet Assessor, etc. Network measurements must be performed for network elements such as routers, switches, and links. Numerous types of measurements and statistics can be obtained using measurement tools. As a minimum, traffic rates in bits per second (bps) and packets per second (pps) must be measured for links directly connected to routers and switches. To get an adequate assessment, network measurements have to be taken over a long period of time, at least a 24-hour period; sometimes it is desirable to take measurements over several days or a week. One has to consider the worst-case scenario for network load or utilization in order to ensure good QoS at all times, including peak hours.
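To make the interplay of these thresholds concrete, the sketch below estimates how many additional G.711 calls a single link could carry. The link speed and the measured peak-hour utilization are assumptions invented for the example; only the 90% utilization ceiling, the 25% growth reserve, and the 90.4 kbps one-way per-call load come from the text.

```python
# Hedged capacity check for one switched full-duplex Fast Ethernet link.
LINK_KBPS = 100_000      # assumed link speed, one direction (Fast Ethernet)
UTIL_CEILING = 0.90      # rule-of-thumb limit for switched full duplex [25]
GROWTH_RESERVE = 0.25    # fraction reserved upfront for future growth
measured_util = 0.30     # assumed peak-hour utilization from measurements

headroom_kbps = LINK_KBPS * (UTIL_CEILING - GROWTH_RESERVE - measured_util)
CALL_KBPS = 90.4         # one-way load per G.711 call (Section 3.1.2)
max_new_calls = int(headroom_kbps // CALL_KBPS)
print(max_new_calls)     # upper bound on added one-way call flows on this link
```

A real assessment would repeat this check per link, per switch, and per router interface, and take the tightest bound.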
The peak hour differs from one network to another; it depends entirely on the nature of the business and the services provided by the network. Table 2 shows a summary of peak-hour utilization for the traffic of links, in both directions, connected to the router and the two switches of the network topology of Fig. 1. These measured results will be used in our analysis and simulation study.

Translation of the foreign literature

On the deployment of VoIP in Ethernet networks: methodology and case study

Abstract: For data-network researchers and designers, deploying IP telephony, or voice over IP (VoIP), is a major and challenging task.
Electronic Information and Automation - Foreign Literature Translation - Foreign Literature - English Literature: Research on Miner Position Detection Based on ZigBee Wireless Sensor Networks
Research on Miner Position Detection Based on ZigBee Wireless Sensor Networks

Zhang Xiuping, Han Guangjie, Zhu Changping, Dou Yan, Tao Jianfeng
College of Computer and Information Engineering, Hohai University, Changzhou, China
E-mail: zhangxiup@ Zhucp315@

Abstract: With the rapid development of computers, communication, and network technology, and especially the application of wireless sensors and embedded technology, wireless sensor network (WSN) technology has attracted wide attention in industry and in our daily lives. WSNs based on the ARM7TDMI-S CPU and ZigBee have merits in speeding up and optimizing the network's mobile nodes, in rich information collection, and in real-time coordination during communication, and they feature low-power continuous operation, so they are very well suited to determining miners' positions underground. This paper proposes and lays out the WSN network plan and its information processing and communication techniques, with a focus on real-time cooperation. The miners' movement information is obtained accurately through the sensors, and the position information is then transmitted reliably to the monitoring center. Test results under changing operating conditions show no information lost and no information left uncollected. The scheme is therefore stable and effective and will play a positive role in coal mine safety; in my view, this is exactly the characteristic strength of ZigBee wireless sensor networks.

Keywords: ZigBee; ARM7TDMI-S core; CC2420; wireless sensor networks; miner position determination

I. Introduction

Wireless sensor networks (WSNs) are large-scale, wireless, self-organizing networks. They integrate computer communication and network technology, embedded MCUs, and wireless sensor technology, and have both sensing and communication capability [1]. The nodes are low-cost and small. Most of them can be spread over the working area to collect data, process it, and communicate. Wireless sensor nodes usually work in radio frequency (RF) bands. The nodes form a network with a hierarchical architecture for on-site monitoring data. Such networks are typically applied in industry, agriculture, telemedicine, and environmental monitoring.

As we all know, coal production is threatened by complex working conditions such as toxic gases, water inrush, collapse, and roof falls [2]. Once an accident occurs, it endangers the miners' lives. It is therefore urgent for personnel on the surface to know the miners' exact positions so that timely measures can be taken. Building a wireless sensor network for miners to monitor the mine therefore has great application value.

II. Scheme selection

The main technical requirements of the miner position monitoring system are summarized as follows: (1) positioning accuracy of 10 m.
Chinese-English Foreign Literature Translation: Computer Networks
Chinese-English Foreign Literature Translation: Computer Networks

A computer network, often simply called a network, is a collection of computers and devices connected by communication channels that facilitate communication among users and allow them to share resources. Networks may be classified according to a wide variety of characteristics. A computer network allows resources and information to be shared among interconnected devices.

1. History

Early computer network communication began in the late 1950s and included military radar systems, the semi-automatic ground environment air-defense system and its related commercial airline reservation system, and the semi-automatic business research environment. In 1957 the Soviet Union launched an artificial satellite into space. Eighteen months later, the United States established the Advanced Research Projects Agency (ARPA) and launched its first artificial satellite. The information was then shared with another computer on the ARPANET. The person responsible for all of this was the American researcher Dr. Licklider. The ARPANET grew out of this work; brought online in 1969, it later became known as the Internet. In the 1960s, ARPA began funding and designing the ARPANET for the US Department of Defense. The Internet's development began in 1969, building on the designs of the 1960s, and the ARPANET thereby evolved into the modern Internet.
2. Purpose

Computer networks can be used for a variety of purposes:

Facilitating communication: using a network, people can communicate easily via e-mail, instant messaging, chat rooms, telephone, video calls, and video conferencing.

Sharing hardware: in a networked environment, each computer can access and use hardware resources on the network, for example printing a document on a shared network printer.

Sharing files, data, and information: in a network environment, authorized users can access data and information stored on other computers on the network. The ability to provide access to data and information on shared storage devices is an important feature of many networks.

Sharing software: users can run application programs on remote computers.

Information preservation.

Security.

3. Network classification

The following list shows the categories used for classifying networks:

3.1 Connection method

Computer networks can be classified according to the hardware and software technology used to connect the individual devices, such as optical fiber, LAN, wireless LAN, home networking devices, cable communication, and G.hn (the wired home networking standard). Ethernet is defined by the IEEE 802 standards and uses various media to enable communication between devices on a network. Frequently deployed devices include network hubs, switches, bridges, and routers. Wireless LAN technology uses wireless devices for connection.
Chinese-English Literature Translation for Computer Science Majors
1

In the past decade the business environment has changed dramatically. The world has become a small and very dynamic marketplace. Organizations today confront new markets, new competition, and increasing customer expectations. This has put a tremendous demand on manufacturers to: 1) lower total costs in the complete supply chain; 2) shorten throughput times; 3) reduce stock to a minimum; 4) enlarge product assortment; 5) improve product quality; 6) provide more reliable delivery dates and higher service to the customer; 7) efficiently coordinate global demand, supply, and production. Thus today's organizations have to constantly re-engineer their business practices and procedures to be more and more responsive to customers and competition. In the 1990s, information technology and business process re-engineering, used in conjunction with each other, emerged as important tools that give organizations the leading edge.

ERP Systems Evolution

The focus of manufacturing systems in the 1960s was on inventory control. Most of the software packages then (usually customized) were designed to handle inventory based on traditional inventory concepts. In the 1970s the focus shifted to MRP (Material Requirements Planning) systems, which translated the master schedule built for the end items into time-phased net requirements for the planning and procurement of sub-assemblies, components, and raw materials. In the 1980s the concept of MRP-II (Manufacturing Resources Planning) evolved, an extension of MRP to shop floor and distribution management activities. In the early 1990s, MRP-II was further extended to cover areas like engineering, finance, human resources, and project management, i.e. the complete gamut of activities within any business enterprise. Hence, the term ERP (Enterprise Resource Planning) was coined. In addition to system requirements, ERP addresses technology aspects like client/server distributed architecture, RDBMS, object-oriented programming, etc.

ERP Systems - Bandwidth

ERP solutions address broad areas within any business, like manufacturing, distribution, finance, project management, service and maintenance, and transportation. A seamless integration is essential to provide visibility and consistency across the enterprise. An ERP system should be sufficiently versatile to support different manufacturing environments like make-to-stock, assemble-to-order, and engineer-to-order. The customer order decoupling point (CODP) should be flexible enough to allow the co-existence of these manufacturing environments within the same system. It is also very likely that the same product may migrate from one manufacturing environment to another during its product life cycle. The system should be complete enough to support both discrete and process manufacturing scenarios. The efficiency of an enterprise depends on the quick flow of information across the complete supply chain, i.e. from the customer to manufacturers to suppliers. This places demands on the ERP system to have rich functionality across all areas like sales, accounts receivable, engineering, planning, inventory management, production, purchasing, accounts payable, quality management, distribution planning, and external transportation. EDI (Electronic Data Interchange) is an important tool for speeding up communications with trading partners.

More and more companies are becoming global and focusing on down-sizing and decentralizing their business. ABB and Northern Telecom are examples of companies whose business is spread around the globe. For these companies to manage their business efficiently, ERP systems need to have extensive multi-site management capabilities. The complete financial accounting and management accounting requirements of the organization should be addressed. It is necessary to have centralized or decentralized accounting functions with complete flexibility to consolidate corporate information. After-sales service should be streamlined and managed efficiently. A strong EIS (Enterprise Information System) with extensive drill-down capabilities should be available to top management to get a bird's-eye view of the health of their organization and help them analyze performance in key areas.

Evaluation Criteria

Some important points to keep in mind while evaluating ERP software include: 1) functional fit with the company's business processes; 2) degree of integration between the various components of the ERP system; 3) flexibility and scalability; 4) complexity and user friendliness; 5) quick implementation and a shortened ROI period; 6) ability to support multi-site planning and control; 7) technology: client/server capabilities, database independence, security; 8) availability of regular upgrades; 9) amount of customization required; 10) local support infrastructure; 11) availability of reference sites; 12) total costs, including cost of license, training, implementation, maintenance, customization, and hardware requirements.

ERP Systems - Implementation

The success of an ERP solution depends on how quickly the benefits can be reaped from it. This necessitates rapid implementations, which lead to shortened ROI periods. The traditional approach to implementation has been to carry out a business process re-engineering exercise and define a "TO BE" model before the ERP system implementation. This led to mismatches between the proposed model and the ERP functionality, the consequences of which were customizations, extended implementation time frames, higher costs, and loss of user confidence.

ERP Systems - The Future

The Internet represents the next major technology enabler, allowing rapid supply chain management between multiple operations and trading partners. Most ERP systems are enhancing their products to become "Internet enabled" so that customers worldwide can have direct access to the supplier's ERP system. ERP systems are building in workflow management functionality, which provides a mechanism to manage and control the flow of work by monitoring logistic aspects like workload, capacity, throughput times, work queue lengths, and processing times.

Translation 1

In the past decade the business environment has changed dramatically.
Sample abstracts for writing English-language literature about computers
Sample abstracts for writing English-language literature about computers - ten sample essays in total, for the reader's reference.

Essay 1

Title: All About Computers

Hey guys! Have you ever wondered how computers work and why they are so important in our daily lives? In this article, we will dive into the fascinating world of computers and learn all about their history, functions, and impact on society.

First off, let's talk about the history of computers. Did you know that the first computer was invented in the early 20th century? It was a huge machine that took up an entire room! But over the years, computers have become smaller, faster, and more powerful. Nowadays, we have laptops, tablets, and smartphones that can fit in the palm of our hands.

So, what do computers actually do? Well, they can process information, store data, and perform calculations at incredible speeds. This allows us to do all sorts of cool things like play video games, surf the internet, and communicate with people all over the world.

But computers aren't just for fun and games - they also play a crucial role in many industries like healthcare, education, and business. For example, doctors use computers to analyze medical images and diagnose diseases, while teachers use them to create interactive lessons for their students.

In conclusion, computers are an essential part of our modern world and they have revolutionized the way we live, work, and play. So the next time you turn on your computer, remember how amazing this technology is and how lucky we are to have it in our lives. Let's give a big shoutout to all the brilliant minds who have made computers possible!

Essay 2

Once upon a time, there was a magical invention called a computer. It can do all kinds of cool things like playing games, watching videos, and even helping with homework!

Computers are made up of many parts, like the screen, keyboard, and mouse. They also have something called a CPU, which is like the brain of the computer. It helps the computer think and do all the things we want it to do.

One important thing about computers is that they can store lots of information. This is called memory. Without memory, the computer wouldn't be able to remember all the things we tell it to do.

Another cool thing about computers is that they can connect to the internet. The internet is like a giant web that connects all the computers in the world. We can use it to find information, talk to our friends, or even play games with people from other countries!

In conclusion, computers are amazing inventions that help us in so many ways. From helping with homework to connecting us with people around the world, computers have made our lives easier and more fun. Let's all give a big round of applause to the wonderful world of computers!

Essay 3

Title: Let's Learn About Computers

Hey guys! Have you ever wondered how computers work? Well, I'm here to tell you all about it! Computers are super cool machines that can do all sorts of things, like play games, surf the internet, and even help with homework. But how do they actually work?

First off, computers have a bunch of different parts that all work together to make them run. There's the CPU, which is like the brain of the computer, the motherboard, which holds all the other parts in place, and the hard drive, which stores all your files and pictures. And let's not forget about the keyboard and mouse, which help you control the computer.

But how does all this stuff actually work? Well, when you type something on the keyboard or click the mouse, it sends a signal to the CPU. The CPU then processes that signal and sends it to the motherboard, which tells the other parts of the computer what to do. It's like having a bunch of little helpers inside the computer, all working together to make sure everything runs smoothly.

And did you know that computers can do math really fast? That's because they use something called binary code, which is just a bunch of ones and zeros. By combining these numbers in different ways, computers can do all sorts of calculations in the blink of an eye.

So next time you're playing a game or doing homework on the computer, remember all the cool stuff that's going on behind the scenes. Computers may seem like magic, but with a little bit of knowledge, you can understand how they work and maybe even become a computer whiz yourself!

Essay 4

Computer is a super cool thing in our life! It can help us do so many things, like playing games, watching movies, and even doing homework! In this article, we are going to talk about the history of computers, how they work, and some fun facts about them.

First of all, do you know when the first computer was invented? It was actually a long time ago, in the 1940s! Back then, computers were huge machines that took up whole rooms. But now, we have laptops and smartphones that are much smaller and faster.

So, how does a computer work? Well, it has a brain called a central processing unit (CPU) that does all the thinking and calculations. It also has memory to store information and input devices like keyboards and mice to help us give commands.

There are also different types of computers, like desktops, laptops, tablets, and smartphones. Each of them has its own features and uses. For example, desktops are great for work and gaming, while laptops are good for when we are on the go.

Did you know that the first computer bug was actually a real bug? Back in the 1940s, a moth got stuck in a computer and caused it to malfunction. That's why we now call any glitch in a computer system a "bug"!

In conclusion, computers are amazing inventions that have changed the way we live and work. They are constantly evolving and becoming more powerful. So next time you use a computer, remember how far technology has come and how much more it can do in the future!

Essay 5

Computer is a super cool machine that can do a lot of fun and useful stuff! It has a super fast brain called a processor, and it can store a lot of information in its memory. We can use computers to play games, watch videos, do homework, and even talk to our friends online.

There are different parts of a computer, like the monitor, keyboard, mouse, and CPU. The monitor is like a TV screen where we can see what the computer is doing. The keyboard helps us type words and numbers, while the mouse lets us click on things on the screen. The CPU is where all the magic happens - it's like the computer's brain!

Computers can also connect to the internet, which is like a super big library with all the information in the world. We can use the internet to search for things, watch videos, play games, and even talk to people from far away. It's really cool how computers can help us learn and have fun at the same time.

In conclusion, computers are amazing machines that can do so many things to help us in our daily lives. From playing games to doing homework to connecting with friends online, computers have become an essential part of our lives. We should all be grateful for the technology that allows us to use computers and make our lives easier and more enjoyable. Thank you, computers, for being so awesome!

Essay 6

Today I want to talk about something super cool and exciting - computers! Computers are amazing machines that can do so many things to help us in our daily lives. In this article, we will explore the history of computers, how they work, and some of the amazing things they can do.

Computers have come a long way since they were first invented. Did you know that the first computer was as big as a room and could only do simple calculations? Now we have computers that can fit in the palm of our hands and can do things like play games, send emails, and even help us with our homework.

But how do computers actually work? Well, computers are made up of many different parts, like the motherboard, CPU, and hard drive. These parts all work together to process information and carry out tasks. When you type on the keyboard or click the mouse, the computer sends this information to the CPU, which then processes it and sends it back to the screen so you can see the result.

Computers can do so many amazing things, like help scientists discover new things, help doctors save lives, and even help us connect with people all over the world. They are truly incredible machines that have changed the way we live our lives.

In conclusion, computers are an essential part of our lives and have revolutionized the way we do things. From simple calculations to complex tasks, computers have come a long way and will continue to shape our future. So next time you use a computer, remember how amazing and powerful it is!

Essay 7

Title: Computers: The Magical Machines

Hey guys! Let's talk about computers today! Computers are like magic machines that can do all sorts of cool things. They can help us play games, do homework, watch videos, and even chat with our friends. Isn't that super awesome?

First of all, let's talk about what a computer actually is. A computer is a machine that can be programmed to carry out sequences of arithmetic or logical operations automatically. It has a bunch of parts like a keyboard, a monitor, a mouse, and a CPU (Central Processing Unit). All these parts work together to make the computer run smoothly.

Next, let's chat about how computers work. When we type something on the keyboard or click the mouse, the computer sends all that information to the CPU. The CPU then processes that information and sends it to the monitor, so we can see what we're doing on the screen. It's like a big dance party inside the computer!

Now, let's talk about the different types of computers. There are personal computers (PCs) that we use at home or school, laptops that we can take with us anywhere, and even super powerful computers called supercomputers that can do really complex tasks really fast. It's amazing how many different kinds of computers there are!

In conclusion, computers are truly magical machines that have changed the way we live and work. They help us do so many things and make our lives easier in so many ways. So next time you're using a computer, remember how incredible it is and how lucky we are to have such amazing technology at our fingertips. Yay for computers!

Essay 8

Computers are super cool things that can do a lot of awesome stuff! They can help us play games, do homework, and even talk to our friends online. But do you know how computers work and who invented them? Let me tell you all about it!

The first computers were really big and took up a whole room! They were invented a long time ago by some super smart people who wanted to help with really hard math problems. Over time, computers got smaller and faster, and now we have laptops and smartphones that we can carry around with us.

Computers are made up of a bunch of parts like the CPU, motherboard, RAM, and hard drive. They all work together to help the computer run smoothly and do all the things we need it to do. When we type on the keyboard or move the mouse, the computer uses these parts to figure out what we want it to do.

We can also use programs and apps on the computer to do different things like make art, write stories, or even code our own games! It's so much fun to explore all the things computers can do and learn how to use them in new and exciting ways.

So next time you turn on your computer or tablet, remember all the amazing things it can do and how lucky we are to have such cool technology to help us learn and play. Computers are awesome, and I can't wait to see what new things they can do in the future!

Essay 9

Title: Let's Talk About Computers!

Hey guys! Today I want to talk to you all about computers. Computers are super cool machines that can do so many things, like play games, watch videos, and even help us do our homework. They have different parts like the monitor, keyboard, mouse, and CPU. The CPU is like the brain of the computer; it helps it run smoothly and do all the things we ask it to.

Computers also have something called software, which is like all the programs and apps we use on it. We can use software to write documents, edit photos, or even make music. There are so many cool things we can do with computers!

But do you know how computers actually work? Well, inside the CPU there are tiny electronic parts called transistors. These transistors help the computer process information quickly and efficiently. Computers also use something called binary code, which is a series of 0s and 1s that the computer understands. It's like a secret language that only computers can understand!

So next time you use a computer, remember all the cool things it can do and how it works behind the scenes. Computers are amazing machines that help us in so many ways. Let's keep exploring and learning about them together!

Essay 10

Hello everyone, today I want to talk about computers! Computers are super cool machines that can help us do all kinds of things like play games, do homework, and even talk to our friends far away. In this article, I will share some fun facts and information about computers.

First of all, did you know that the first computer was as big as a room? Isn't that crazy? Nowadays, our computers can fit in our pockets or even on our wrists like a watch. Computers have become smaller and faster over the years, thanks to amazing technology.

Computers have different parts that work together to make them run. There is the central processing unit (CPU) that acts as the brain of the computer, the hard drive where we store all our files and games, and the monitor where we can see everything. It's like a big puzzle that needs all the pieces to work properly.

We can use computers for so many things. We can do research for school projects, watch videos on YouTube, or even make our own art using cool programs. The possibilities are endless with a computer by our side.

But we also need to be careful when using computers. We should always ask our parents for permission before going online and never share personal information with strangers. It's important to stay safe and responsible when using technology.

In conclusion, computers are amazing machines that have changed the way we live and learn. They help us in so many ways and make our lives easier. So next time you use a computer, remember to appreciate all the hard work and technology that goes into making it run smoothly. Thank you for reading and happy computing!
English-language literature about computers
The following are some English-language references about computers:

1. "Computer Science: The Discipline" by David Gries and Fred B. Schneider, published in 1993 in the journal Communications of the ACM.
2. "The Art of Computer Programming" by Donald E. Knuth, published in three volumes between 1968 and 1973.
3. "A Mathematical Theory of Communication" by Claude Shannon, published in 1948 in the Bell System Technical Journal.
4. "Operating Systems Design and Implementation" by Andrew S. Tanenbaum and Albert S. Woodhull, published in 1997.
5. "The Structure and Interpretation of Computer Programs" by Harold Abelson and Gerald Jay Sussman, published in 1984.
6. "Computer Networks" by Andrew S. Tanenbaum, published in 1981.
7. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, published in 1990.
8. "Foundations of Computer Science" by Alfred Aho and Jeffrey Ullman, published in 1992.
9. "Computer Architecture: A Quantitative Approach" by John L. Hennessy and David A. Patterson, first published in 1990.
10. "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig, first published in 1995.
Computer science English literature with translation
Management Information System Overview

A Management Information System, the MIS we often speak of, is a system composed of people, computers, and other elements that can collect, transmit, store, maintain, and use information. It emphasizes management and emphasizes information, and it is becoming increasingly widespread in the modern information society. MIS is a young discipline that cuts across a number of fields, such as management science, systems science, operations research, statistics, and computer science. On the basis of these subjects it forms its own methods for gathering and processing information, thereby weaving them together, vertically and horizontally, into a system.

In the 20th century, along with the vigorous development of the global economy, many economists proposed new management theories. In the 1950s, Simon advanced the idea that management depends on information and decision-making. In the same period Wiener published his theory of cybernetics, holding that management is a control process. In 1958, Gail wrote: "Management will obtain timely and accurate information at lower cost for better control." During this period computers began to be applied to accounting, and the term "data processing" appeared.

In 1970, Walter T. Kennevan gave the newly coined term "management information system" a definition: the provision, in verbal or written form, of information about the past, present, and projected future of an enterprise and its environment, at the right time, to managers, staff, and outside personnel. In this definition there is no application model and no mention of computer applications.

In 1985, Gordon B. Davis, a founder of the management information systems field and a professor of management at the University of Minnesota, gave a more complete definition: "A management information system is a system that uses computer hardware and software resources, manual procedures, models for analysis, planning, control, and decision-making, and a database. It provides information to support the operations, management, and decision-making functions of an enterprise or organization." This comprehensive definition explains the goals, functions, and composition of a management information system, and also reflects the level that management information systems had reached at the time.

With the continuous improvement of science and technology, computer science has matured, and the computer now accompanies our study and work. Today computers are very inexpensive yet greatly improved in performance, and they are used in many fields. The computer became so popular mainly for the following reasons: First, the computer can substitute for much complex labor. Second, the computer can greatly enhance people's working efficiency. Third, the computer can save a great deal of resources. Fourth, the computer can make sensitive documents more secure.

The application and popularization of computers has reached every field of economic and social life, so the old management methods are no longer suited to social development. Many people still rely on the earlier manual methods, which greatly hinders economic development.
In recent years, as the scale of university enrollment has grown, the number of students in school has increased and educational administration has become increasingly complex and burdensome, consuming a great deal of manpower and material resources while the existing level of student-record management remains low. Student achievement records have long been managed by the traditional manual, document-based method, which has many shortcomings: efficiency is low, confidentiality is poor, and, as time goes on, a large volume of files and data accumulates, making useful information very difficult to find, update, and maintain. Such a mechanism can no longer meet the needs of the times and has become more and more of a bottleneck in the day-to-day management of schools. In the information age, this traditional method of management will inevitably be replaced by computer-based information management.

As one part of computer application, using computers to manage student achievement information offers advantages that manual management cannot match, for example: rapid retrieval, convenient lookup, high reliability, large storage capacity, good confidentiality, long life, and low cost. These advantages can greatly improve the efficiency of student achievement management; they are also conditions for scientific, standardized school management and for connecting with the wider world. Therefore, developing such a set of management software is a very necessary undertaking.

The design philosophy is to work entirely for the sake of the user: the interface should be attractive and the operation as clear and simple as possible. At the same time, as a practical system, it should have good fault tolerance, giving the user a timely warning after a mis-operation so that it can be corrected promptly. The design should take full advantage of the functions of Visual FoxPro, producing powerful software while occupying as few system resources as possible.

Visual FoxPro's command structure and working methods:

Visual FoxPro was originally called FoxBASE, a database product introduced by the U.S. company Fox Software; it ran on DOS and was compatible with the dBase family. After Microsoft acquired Fox Software, the product was developed to run on Windows and renamed Visual FoxPro. Visual FoxPro is a powerful relational-database rapid application development tool. With Visual FoxPro one can create desktop database applications, client/server applications, and component-based programs for Web services, and one can also extend its functions through ActiveX controls, API functions, and other means.

I. Working methods

1. Interactive operation
(1) Command operation: in the Command window, operations of all kinds can be completed by typing VFP commands from the keyboard.
(2) Menu operation: VFP uses menus, windows, and dialog boxes to provide interactive operation through a graphical interface.
(3) Assisted operation: VFP provides a wide range of user-friendly tools, such as wizards, designers, and builders.

2. Program execution
In VFP, a group of commands can be organized with the programming language into a program saved in a .PRG file; running the file executes the commands automatically and displays the results.

II. Command structure
A VFP command generally consists of two parts. The first part is the command verb, also called the keyword, which designates the function the command performs; the second part consists of the command clauses, which specify the objects of the operation, the operating conditions, and other information. The general form of a VFP command is:

<command verb> [<command clauses>]

Conventions for symbols in command formats: VFP uses a unified set of symbols in describing command forms and functions. Angle brackets < > enclose required parameters, which must be entered according to the stated format; square brackets [ ] enclose optional parameters, which the user may choose to enter according to specific needs.

III. The Project Manager

1. Creation
From the Command window: CREATE PROJECT <file name>

2. Project Manager tabs
- All: displays and manages all types of files in the application project; the "All" tab contains, in their entirety, the five tabs to its right.
- Data: manages the various data files in the application project, such as databases, free tables, views, and query files.
- Docs: displays and manages forms, reports, labels, and other documents.
- Classes: displays and manages the class-library files used in the application project, including VFP's own class libraries and class libraries designed by the user.
- Code: manages the program-code files used in the project, such as program files (.PRG), API libraries, and the applications (.APP) generated from the project.

3. The work area
The work area of the Project Manager is the window in which files of all types are displayed and managed.

4. Command buttons
The buttons to the right of the Project Manager's work area provide commands for the files in the work window.

IV. Using the Project Manager

Command-button functions:
- New: with a file type selected in the work-area window, the New button creates a new file and adds it to the Project Manager window.
- Add: files created independently with the "New" command on the VFP "File" menu or with the "Wizard" commands on the "Tools" menu can be added to the Project Manager for unified organization and management.
- Modify: modifies a file that already exists in the project, still using that file's design interface.
- Run: with a particular file highlighted in the work-area window, runs that file.
- Remove: removes the selected file from the project.
- Build: links the files in the project into an executable application.

Database system design:

Database design here means logical database design: organizing data by a classification system and logical hierarchy so that it is user-oriented. Database design requires integrating the archival data and data requirements of an enterprise's various departments, analyzing the relationships among the various data items, and proceeding in accordance with the DBMS.
Computer Science Foreign-Literature Translation (Chinese-English)
English References and Translation

Linux: the Operating System of the Internet Age

For many people, the fact that Linux served as the main operating system of the huge workstation cluster that produced the special effects for "Titanic" would already count as glory enough. But for Linux this is only one piece of news among many. Recently, announcements of support for Linux from the manufacturers concerned have increased day by day, and users' enthusiasm for Linux has run higher than ever. What glamour, then, does this operating system, free of charge and no more than seven years old, possess that it has won the favor of so many users and of such important software and hardware manufacturers as Oracle, Informix, HP, Sybase, Corel, Intel, Netscape, and Dell?

1. The background and characteristics of Linux

Linux is "free software": users can obtain the program and its source code freely, and can use them freely, including revising or copying them. It is a product of the Internet age: numerous technical staff carried out its research and development together over the Internet, countless users have tested it and removed faults, and users can conveniently add extended functions of their own. As the most outstanding member of the free-software world, Linux has the following characteristics:

(1) It fully follows the POSIX standard and is an extended network operating system supporting all the characteristics of AT&T and BSD Unix. Because it inherits Unix's outstanding design philosophy and has a clean, robust, efficient, and stable kernel, whose key code was written by Linus Torvalds and other outstanding programmers without any Unix code from AT&T or Berkeley, Linux is not Unix, but it is fully compatible with Unix.

(2) It is a true multi-tasking, multi-user system with built-in network support, and can link seamlessly with NetWare, Windows NT, OS/2, Unix, and others. In comparative tests of network efficiency among the various kinds of Unix it proved the fastest. It simultaneously supports many kinds of file systems, such as FAT16, FAT32, NTFS, Ext2FS, and ISO9660.

(3) It can run on many kinds of hardware platforms, including processors such as Alpha, Sun SPARC, PowerPC, and MIPS, and support for various kinds of new peripheral hardware can be obtained rapidly from the numerous programmers distributed around the globe.

(4) Its hardware requirements are low, so very good performance can be obtained on lower-grade machines; particularly worth mentioning is Linux's outstanding stability, whose running time between restarts is often counted in years.

2. Main applications of Linux

At present, the applications of Linux mainly include:

(1) Internet/Intranet: this is currently the area in which Linux is used most. It can offer all Internet services, including Web servers, FTP servers, Gopher servers, SMTP/POP3 mail servers, Proxy/Cache servers, DNS servers, and so on. The Linux kernel supports IP aliasing, PPP, and IP tunneling; these functions can be used for setting up virtual hosts, virtual services, VPNs (virtual private networks), and the like.
The Apache Web server, running mainly on Linux, held a 49% market share in 1998, far exceeding the combined share of such big companies as Microsoft and Netscape.

(2) Because Linux has outstanding networking ability, it can be used in large-scale distributed computing, for instance animation rendering, scientific computation, and database and file servers.

(3) As a full implementation of Unix that can run on low-end platforms, it is applied extensively in teaching and research at all levels of universities and colleges; the Mexican government, for example, has announced that primary and middle schools throughout the country will deploy Linux and offer Internet service to students.

(4) Desktop and office applications. The number of users in this respect is still far below that of Microsoft Windows. The reason lies not merely in the fact that the quantity of desktop application software for Linux falls far short of that for Windows; the nature of free software also means it has almost no advertising support (even though Star Office, for example, is not inferior in function to MS Office, few people actually know of it).

3. Can Linux become a major operating system?

Facing pressure from users that strengthens day by day, more and more commercial companies are porting their applications to the Linux platform. The comparatively important events of 1998 were as follows. ① Compaq and HP decided to ship Linux pre-installed on their servers at customers' request, and IBM and Dell promised to offer customized Linux systems to users as well. ② Lotus announced that the next edition of Notes would include a special edition for Linux. ③ Corel ported its famous WordPerfect to Linux and issued it free of charge; Corel also plans to move its other graphics-processing products to the Linux platform completely. ④ The main database producers Sybase, Informix, Oracle, CA, and IBM have already ported their database products to Linux or finished beta editions, and among them Oracle and Informix also offer technical support for their products.

4. The gratifying thing is that some farsighted domestic corporations have already begun trying hard to change the current situation. Stone Co. not long ago announced a huge investment to develop an Internet/Intranet solution with Linux as the platform, to launch Stone's system-integration business around this core, and at the same time to set up a nationwide Linux technical-support organization, taking the lead in promoting the application and development of free software in China. In addition, other domestic computer companies have devoted themselves to popularizing Linux-related software and hardware systems. As understanding of Linux deepens, it can be believed that more and more domestic enterprises will join the ranks of Linux users, and that more software will be ported to the Linux platform. Meanwhile, domestic universities should take Linux as the original edition with which to upgrade their existing Unix course content, start with analyzing the source code and revising the kernel, train a large number of senior Linux talents, and improve our country's own operating systems.
Only by really mastering the operating system can our country's software industry shake off its present passive state of sedulous imitation, of being led by the nose by others, and create the conditions for fundamentally revitalizing itself.
English References on Computing
Progress in Computers
Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2004
Maurice Wilkes
Computer Laboratory, University of Cambridge

The first stored-program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted, and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering, and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high-level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long, minicomputers began to spread and become more powerful.
The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry "going the other way". As time goes on, people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level, and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done, and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled "The Case for the Reduced Instruction Set Computer".

Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one.
They refer to such a roomful by the attractive name of "computer farm".

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set, and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high-level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor, where the critical execution is performed.

In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64-bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.

Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost.
Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand, but I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64-bit instruction set that is more compatible with x86, and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron, held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law, or Sod's law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949.
However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem: "A cyclist sets out on a circular cycling tour. Derive an equation giving the direction of the wind at any time."

The Single-Chip Computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor, except for the caches, could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From that time on, the high-density CMOS silicon chip was cock of the roost. Shrinkage went on until millions of transistors could be put on a single chip, and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution, and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single-chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91 (the giant computer at the top of the System 360 range) are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek ways to get them made. For a time this was possible, if not easy. Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time, US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However, about 1980 significant and far-reaching changes took place in the laws. The concept of pre-competitive research was introduced.
Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semiconductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years SIA produces a new version of a document known as the International Technology Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title "Roadmap" was issued in 1994, but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive Roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15-year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall.

In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement, and it may be said that the merits of cooperation and competition have been combined in an admirable manner.

It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004, and in the event the industry got there a little sooner.

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London, entitled "The CMOS end-point and related topics in Computing", delivered on 8 February 1996.

The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point, statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work or, if they did work, would not be any faster.
In fact, the physical limitations that are now beginning to make themselves felt do not arise through a shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum-mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that "the continuation of progress at the present rate will be at risk as we approach 2005, when the roadmap projects that progress will stall without research break-throughs in most technical areas". This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming ahead, it radiates immense confidence. Prevailing opinion reflects that confidence, and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.
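As a footnote to the Roadmap discussion above, the doubling arithmetic is easy to check. The following is an illustrative sketch only; the starting count of one million transistors and the fixed 18-month doubling period are assumptions chosen for demonstration, not figures taken from the lecture:

```python
# Illustrative only: project transistor counts under Moore's law,
# assuming a doubling every 18 months (the Roadmap's working assumption).
def transistors(t0_count: float, years: float, doubling_months: float = 18.0) -> float:
    """Transistor count after `years`, doubling every `doubling_months`."""
    return t0_count * 2 ** (years * 12.0 / doubling_months)

# Hypothetical starting point: one million transistors.
for y in (0, 3, 6, 9, 12, 15):
    print(f"year {y:2d}: {transistors(1e6, y):,.0f} transistors")
```

Over the Roadmap's 15-year horizon this gives a factor of 2^10, roughly a thousandfold increase, which is why the targets have to be set out in such detail so far ahead.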
English Literature Translation (on ZigBee)
1.1 Standards

Wireless sensor standards have been developed with the key design requirement of low power consumption. Each standard defines the functions and protocols necessary for sensor nodes to interface with a variety of networks. Some of these standards include IEEE 802.15.4, ZigBee, WirelessHART, ISA100.11a, IETF 6LoWPAN, IEEE 802.15.3, and Wibree. The following paragraphs describe these standards in more detail.

IEEE 802.15.4: IEEE 802.15.4 [37] is the proposed standard for low-rate wireless personal area networks (LR-WPANs). IEEE 802.15.4 focuses on low cost of deployment, low complexity, and low power consumption. It is designed for wireless sensor applications that require short-range communication to maximize battery life. The standard allows the formation of star and peer-to-peer topologies for communication between network devices. Devices in the star topology communicate with a central controller, while in the peer-to-peer topology ad hoc and self-configuring networks can be formed. IEEE 802.15.4 devices are designed to support the physical and data-link layer protocols. The physical layer supports the 868/915 MHz low bands and the 2.4 GHz high band. The MAC layer controls access to the radio channel using the CSMA-CA mechanism. The MAC layer is also responsible for validating frames, frame delivery, network interface, network synchronization, device association, and secure services. Wireless sensor applications using IEEE 802.15.4 include residential, industrial, and environmental monitoring, control, and automation.

ZigBee: ZigBee [38,39] defines the higher-layer communication protocols built on the IEEE 802.15.4 standard for LR-WPANs. ZigBee is a simple, low-cost, and low-power wireless communication technology used in embedded applications. ZigBee devices can form mesh networks connecting hundreds to thousands of devices together. ZigBee devices use very little power and can operate on a cell battery for many years. There are three types of ZigBee devices: ZigBee coordinator, ZigBee router, and ZigBee end device. The ZigBee coordinator initiates network formation, stores information, and can bridge networks together. ZigBee routers link groups of devices together and provide multi-hop communication across devices. A ZigBee end device consists of the sensors, actuators, and controllers that collect data and communicate only with a router or the coordinator. The ZigBee standard became publicly available in June 2005.

WirelessHART: The WirelessHART [40,41] standard provides a wireless network communication protocol for process measurement and control applications. The standard is based on IEEE 802.15.4 for low-power 2.4 GHz operation. WirelessHART is compatible with all existing HART devices, tools, and systems. WirelessHART is reliable, secure, and energy-efficient. It supports mesh networking, channel hopping, and time-synchronized messaging. Network communication is secure, with encryption, verification, authentication, and key management. Power-management options enable the wireless devices to be more energy-efficient. WirelessHART is designed to support mesh, star, and combined network topologies.
A WirelessHART network consists of wireless field devices, gateways, a process automation controller, host applications, and a network manager. Wireless field devices are connected to process or plant equipment. Gateways enable the communication between the wireless field devices and the host applications. The process automation controller serves as a single controller for continuous processes. The network manager configures the network and schedules communication between devices; it also manages the routing and network traffic. The network manager can be integrated into the gateway, host application, or process automation controller. The WirelessHART standard was released to the industry in September 2007 and will soon be available in commercial products.

ISA100.11a: The ISA100.11a [42] standard is designed for low-data-rate wireless monitoring and process automation applications. It defines the specifications for the OSI layers, security, and system management. The standard focuses on low energy consumption, scalability, infrastructure, robustness, and interoperability with other wireless devices. ISA100.11a networks use only the 2.4 GHz radio band and channel hopping to increase reliability and minimize interference. It offers both mesh and star network topologies. ISA100.11a also provides simple, flexible, and scalable security functionality.

6LoWPAN: IPv6-based Low-power Wireless Personal Area Networks [43-45] enable IPv6 packet communication over an IEEE 802.15.4 based network. Low-power devices can communicate directly with IP devices using IP-based protocols. Using 6LoWPAN, low-power devices have all the benefits of IP communication and management. The 6LoWPAN standard provides an adaptation layer, a new packet format, and address management. Because IPv6 packet sizes are much larger than the frame size of IEEE 802.15.4, an adaptation layer is used. The adaptation layer carries out the functionality for header compression; with header compression, smaller packets are created to fit into an IEEE 802.15.4 frame. An address-management mechanism handles the forming of device addresses for communication. 6LoWPAN is designed for applications with low-data-rate devices that require Internet communication.

IEEE 802.15.3: IEEE 802.15.3 [46] is a physical and MAC layer standard for high-data-rate WPANs. It is designed to support real-time multimedia streaming of video and music. IEEE 802.15.3 operates on a 2.4 GHz radio and has data rates starting from 11 Mbps up to 55 Mbps. The standard uses time division multiple access (TDMA) to ensure quality of service. It supports both synchronous and asynchronous data transfer and addresses power consumption, data-rate scalability, and frequency performance. The standard is used in devices such as wireless speakers, portable video electronics, and wireless connectivity for gaming, cordless phones, printers, and televisions.

Wibree: Wibree [47] is a wireless communication technology designed for low power consumption, short-range communication, and low-cost devices. Wibree allows communication between small battery-powered devices and Bluetooth devices. Small battery-powered devices include watches, wireless keyboards, and sports sensors, which connect to host devices such as personal computers or cellular phones. Wibree operates on 2.4 GHz and has a data rate of 1 Mbps. The linking distance between devices is 5-10 m. Wibree is designed to work with Bluetooth. Bluetooth with Wibree makes the devices smaller and more energy-efficient.
Bluetooth-Wibree utilizes the existing Bluetooth RF and enables ultra-low power consumption. Wibree was released publicly in October 2006.

1.2 Introduction

Wireless sensor networks (WSNs) have gained worldwide attention in recent years, particularly with the proliferation of Micro-Electro-Mechanical Systems (MEMS) technology, which has facilitated the development of smart sensors. These sensors are small, with limited processing and computing resources, and they are inexpensive compared to traditional sensors. These sensor nodes can sense, measure, and gather information from the environment and, based on some local decision process, they can transmit the sensed data to the user.

Smart sensor nodes are low-power devices equipped with one or more sensors, a processor, memory, a power supply, a radio, and an actuator. A variety of mechanical, thermal, biological, chemical, optical, and magnetic sensors may be attached to the sensor node to measure properties of the environment. Since the sensor nodes have limited memory and are typically deployed in difficult-to-access locations, a radio is implemented for wireless communication to transfer the data to a base station (e.g., a laptop, a personal handheld device, or an access point to a fixed infrastructure). A battery is the main power source in a sensor node. A secondary power supply that harvests power from the environment, such as solar panels, may be added to the node depending on the suitability of the environment where the sensor will be deployed. Depending on the application and the type of sensors used, actuators may be incorporated in the sensors.

A WSN typically has little or no infrastructure. It consists of a number of sensor nodes (from a few tens to thousands) working together to monitor a region and obtain data about the environment. There are two types of WSNs: structured and unstructured. An unstructured WSN is one that contains a dense collection of sensor nodes, which may be deployed in an ad hoc manner (that is, placed randomly) into the field. Once deployed, the network is left unattended to perform monitoring and reporting functions. In an unstructured WSN, network maintenance such as managing connectivity and detecting failures is difficult, since there are so many nodes. In a structured WSN, all or some of the sensor nodes are deployed in a pre-planned manner. The advantage of a structured network is that fewer nodes can be deployed, with lower network maintenance and management cost: nodes are placed at specific locations to provide coverage, whereas an ad hoc deployment can leave uncovered regions.

WSNs have great potential for many applications in scenarios such as military target tracking and surveillance [2,3], natural disaster relief [4], biomedical health monitoring [5,6], and hazardous environment exploration and seismic sensing [7]. In military target tracking and surveillance, a WSN can assist in intrusion detection and identification; specific examples include spatially correlated and coordinated troop and tank movements. With natural disasters, sensor nodes can sense and detect the environment to forecast disasters before they occur.
In biomedical applications, surgical implants of sensors can help monitor a patient's health. For seismic sensing, ad hoc deployment of sensors along a volcanic area can detect the development of earthquakes and eruptions.

Unlike traditional networks, a WSN has its own design and resource constraints. Resource constraints include a limited amount of energy, short communication range, low bandwidth, and limited processing and storage in each node. Design constraints are application-dependent and are based on the monitored environment. The environment plays a key role in determining the size of the network, the deployment scheme, and the network topology. The size of the network varies with the monitored environment. For indoor environments, fewer nodes are required to form a network in a limited space, whereas outdoor environments may require more nodes to cover a larger area. An ad hoc deployment is preferred over pre-planned deployment when the environment is inaccessible by humans or when the network is composed of hundreds to thousands of nodes. Obstructions in the environment can also limit communication between nodes, which in turn affects the network connectivity (or topology).

Research in WSNs aims to meet the above constraints by introducing new design concepts, creating or improving existing protocols, building new applications, and developing new algorithms. In this study, we present a top-down approach to survey different protocols and algorithms proposed in recent years. Our work differs from other surveys as follows:

• While our survey is similar to [1], our focus has been to survey the more recent literature.
• We address the issues in a WSN both at the individual sensor node level and at a group level.
• We survey the current provisioning, management, and control issues in WSNs. These include issues such as localization, coverage, synchronization, network security, and data aggregation and compression.
• We compare and contrast the various types of wireless sensor networks.
• Finally, we provide a summary of the current sensor technologies.

The remainder of this paper is organized as follows: Section 2 gives an overview of the key issues in a WSN. Section 3 compares the different types of sensor networks. Section 4 discusses several applications of WSNs. Section 5 presents issues in operating system support, supporting standards, storage, and physical testbeds. Section 6 summarizes the control and management issues. Section 7 classifies and compares the proposed physical-layer, data-link-layer, network-layer, and transport-layer protocols. Section 8 concludes this paper. Appendix A compares the existing types of WSNs. Appendix B summarizes the sensor technologies. Appendix C compares sensor applications with the protocol stack.

1.3 Overview of key issues

Current state-of-the-art sensor technology provides a solution for designing and developing many types of wireless sensor applications. A summary of existing sensor technologies is provided in the appendices. Available sensors on the market include generic (multi-purpose) nodes and gateway (bridge) nodes. A generic (multi-purpose) sensor node's task is to take measurements from the monitored environment. It may be equipped with a variety of devices which can measure various physical attributes such as light, temperature, humidity, barometric pressure, velocity, acceleration, acoustics, magnetic field, etc. Gateway (bridge) nodes gather data from generic sensors and relay them to the base station.
Gateway nodes have higher processing capability, battery power, and transmission (radio) range. A combination of generic and gateway nodes is typically deployed to form a WSN.

To enable wireless sensor applications using sensor technologies, the range of tasks can be broadly classified into three groups, as shown in Fig. 1. The first group is the system: each sensor node is an individual system, and in order to support different application software on a sensor system, development of new platforms, operating systems, and storage schemes is needed. The second group is communication protocols, which enable communication between the application and the sensors and also enable communication between the sensor nodes. The last group is services, which are developed to enhance the application and to improve system performance and network efficiency.

From the perspectives of application requirements and network management, it is important that sensor nodes are capable of self-organizing: the sensor nodes can organize themselves into a network and subsequently control and manage themselves efficiently. As sensor nodes are limited in power, processing capacity, and storage, new communication protocols and management services are needed to fulfill these requirements.

The communication protocol stack consists of five standard protocol layers for packet switching: the application layer, transport layer, network layer, data-link layer, and physical layer. In this survey, we study how protocols at different layers address network dynamics and energy efficiency. Functions such as localization, coverage, storage, synchronization, security, and data aggregation and compression are explored as sensor network services.

Implementation of protocols at different layers in the protocol stack can significantly affect energy consumption, end-to-end delay, and system efficiency. It is important to optimize communication and minimize energy usage. Traditional networking protocols do not work well in a WSN since they are not designed to meet these requirements. Hence, new energy-efficient protocols have been proposed for all layers of the protocol stack. These protocols employ cross-layer optimization by supporting interactions across the protocol layers; specifically, protocol state information at a particular layer is shared across all the layers to meet the specific requirements of the WSN.

As sensor nodes operate on limited battery power, energy usage is a very important concern in a WSN, and there has been significant research focus on harvesting and minimizing energy. When a sensor node is depleted of energy, it will die and disconnect from the network, which can significantly impact the performance of the application. Sensor network lifetime depends on the number of active nodes and the connectivity of the network, so energy must be used efficiently in order to maximize the network lifetime.

Energy harvesting involves nodes replenishing their energy from an energy source. Potential energy sources include solar cells [8,9], vibration [10], fuel cells, acoustic noise, and a mobile supplier [11]. In terms of harvesting energy from the environment [12], the solar cell is the currently mature technique for harvesting energy from light. There is also work on using a mobile energy supplier, such as a robot, to replenish energy.
The robots would be responsible for charging themselves with energy and then delivering energy to the nodes.

Energy conservation in a WSN maximizes network lifetime and is addressed through efficient, reliable wireless communication, intelligent sensor placement to achieve adequate coverage, security and efficient storage management, and data aggregation and data compression. These approaches aim both to satisfy the energy constraint and to provide quality of service (QoS) for the application. For reliable communication, services such as congestion control, active buffer monitoring, acknowledgements, and packet-loss recovery are necessary to guarantee reliable packet delivery. Communication strength depends on the placement of sensor nodes: sparse sensor placement may result in long-range transmission and higher energy usage, while dense sensor placement may result in short-range transmission and less energy consumption. Coverage is interrelated with sensor placement. The total number of sensors in the network and their placement determine the degree of network coverage. Depending on the application, a higher degree of coverage may be required to increase the accuracy of the sensed data. In this survey, we review new protocols and algorithms developed in these areas.
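Because network lifetime ultimately reduces to per-node energy draw, a rough back-of-the-envelope model can make the energy-conservation point concrete. The sketch below is a minimal illustration, not taken from any of the surveyed papers; the battery capacity, current draws, and duty cycle are hypothetical values:

```python
# Rough lifetime estimate for a duty-cycled sensor node.
# All numbers are hypothetical; real values depend on the hardware.
BATTERY_MAH = 2500.0   # e.g. a pair of AA cells
ACTIVE_MA   = 20.0     # radio + MCU active current
SLEEP_MA    = 0.01     # deep-sleep current
DUTY_CYCLE  = 0.01     # node is active 1% of the time

# Time-weighted average current, then hours of operation.
avg_ma = DUTY_CYCLE * ACTIVE_MA + (1 - DUTY_CYCLE) * SLEEP_MA
hours = BATTERY_MAH / avg_ma
print(f"average draw: {avg_ma:.3f} mA, lifetime: {hours / 24:.0f} days")
```

Even this crude model shows why duty cycling dominates lifetime: at a 1% duty cycle the average draw is set almost entirely by the active current, which is exactly what the MAC- and routing-layer optimizations surveyed here try to reduce.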
Wireless Sensor Networks Paper (English Version)
Wireless Sensor Networks: A Research Paper

Abstract: Wireless Sensor Networks (WSNs) have emerged as a revolutionary technology in the field of wireless communication. This research paper aims to provide an overview of WSNs, their applications, challenges, and future prospects.

1. Introduction
Wireless Sensor Networks are interconnected nodes that communicate with each other through wireless protocols. These nodes, equipped with sensors, provide real-time data from physical environments. WSNs have gained significant attention due to their applicability in various industries such as healthcare, agriculture, environmental monitoring, and surveillance.

2. Architecture of Wireless Sensor Networks
The architecture of WSNs consists of three main components: sensor nodes, sinks or base stations, and a network infrastructure. Sensor nodes gather information from the environment and transmit it to the sink or base station via multi-hopping or direct transmission. The network infrastructure manages the routing and data-aggregation processes.

3. Applications of Wireless Sensor Networks
3.1 Environmental Monitoring: WSNs play a crucial role in monitoring environmental parameters such as temperature, humidity, air quality, and water quality. This data is essential for environmental research, disaster management, and habitat monitoring.
3.2 Healthcare: WSNs have revolutionized the healthcare industry by enabling remote patient monitoring, fall detection, and medication adherence. These networks assist in providing personalized and timely healthcare services.
3.3 Agriculture: In the agricultural sector, WSNs are deployed for crop monitoring, irrigation management, and pest control. The data collected by these networks helps farmers enhance crop productivity and reduce resource wastage.
3.4 Surveillance: WSNs are extensively employed in surveillance systems to monitor public areas and traffic congestion and to ensure public safety. These networks provide real-time data for efficient decision-making and threat detection.

4. Challenges in Wireless Sensor Networks
4.1 Energy Efficiency: Sensor nodes in WSNs are usually battery-powered, making energy efficiency a critical challenge. Researchers are focused on developing energy-efficient protocols and algorithms to prolong the network's lifespan.
4.2 Security and Privacy: As WSNs collect sensitive data, ensuring the security and privacy of transmitted information is crucial. Encryption techniques, intrusion detection systems, and secure routing protocols are being developed to address these concerns.
4.3 Scalability: Scalability is a critical challenge in the large-scale deployment of WSNs. Designing scalable architectures and protocols enables efficient communication and management of a large number of sensor nodes.

5. Future Prospects of Wireless Sensor Networks
The future of WSNs is promising, with advancements in technologies such as the Internet of Things (IoT) and Artificial Intelligence (AI). Integration of WSNs with IoT devices will enable seamless communication and data exchange, and AI algorithms can facilitate intelligent data analysis and decision-making.

Conclusion: Wireless Sensor Networks have shown tremendous potential in various fields and continue to evolve with advancements in technology. Addressing the energy-efficiency, security, and scalability challenges will contribute to the widespread adoption of WSNs.
As researchers continue to explore new possibilities, WSNs will become an integral part of our daily lives, transforming industries and enhancing our quality of life.
English Literature Translation (Computer Science)
A Comparative Analysis of Network Dependability, Fault-tolerance, Reliability, Security, and Survivability
M. Al-Kuwaiti, Member, IEEE; N. Kyriakopoulos, Senior Member, IEEE; S. Hussein, Member, IEEE

Abstract: A number of qualitative and quantitative terms are used to describe the performance of what have come to be known as information systems, networks, or infrastructures. However, some of the terms defined for the rigorous evaluation of the performance of those systems suffer from overlapping definitions or ambiguity. This problem arises because the information technology field encompasses a wide variety of disciplines, each of which has defined its own distinctive terminology. This paper presents a systematic approach for determining the common and complementary characteristics of five widely used concepts: dependability, fault-tolerance, reliability, security, and survivability. The definitions of the five concepts are analyzed, and the similarities and differences among them are discussed.

Keywords: dependability, fault-tolerance, reliability, security, survivability.

Overview

Disruptions to the operation of various infrastructures make it very important to establish mechanisms that reduce the impact of such disruptions and improve infrastructure performance. The problem begins with the composition of the infrastructures. They are made up of systems that have developed and matured within different disciplines. The hardware portion of an information infrastructure includes electrical engineering equipment from many fields, and the software portion includes developments from every discipline of computer science, to name only two examples. Products from different fields combine into complex systems, including human components, which adds to the difficulty of developing efficient mechanisms for analyzing and improving the operation of infrastructures. One of the problems can be attributed to the ambiguity of the terminology used to describe performance in different fields. Among the terms a designer or user faces, some may be complementary, some synonymous, and some in between. Therefore, it is necessary to reach agreement on a set of terms whose meanings do not depend on a specific discipline and that can be used as widely as possible.
English Literature, Science and Technology, Original Text and Translation (Electronics, Electrical, Automation, Communications, ...) 8
Switching Power Supply

Contents
1 Switching Power Supply
  1.1 Linear versus Switching Power Supplies
  1.2 Basic Converters
    1.2.1 Forward-Mode Converter Fundamentals
    1.2.2 Flyback or Boost-Mode Converter Fundamentals
  1.3 Topologies
2 Operational Amplifiers

1 Switching Power Supply

Every new electronic product, except those that are battery powered, requires converting off-line 115 V ac or 230 V ac power to some dc voltage for powering the electronics. Efficient conversion of electrical power is becoming a primary concern to companies and to society as a whole. Switching power supplies offer not only higher efficiencies but also greater flexibility to the designer. Recent advances in semiconductor, magnetic, and passive technologies make the switching power supply an ever more popular choice in the power conversion arena today.

1.1 Linear versus Switching Power Supplies

Historically, the linear regulator was the primary method of creating a regulated output voltage. It operates by reducing a higher input voltage down to the lower output voltage by linearly controlling the conductivity of a series pass power device in response to changes in its load. This results in a large voltage being placed across the pass unit with the load current flowing through it. This headroom loss ($V_{drop} \times I_{load}$) causes the linear regulator to be only 30 to 50 percent efficient. That means that for each watt delivered to the load, at least a watt has to be dissipated in heat. The cost of the heatsink actually makes the linear regulator uneconomical above 10 watts for small applications. Below that point, however, linear regulators are cost effective in step-down applications.

The switching regulator operates the power devices in the full-on and cutoff states. This results in either large currents being passed through the power devices at a low "on" voltage, or no current flowing with a high voltage across the device. This results in much lower power being dissipated within the supply. The average switching power supply exhibits efficiencies of between 70 and 90 percent, regardless of the input voltage. Higher levels of integration have driven the cost of switching power supplies downward, which makes them an attractive choice for output powers greater than 10 watts or where multiple outputs are desired.

1.2 Basic Converters

1.2.1 Forward-Mode Converter Fundamentals

The most elementary forward-mode converter is the Buck or Step-down Converter, which can be seen in Figure 3.1. Its operation can be seen as having two distinct time periods, which occur when the series power switch is on and off. When the power switch is on, the input voltage is connected to the input of the inductor. The output of the inductor is the output voltage, and the rectifier is back-biased. During this period, since there is a constant voltage source connected across the inductor, the inductor current begins to linearly ramp upward, which is described by:

$i_L(t_{on}) = \frac{(V_{in} - V_{out}) \cdot t_{on}}{L}$

During the "on" period, energy is stored within the core material of the inductor in the form of flux. There is sufficient energy stored to carry the requirements of the load during the next off period. The next period is the "off" period of the power switch. When the power switch turns off, the input voltage of the inductor flies below ground and is clamped at one diode drop below ground by the catch diode.
Current now begins to flow through the catch diode, thus maintaining the load current loop. This removes the stored energy from the inductor. The inductor current during this time is:

$i_L(t_{off}) = \frac{(V_{out} + V_D) \cdot t_{off}}{L}$

This period ends when the power switch is once again turned on. Regulation is accomplished by varying the on-to-off duty cycle of the power switch. The relationship which approximately describes its operation is:

$V_{out} \approx \delta \cdot V_{in}$

where $\delta$ is the duty cycle ($\delta = t_{on} / (t_{on} + t_{off})$).

The buck converter is capable of kilowatts of output power, but suffers from one serious shortcoming: if the power switch were to fail short-circuited, the input power source would be connected directly to the load circuitry, which usually produces catastrophic results. To avoid this situation, a crowbar is placed across the output. A crowbar is a latching SCR which is fired when the output is sensed as entering an overvoltage condition. The buck converter should only be used for board-level regulation.

1.2.2 Flyback or Boost-Mode Converter Fundamentals

The most elementary flyback-mode converter is the Boost or Step-up Converter. Its schematic can be seen in Figure 3.2. Its operation can also be broken into two distinct periods where the power switch is on or off. When the power switch turns on, the input voltage source is placed directly across the inductor. This causes the current to begin linearly ramping upward from zero, which is described by:

$i_L(t_{on}) = \frac{V_{in} \cdot t_{on}}{L}$

Once again, energy is stored in the core. The energy stored during each cycle times the frequency of operation must be higher than the power demand of the load, or:

$P_{sto} = 0.5 \cdot L \cdot I_{pk}^2 \cdot f_{op} > P_{out}$

The power switch then turns off, and the inductor voltage flies back above the input voltage, where it is clamped by the rectifier at the output voltage. The current then begins to linearly ramp downward until the energy within the core is completely depleted. Its waveform, which is shown in Figure 3.3, is determined by:

$i_L(t_{off}) = \frac{(V_{out} - V_{in}) \cdot t_{off}}{L}$

The boost converter should also be used only for board-level regulation.

1.3 Topologies

A topology is the arrangement of the power devices and their magnetic elements. Each topology has its own merits within certain applications. Some of the factors which determine the suitability of a particular topology to a certain application are:

1) Is the topology electrically isolated from the input to the output or not?
2) How much of the input voltage is placed across the inductor or transformer?
3) What is the peak current flowing through the power semiconductors?
4) Are multiple outputs required?
5) How much voltage appears across the power semiconductors?

The first choice that faces the designer is whether to have input-to-output transformer isolation. Non-isolated switching power supplies are typically used for board-level regulation where a dielectric barrier is provided elsewhere within the system. Non-isolated topologies should also be used where the possibility of a failure does not connect the input power source to the fragile load circuitry. Transformer isolation should be used in all other situations. Associated with that is the need for multiple output voltages. Transformers provide an easy method for adding additional output voltages to the switching power supply. Companies building their own power systems are leaning toward transformer isolation in as many power supplies as possible, since it prevents a domino effect during failure conditions.
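As a quick numerical check of the converter equations above, the following Python sketch computes a buck converter's duty cycle and current ramps. The component values are made-up assumptions for illustration, not values from the text.

```python
# Illustrative check of the buck-converter equations above.
# All component values are assumed for this example only.

V_IN = 12.0    # input voltage (V)
V_OUT = 5.0    # regulated output voltage (V)
V_D = 0.6      # catch-diode forward drop (V), assumed
L = 100e-6     # inductance (H)
F_SW = 100e3   # switching frequency (Hz)

# Duty cycle from V_out ≈ δ · V_in
duty = V_OUT / V_IN
period = 1.0 / F_SW
t_on = duty * period
t_off = period - t_on

# On-period ramp:  Δi_L = (V_in − V_out) · t_on / L
di_on = (V_IN - V_OUT) * t_on / L
# Off-period ramp: Δi_L = (V_out + V_D) · t_off / L
di_off = (V_OUT + V_D) * t_off / L

print(f"duty cycle    δ  = {duty:.3f}")
print(f"on-time ramp  Δi = {di_on:.3f} A")
print(f"off-time ramp Δi = {di_off:.3f} A")
```

In steady state the two ramps must balance; the small mismatch printed here reflects the ideal duty-cycle approximation, which ignores the diode drop.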
Foreign Literature — Computer Networks
English original: Computer network

A computer network, often simply referred to as a network, is a collection of computers and devices interconnected by communications channels that facilitate communications among users and allow users to share resources. Networks may be classified according to a wide variety of characteristics. A computer network allows sharing of resources and information among interconnected devices.

History: Early networks of communicating computers included the military radar system Semi-Automatic Ground Environment (SAGE) and its relative, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE), started in the late 1950s.[1][2] After the Soviet Union launched the Sputnik satellite in 1957, the United States established the Advanced Research Projects Agency (ARPA), which began funding research into computer networking and the sharing of information among computers. In the 1960s, ARPA started funding the design of the Advanced Research Projects Agency Network (ARPANET) for the United States Department of Defense. Development of the network began in 1969, based on designs developed during the 1960s.[3] The ARPANET evolved into the modern Internet.

Purpose: Computer networks can be used for a variety of purposes:

Facilitating communications. Using a network, people can communicate efficiently and easily via email, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing.

Sharing hardware. In a networked environment, each computer on a network may access and use hardware resources on the network, such as printing a document on a shared network printer.

Sharing files, data, and information. In a network environment, authorized users may access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.

Sharing software. Users connected to a network may run application programs on remote computers.

Information preservation and security.

Network classification: The following list presents categories used for classifying networks.

Connection method: Computer networks can be classified according to the hardware and software technology that is used to interconnect the individual devices in the network, such as optical fiber, Ethernet, wireless LAN, HomePNA, power line communication, or G.hn. Ethernet, as defined by IEEE 802, utilizes various standards and media that enable communication between devices. Frequently deployed devices include hubs, switches, bridges, and routers. Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium. ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines, and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.

Wired technologies: Twisted pair wire is the most widely used medium for telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer networking cabling consists of four pairs of copper cabling that can be utilized for both voice and data transmission.
The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. Transmission speeds range from 2 million bits per second to 100 million bits per second. Twisted-pair cabling comes in two forms, Unshielded Twisted Pair (UTP) and Shielded Twisted Pair (STP), which are rated in categories manufactured in different increments for various scenarios.

Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire wrapped with an insulating layer, typically of a flexible material with a high dielectric constant, all of which is surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.

Optical fiber cable consists of one or more filaments of glass fiber wrapped in protective layers. It transmits light, which can travel over extended distances. Fiber-optic cables are not affected by electromagnetic radiation. Transmission speed may reach trillions of bits per second. The transmission speed of fiber optics is hundreds of times faster than for coaxial cables and thousands of times faster than for twisted-pair wire.

Wireless technologies:

Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers. The equipment looks similar to satellite dishes. Terrestrial microwaves use the low-gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 30 miles apart. Microwave antennas are usually placed on top of buildings, towers, hills, and mountain peaks.

Communications satellites – The satellites use microwave radio as their telecommunications medium, which is not deflected by the Earth's atmosphere. The satellites are stationed in space, typically 22,000 miles above the equator (for geosynchronous satellites). These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.

Cellular and PCS systems – These use several radio communications technologies. The systems are divided into different geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next.

Wireless LANs – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. An example of open-standards wireless radio-wave technology is IEEE 802.11.

Infrared communication can transmit signals between devices within small distances, not more than 10 meters, peer to peer (face to face), without any obstruction in the line of transmission.

Scale: Networks are often classified as local area network (LAN), wide area network (WAN), metropolitan area network (MAN), personal area network (PAN), virtual private network (VPN), campus area network (CAN), storage area network (SAN), and others, depending on their scale, scope, and purpose; e.g., controller area network (CAN) usage, trust level, and access rights often differ between these types of networks.
LANs tend to be designed for internal use by an organization's internal systems and employees in individual physical locations, such as a building, while WANs may connect physically separate parts of an organization and may include connections to third parties.

Functional relationship (network architecture): Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., active networking, client–server, wireless ad hoc network, and peer-to-peer (workgroup) architecture.

Network topology: Computer networks may be classified according to the network topology upon which the network is based, such as bus network, star network, ring network, or mesh network. Network topology is the arrangement by which devices in the network relate logically to one another, independent of physical arrangement. Even if networked computers are physically placed in a linear arrangement and are connected to a hub, the network has a star topology, rather than a bus topology. In this regard the visual and operational characteristics of a network are distinct. Networks may also be classified based on the method used to convey the data; these include digital and analog networks.

Types of networks based on physical scope: Common types of computer networks may be identified by their scale.

Local area network: A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, computer laboratory, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Current wired LANs are most likely to be based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing home wires (coaxial cables, phone lines, and power lines).[4]

In a typical library network, in a branching tree topology with controlled access to resources, all interconnected devices must understand the network layer (layer 3), because they handle multiple subnets. The devices inside the library, which have only 10/100 Mbit/s Ethernet connections to the user device and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they only have Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and to the academic networks' customer access routers.

The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and no need for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at speeds up to 10 Gbit/s. IEEE has projects investigating the standardization of 40 and 100 Gbit/s.[5]

Personal area network: A personal area network (PAN) is a computer network used for communication among a computer and different information technological devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless devices.
The reach of a PAN typically extends to 10 meters.[6] A wired PAN is usually constructed with USB and FireWire connections, while technologies such as Bluetooth and infrared communication typically form a wireless PAN.

Home area network: A home area network (HAN) is a residential LAN which is used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a CATV or Digital Subscriber Line (DSL) provider. It can also be referred to as an office area network (OAN).

Wide area network: A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or spans even intercontinental distances, using a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

Campus network: A campus network is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling) are almost entirely owned by the campus tenant/owner: an enterprise, university, government, etc. In the case of a university campus-based network, the network is likely to link a variety of campus buildings, including academic departments, the university library, and student residence halls.

Metropolitan area network: A metropolitan area network (MAN) is a large computer network that usually spans a city or a large campus.

Enterprise private network: An enterprise private network is a network built by an enterprise to interconnect various company sites, e.g., production sites, head offices, remote offices, and shops, in order to share computer resources. (Figure: a sample EPN made of Frame Relay WAN connections and dial-up remote access.)

Virtual private network: A virtual private network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features. A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.

Internetwork: An internetwork is the connection of two or more private computer networks via a common routing technology (OSI Layer 3) using routers.
The Internet is an aggregation of many internetworks; hence the shortened name Internet.

Backbone network: A backbone network or network backbone is a part of a computer network infrastructure that interconnects various pieces of a network, providing a path for the exchange of information between different LANs or subnetworks.[1][2] A backbone can tie together diverse networks in the same building, in different buildings in a campus environment, or over wide areas. Normally, the backbone's capacity is greater than that of the networks connected to it. A large corporation that has many locations may have a backbone network that ties all of the locations together, for example, if a server cluster needs to be accessed by different departments of a company that are located at different geographical locations. The pieces of the network connections (for example: Ethernet, wireless) that bring these departments together are often referred to as the network backbone. Network congestion is often taken into consideration while designing backbones. Backbone networks should not be confused with the Internet backbone.

Global area network: A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.[7]

Internet: The Internet is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW).

Intranets and extranets: Intranets and extranets are parts or extensions of a computer network, usually a local area network. An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity. That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information. An extranet is a network that is limited in scope to a single organization or entity and also has limited connections to the networks of one or more other, usually but not necessarily trusted, organizations or entities; a company's customers may be given access to some part of its intranet, while at the same time the customers may not be considered trusted from a security standpoint. Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although an extranet cannot consist of a single LAN; it must have at least one connection with an external network.

Overlay network: An overlay network is a virtual computer network that is built on top of another network. Nodes in the overlay are connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network.
Source: Prasan Kumar Sahoo, in Proc. of the Computing and Communications Conference, 2005 [C]. Publisher: IEEE, 2005. Attachments: 1. Translated text; 2. Original text.

Topology-Based Power Control in Distributed Wireless Sensor Networks

Abstract

A wireless sensor network consists of a large number of battery-powered sensor nodes deployed randomly within a bounded region for various applications. Since the energy resources of the sensors are limited, each of them should reduce its energy consumption to prolong the lifetime of the network. In this paper, a distributed algorithm is proposed that constructs an energy-efficient tree structure in a wireless sensor network without requiring any location information of the nodes. Energy conservation at the nodes is achieved through transmission power control. In addition, a protocol is proposed to maintain the network topology when nodes run out of energy. Simulation results show that our distributed protocol can achieve a level of energy conservation close to that of an ideal centralized algorithm and can prolong the network lifetime compared with other distributed algorithms that use no power control.
Keywords: wireless sensor networks, distributed algorithms, power control, topology

1. Introduction

Recent advances in the hardware and software of wireless network technologies have enabled small-size, low-power, low-cost, multi-functional sensor nodes [1], composed of sensing, data processing, and wireless communication components. These low-energy, battery-powered nodes are deployed in the hundreds to thousands to form a wireless sensor network. In a wireless sensor network, using higher transmission power and forwarding packets along similar paths are the main consumers of a sensor's energy. Moreover, replenishing energy by replacing or recharging the batteries of hundreds of nodes is very difficult, and sometimes infeasible, in most sensor network application areas, especially in harsh environments. Hence, energy conservation [2], [3], [4] at the sensor nodes is a critical issue, as the lifetime of the sensor network depends entirely on the durability of the batteries.

Sensor nodes generally self-organize to form the wireless sensor network, monitor the activities of targets, and report events or information to the base station over multiple hops. There are four main reporting models in sensor networks: event-driven, periodic, query-based, and hybrid reporting. In the event-driven model, nodes report to the sink as soon as they sense certain events, for example raising an alarm upon fire or flood. In the periodic reporting model, nodes collect data, may aggregate the required information into a set, and then send it upstream periodically. The method of combining data is known as data fusion [5], [6], [7], [8], which reduces the amount of transmitted data. Such examples also apply here, for instance reporting the temperature and humidity of a location. Therefore, fusing similar sensed data into a single packet and delivering it to the sink in a multi-hop environment, thereby conserving energy, is also an important research problem in sensor networks.
In [9], the power consumption of each unit of a sensor node is analyzed and compared; it is observed that the power consumed in the receiving and idle states is almost identical, while the power consumption of the CPU is very low. In [10], the authors propose transmission power control for a wireless sensor network MAC protocol, in which the ideal transmission power is estimated from node interaction and the signal attenuation between nodes. The ideal transmission power is computed through iterative refinement, and the current ideal transmission power is stored for each neighboring node.

In [11], the authors introduce topology control for wireless sensor networks, a two-level strategy that combines effective subnets with a short-hop approach to achieve energy savings. The topology control problem is analyzed for networks of heterogeneous wireless devices with different maximum transmission ranges, in which asymmetric wireless links are not uncommon. A detailed analysis is given in [12]. Since the nodes are heterogeneous, with different maximum transmission powers and broadcast ranges, distributed and adjustable power control is required. In [13], the authors take a set of active nodes and their transmission ranges and propose a minimum power configuration approach that minimizes the total power consumption of the wireless sensor network. In [14], the authors present an analysis of routing protocols under a variable transmission range scheme. Their analytical study shows that a variable transmission range can improve overall network performance.

Algorithms based on LEACH [15] let some nodes use higher transmission power to help their neighbors deliver data to the BS. However, LEACH requires global knowledge of the sensor network and assumes that every node is close to the BS. In [16], two localized topology control algorithms are proposed for heterogeneous multi-hop wireless networks with non-uniform transmission ranges. Although this protocol preserves network connectivity and discusses how to control the topology, it does not address the topology and energy consumption of denser networks of nodes, such as wireless sensor networks. [17] is a power-saving technique for ad hoc wireless networks that reduces energy consumption without significantly diminishing the capacity or connectivity of the network. It is a distributed, randomized algorithm in which nodes switch themselves off to maximize battery conservation. However, it uses a fixed transmission power range, and the algorithm is suited to low-density networks of nodes with IEEE 802.11 radios.

In [18], a centralized algorithm is proposed to construct the topology of a static wireless network. According to this algorithm, each node initially forms its own component. Components are then merged iteratively until they are connected into a single whole. Finally, once the components are connected, a post-processing optimization step reduces the power consumption of the network. Although the algorithm in [18] is designed to optimize the wireless network topology, it is centralized and cannot change the transmission power dynamically. Distributed algorithms for transmission power control in wireless sensor networks are proposed in [19]. They assign an arbitrarily chosen transmission power level to the sensor nodes, which may partition the network. They also propose a global solution with a different transmission power algorithm, in which the topology forms a connected network and different transmission ranges are set for all the nodes. Hence, in their work the energy consumption of the nodes may be higher, since nodes in a wireless sensor network lie close to one another.
In wireless sensor networks, communication is the dominant factor in energy consumption [20]. Adjusting the transmission power to control the network topology can prolong the lifetime and increase the capacity of a wireless sensor network. Conversely, if the nodes do not control their transmission power level and always use a fixed high power level, they will die quickly and reduce the survival time of the network. Since the collected data may contain some of the most important sensed information, providing a connected network topology is essential in a wireless sensor network. Therefore, in our work we propose how to control the transmission power level of each node of the network in order to conserve energy. We propose a distributed algorithm that dynamically adjusts the transmission power levels of the nodes and constructs a tree topology among the different groups of nodes, using an intermediate power level between the maximum and the minimum, to achieve a connected network. The algorithm builds a connected, distributed topology in a wireless sensor network without any location information.
Figure 2. Parent and child gateways of different groups of nodes.

• Child gateway: A node that connects to the parent gateway of a downstream group is called a child gateway. At least one child gateway exists in each group. In some cases, if a group contains only a single node, that node is treated as both the parent and the child gateway of the group. In Figure 2, nodes A and B of group G1 are the child gateways of nodes D and C, respectively.

• Node Energy Level (NEL): The current energy level of a node is called its NEL. For example, when broadcasting a control packet, if a node's energy level is X, the NEL is carried in the control packet as X units.

• Parent Gateway Power Level (PGPL): The transmission power level with which the parent gateway of a group can connect to a child gateway of its upstream group is called the parent gateway power level (PGPL). Since the sink is always the parent gateway of its group, with PGPL < Pmax, the PGPL value assigned to it is 0. For the parent gateways of other groups, however, P lies between 1 and 3, according to our assumptions.

• Source ID (SID): If A and B are two different sensor nodes in the same or different groups and A sends packets to B, then node A's ID is the source ID and A is the source node of B.
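To make the preceding definitions concrete, here is a minimal Python sketch of a node carrying the NEL, gateway-role, and PGPL notions defined above. This is my own illustration under stated assumptions, not code from the paper; the field names are chosen for readability.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

P_MIN, P_MAX = 0, 3  # power levels as assumed in the paper: minimum 0, maximum 3

@dataclass
class SensorNode:
    node_id: int
    nel: float                       # Node Energy Level (NEL), in energy units
    is_parent_gateway: bool = False
    is_child_gateway: bool = False
    pgpl: Optional[int] = None       # Parent Gateway Power Level (parent gateways only)
    source_id: Optional[int] = None  # SID: the node this node receives packets from
    neighbors: Set[int] = field(default_factory=set)

    def set_pgpl(self, level: int, is_sink: bool = False) -> None:
        """Assign PGPL: 0 for the sink; otherwise 1..3, per the paper's assumption."""
        if is_sink:
            self.pgpl = 0
        elif 1 <= level <= P_MAX:
            self.pgpl = level
        else:
            raise ValueError("PGPL of a non-sink parent gateway must lie between 1 and 3")
        self.is_parent_gateway = True

# A single-node group acts as both its parent and child gateway:
lone = SensorNode(node_id=7, nel=100.0, is_parent_gateway=True, is_child_gateway=True)
```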
3. Distributed Power Control Protocol

In this section we present our topology-based power control protocol, which maintains a dynamic topology. We assume that every node in the network has a unique ID and that each node knows the IDs of its neighbors before the topology is constructed. In our system model, since connectivity holes exist between the nodes of different groups, we assume that the network may become disconnected if nodes use a low transmission power level to reach a node of another group, and that nodes may consume more energy if they communicate using the maximum transmission power level. Furthermore, we assume that after deployment the transmission power level of any node in the network may be the maximum, or lie between the minimum and the maximum. Hence, in our protocol the nodes of each group construct a tree topology among themselves using the minimum transmission power level (Pmin = 0), and the tree topology of the whole network is formed by connecting the nodes of different groups using an effective power level (PTx), where (Pmin = 0) < PTx ≤ (Pmax = 3). The different phases of this distributed protocol are described below.
3.1. Construction Phase

Once all the nodes are deployed in the network, the sink initiates the construction phase by broadcasting a construction packet at the minimum transmission power level (Pmin = 0) to connect directly with its neighbors, as shown in Figure 4(a). The format of the construction packet is shown in Figure 3, and its parameters are initialized as: SID = Sink's ID, PGID = Sink's ID, NEL = the sink's energy level, LHC = 0, GHC = 0, PGPL = 0. Since the sink normally only receives data, its PGPL is assigned 0, which distinguishes it from the other parent gateways of the network.
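The construction packet can be pictured as a small record carrying the six fields just listed. The sketch below shows the sink's initialization of those fields; it is an illustration only, not the paper's actual packet encoding, and the expansions of LHC and GHC as local and group hop counts are inferred from how the text uses them.

```python
from dataclasses import dataclass

@dataclass
class ConstructionPacket:
    sid: int    # Source ID of the sender
    pgid: int   # Parent Gateway ID of the sender's group
    nel: float  # sender's Node Energy Level
    lhc: int    # hop count, incremented on every hop (read here as "local hop count")
    ghc: int    # read here as "group hop count"
    pgpl: int   # Parent Gateway Power Level (0 for the sink)

SINK_ID = 0
SINK_ENERGY = 100.0  # assumed initial energy units

# Initialization exactly as described in the text above:
initial = ConstructionPacket(sid=SINK_ID, pgid=SINK_ID, nel=SINK_ENERGY,
                             lhc=0, ghc=0, pgpl=0)
```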
On receiving the construction packet, the neighbors of the sink within its minimum transmission power range (Pmin = 0) scan all the parameters of the packet. They wait for a random time W_i and then connect with the sink. Let N_i be the number of neighbors of the i-th node among the N nodes of the network; upon receiving the construction packet, the waiting time of the i-th node can be taken as in (1), where α_i is a small random number compatible with the CSMA-CA mechanism [22]. Each of them then rebroadcasts the construction packet to its neighbors, using the same minimum power level Pmin = 0 and supplying the required parameters of the packet, after a waiting time of T_i units obtained from (2), where E_i is the current energy level of the i-th node and β_i is a very small random number, e.g. 0.0001 to 0.00001.

Figure 3. Format of the construction packet.

To avoid packet collisions among the nodes of a dense network, we suggest that the sink also waits T_i units before broadcasting the construction packet, and then enters the notification phase described in Section 3.2. It should be noted that at least one sensor node must lie within the minimum or maximum transmission power range of the sink. However, if the sink finds no neighbor at Pmin = 0, it enters the notification phase to link with its neighbors after the waiting time T_i (Tables 2 and 3).

Table 2. Construction phase algorithm for the sink and any node of the network (Algorithm 1: Construction phase).

Table 3. Notification phase algorithm for the sender and the receiver (Algorithm 2: Notification phase).

On receiving a construction packet, a node scans all of its parameters and takes the node with the least LHC as its source. The receiving node waits, connects with its source as described above, and then follows the same steps. This process continues until a node receives no further construction packets, and the first tree topology is thus constructed among the nodes of one group, with the sink as the root and the other nodes within its minimum transmission power level, as shown in Figure 4(b). We assume that there are different groups of nodes, or connectivity holes between some nodes, across which links cannot be built using Pmin, so the construction phase terminates after a finite time interval. After the tree topology of a group is formed, the notification phase is executed next. It should be noted that the construction packet is always transmitted using the minimum transmission power level, and that the LHC is incremented by 1 each time the packet hops from one node to another. Within a group, some nodes may also be neighbors of other nodes that have already received the same construction packet. How, then, does a node decide its own source node? We discuss this issue as part of the maintenance phase, in Section 3.3.1(A).
它通过广播通知数据包使用最大传输功率级 (Pmax = 3)。
通知数据包的格式如图 5 所示。
它是应注意每个组的节点具有唯一的父网关。
为例,接收器是其组中的唯一父网关。
所以,之前于广播通知数据包,一个节点将 PGID的值通知包复制从该构造,可以区别另一个构造包的数据包。
此外,GHC构造数据包中的值是加 1,然后添加到通知数据包的相应字段。
代以通知数据包中的所需值,广播使用 Pmax = 3。