无线局域网技术外文翻译文献
网络工程局域网中英文对照外文翻译文献
中英文对照外文翻译（文档含英文原文和中文翻译）

1. COMPUTER NETWORKS

DATA COMMUNICATIONS

The end equipment which either generates the digital information for transmission or uses the received digital data can be computers, printers, keyboards, CRTs, and so on. This equipment generally manipulates digital information internally in word units—all the bits that make up a word in a particular piece of equipment are transferred in parallel. Digital data, when transmitted, are in serial form. Parallel transmission of an 8-bit word would require eight pairs of transmission lines—not at all cost-effective. Data terminal equipment (DTE) is a general phrase encompassing all of the circuitry necessary to perform parallel-to-serial and serial-to-parallel conversions for transmission and reception respectively, and for data link management. The UART (Universal Asynchronous Receiver/Transmitter) and USART (Universal Synchronous/Asynchronous Receiver/Transmitter) are the devices that perform the parallel-to-serial and serial-to-parallel conversions. The primary DTE includes a line control unit (LCU or LinCo) which controls the flow of information in a multipoint data link system. A station controller (STACO) is the corresponding unit at the subscriber end of a data link system. Between the DTEs, starting with the modems, is communications equipment owned and maintained by the telephone company (Telco).

Data communications equipment (DCE) accepts the serial data stream from the DTE and converts it to some form of analog signal suitable for transmission on voice-grade lines. At the receive end, the DCE performs the reverse function of converting the received analog signal to a serial digital data stream. The simplest form of DCE is a modem (modulator/demodulator) or data set. At the transmit end, the modem can be considered a form of digital-to-analog converter, while at the receive end, it can be considered a form of analog-to-digital converter. The most common forms of modulation used by modems are frequency shift keying (FSK), phase shift keying (PSK), and quadrature amplitude modulation (QAM). This is the typical data transmission mode over analog telephone lines. If you transmit data over a digital channel (sometimes called a "digital T-carrier"), Pulse Code Modulation (PCM) equipment must be used. A microwave transmission system can also be used for data communication. Finally, you can use a satellite communication system for data transmission.

If the cables and signal levels used to interconnect the DTE and DCE were left unregulated, the variations generated would probably be proportional to the number of manufacturers. The Electronic Industries Association (EIA), an organization of manufacturers, is credited with establishing the RS-232C interface standard between the DTE and the modem. This is a 25-pin cable whose pins have designated functions and specified signal levels. The RS-232C is expected to be replaced by an updated standard.
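To make the UART's parallel-to-serial conversion described above concrete, here is a minimal sketch. It assumes the common 8-N-1 asynchronous format (one start bit, eight data bits sent LSB first, no parity, one stop bit); the framing of a real UART depends on how it is configured.

```python
def uart_frame(byte: int) -> list[int]:
    """Serialize one byte into an 8-N-1 asynchronous frame (assumed format).

    The line idles high; a start bit (0) marks the beginning of the frame,
    the eight data bits follow LSB first, and a stop bit (1) ends the frame.
    """
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data_bits + [1]                      # start + data + stop

# Example: the ASCII character 'A' (0x41) becomes a 10-bit serial frame.
print(uart_frame(0x41))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```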
2. ARCHITECTURE OF COMPUTER NETWORKS

A computer network is a complex consisting of two or more connected computing units; it is used for the purposes of data communication and resource sharing. The design of a network and its logical structure should comply with a set of design principles, including the organization of functions and the description of data formats and procedures. This set of layers and protocols is called the network architecture, because the architecture is layer-based. In the next two sections we will discuss two important network architectures, the OSI reference model and the TCP/IP reference model.

1. The OSI Reference Model

The OSI model is shown in Fig. 14-2 (minus the physical medium). This model is based on a proposal developed by the International Standards Organization (ISO) as the first step toward international standardization of the protocols used in the various layers. The model is called the ISO OSI (Open Systems Interconnection) Reference Model because it deals with connecting open systems—that is, systems that are open for communication with other systems. We will usually just call it the OSI model for short.

The OSI model has seven layers. Note that the OSI model itself is not a network architecture because it does not specify the exact services and protocols to be used in each layer. It just tells what each layer should do. However, ISO has also produced standards for all the layers, although these are not part of the reference model itself. Each one has been published as a separate international standard.

2. The TCP/IP Reference Model

The TCP/IP reference model is an early transport protocol suite which was designed by the US Department of Defense (DOD) around 1978. It is often claimed that it gave rise to the OSI "connectionless" mode of operation. TCP/IP is still used extensively and is in fact regarded as an industry standard for internetworking. TCP/IP has two parts: TCP and IP. TCP operates at the transport layer and IP operates at the network layer.

1. There are two end-to-end protocols in the transport layer, one of which is TCP (Transmission Control Protocol); the other is UDP (User Datagram Protocol). TCP is a connection-oriented protocol that allows a byte stream originating on one machine to be delivered without error to any other machine in the internet. UDP is an unreliable, connectionless protocol for applications that do not want TCP's sequencing or flow control and wish to provide their own.

2. The network layer defines an official packet format and protocol called IP (Internet Protocol). The job of the network layer is to deliver IP packets where they are supposed to go.

The TCP/IP Reference Model is shown in Fig. 14-3. On top of the transport layer is the application layer. It contains all the higher-level protocols. The early ones included virtual terminal (TELNET), file transfer (FTP), electronic mail (SMTP) and domain name service (DNS).
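To make the difference between the two transport protocols concrete, here is a minimal sketch of a TCP client and a UDP sender. The host name and port are placeholders, not values from the text, and the matching servers and error handling are omitted.

```python
import socket

SERVER = ("example.com", 9000)  # hypothetical host and port

# TCP: connection-oriented; the byte stream is delivered reliably and in order.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect(SERVER)             # three-way handshake establishes the connection
    tcp.sendall(b"hello over TCP")

# UDP: connectionless; each datagram stands on its own, with no delivery guarantee.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"hello over UDP", SERVER)  # no handshake, no retransmission
```

The handshake, sequencing and retransmissions happen inside the TCP implementation; with UDP the application would have to supply them itself, which is exactly the trade-off described above.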
3. WIDE AREA NETWORK

A wide area network, or WAN, spans a large geographical area, often a country or continent. It contains a collection of machines intended for running user (i.e., application) programs. We will follow traditional usage and call these machines hosts. The hosts are connected by a communication subnet, or just subnet for short. The job of the subnet is to carry messages from host to host, just as the telephone system carries words from speaker to listener. By separating the pure communication aspects of the network (the subnet) from the application aspects (the hosts), the complete network design is greatly simplified. The relation between hosts and the subnet is shown in Fig. 14-4.

One of many methods that can be used to categorize wide area networks is with respect to the flow of information on a transmission facility. If we use this method to categorize wide area networks, we can group them into three basic types: circuit switched, leased line and packet switched.

1. CIRCUIT SWITCHED NETWORKS

The most popular type of network, and the one almost all readers use on a daily basis, is a circuit switched network—the public switched telephone network. Circuit switching, however, is not limited to the telephone company. By purchasing appropriate switching equipment, any organization can construct its own internal circuit switched network and, if desired, provide one or more interfaces to the public switched network to allow voice and data transmission to flow between the public network and its private internal network.

2. LEASED LINE NETWORKS

This is a dedicated network connected by leased lines. A leased line is a communications line reserved for the exclusive use of a leasing customer, without inter-exchange switching arrangements. Leased or private lines are dedicated to the user. The advantage is that the terminal or computer is always physically connected to the line, so very short response times are achieved with this service.

3. PACKET SWITCHING NETWORKS

A packet network is constructed through the use of equipment that assembles and disassembles packets, equipment that routes packets, and transmission facilities used to route packets from the originator to the destination device. Some types of data terminal equipment (DTE) can create their own packets, while other types of DTE require the conversion of their protocol into packets through the use of a packet assembler/disassembler (PAD). Packets are routed through the network by packet switches. Packet switches examine the destination of packets as they flow through the network and transfer the packets onto trunks interconnecting switches based upon the packet destination and network activity.

Many older public networks follow a standard called X.25. It was developed during the 1970s by CCITT to provide an interface between public packet-switched networks and their customers. CCITT Recommendation X.25 controls the access from a packet mode DTE, such as a terminal device or computer system capable of forming packets, to the packet mode DCE. CCITT Recommendation X.28 controls the interface between non-packet mode devices, which cannot form packets themselves, and the PAD. CCITT Recommendation X.3 specifies the parameter settings on the PAD, and X.75 specifies the interface between packet networks.
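The packet assembly/disassembly role of a PAD described above can be sketched as follows: a message is cut into fixed-size payloads and each piece carries a small header so the far end can reassemble it. The header fields and the 128-byte payload size are illustrative choices, not values taken from X.25.

```python
def packetize(message: bytes, dest: str, payload_size: int = 128) -> list[dict]:
    """Split a message into numbered packets, as a PAD-like device might."""
    pieces = [message[i:i + payload_size] for i in range(0, len(message), payload_size)]
    return [{
        "dest": dest,                      # used by packet switches to pick a trunk
        "seq": seq,                        # lets the receiver reorder and reassemble
        "last": seq == len(pieces) - 1,
        "payload": payload,
    } for seq, payload in enumerate(pieces)]

def reassemble(packets: list[dict]) -> bytes:
    """Rebuild the original message, regardless of the order packets arrive in."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

msg = b"x" * 300
assert reassemble(list(reversed(packetize(msg, "host-B")))) == msg
```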
4. LOCAL AREA NETWORK

Local area data networks, normally referred to simply as local area networks or LANs, are used to interconnect distributed communities of computer-based DTEs located within a building or a localized group of buildings. For example, a LAN may be used to interconnect workstations distributed around offices within a single building or a group of buildings such as a university campus. Alternatively, it may be more complex. Since all the equipment is located within a single establishment, however, LANs are normally installed and maintained by the organization. Hence they are also referred to as private data networks.

The main difference between a communication path established using a LAN and a connection made through a public data network is that a LAN normally offers much higher data transmission rates because of the relatively short physical separations involved. In the context of the ISO Reference Model for OSI, however, this difference manifests itself only at the lower, network-dependent layers. In many instances the higher protocol layers in the reference model are the same for both types of network.

Before describing the structure and operation of the different types of LAN, it is perhaps helpful to first identify some of the selection issues that must be considered. It should be stressed that this is only a summary; there are also many possible links between the tips of the branches associated with the figure.

1. Topology

Most wide area networks, such as the PSTN, use a mesh (sometimes referred to as a network) topology. With LANs, however, the limited physical separation of the subscriber DTEs allows simpler topologies to be used. The four topologies in common use are star, bus, ring and hub.

The most widespread topology for LANs designed to function as data communication subnetworks for the interconnection of local computer-based equipment is the hub topology, which is a variation of the bus and ring. Sometimes it is called the hub/tree topology.

2. Transmission media

Twisted pair, coaxial cable and optical fiber are the three main types of transmission medium used for LANs.

3. Medium access control methods

Two techniques have been adopted for medium access control in LANs. They are carrier-sense multiple access with collision detection (CSMA/CD), for bus network topologies, and control token, for use with either bus or ring networks.

CSMA/CD is used to control multiple-access networks. Each station on the network "listens" before attempting to send a message, waiting for the "traffic" to clear. If two stations try to send their messages at exactly the same time, a "collision" is detected, and both stations are required to "step back" and try again later.

Control token is another way of controlling access to a shared transmission medium, namely by the use of a control (permission) token. This token is passed from one DTE to another according to a defined set of rules understood and adhered to by all DTEs connected to the medium. A DTE may only transmit a frame when it is in possession of the token and, after it has transmitted the frame, it passes the token on to allow another DTE to access the transmission medium.

1. 计算机网络

数据通信

端设备可以是计算机、打印机、键盘、CRT等，它们可以产生要发送的数字信息，也可使用所接收的数字数据。
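As a toy illustration of the CSMA/CD rule summarized in the medium access control subsection above, the following sketch models only listen-before-send and random backoff after a collision; the timing, collision probability and limits are arbitrary and are not the IEEE 802.3 algorithm.

```python
import random

def csma_cd_attempt(medium_busy, max_backoffs=5):
    """Try to send one frame under a simplified CSMA/CD rule.

    medium_busy(): returns True while another station is transmitting.
    Returns the number of collisions suffered before success, or None on give-up.
    """
    collisions = 0
    while collisions <= max_backoffs:
        while medium_busy():          # carrier sense: wait for the channel to clear
            pass
        if random.random() < 0.1:     # toy model: 10% chance another station sends too
            collisions += 1
            slots = random.randint(0, 2 ** collisions - 1)  # exponential random backoff
            print(f"collision {collisions}, backing off {slots} slot times")
        else:
            return collisions          # frame sent successfully
    return None

print(csma_cd_attempt(lambda: False))
```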
Cooperative Diversity in Wireless Networks 文献翻译

无线网络的协作分集：高效协议和中断行为

摘要：我们研究和分析了低复杂度的协作分集协议，用以抵抗无线网络中多径传播引起的衰落，其底层技术是利用协作终端为其他终端转发信号而获得空间分集。
我们略述几种协同通信策略,包括固定中继方法如放大-转发,译码-转发,基于协同终端间的信道估计的选择中继方式,基于目标终端的有限反馈的增量中继方式。
我们在高SNR 条件下以中断事件、相关中断概率指标讨论了其性能特征,以估计协议对传输衰落的鲁棒性。
除固定的解码—转发协议外，所有的协同分集协议就所达到的全分集（也就是在两个终端下的二阶分集）来说是高效的，而且在某些状态下更加接近于最优（小于1.5dB）。
因此,当用分布式天线时,我们可以不用物理阵列而提供很好的空间分集效应,但因为采用半双工工作方式要牺牲频谱效率,也可能要增加额外接收硬件的开销。
协作分集对任何无线方式都适用,包括因空间限制而不能使用物理阵列的蜂窝移动通信和无线ad hoc 网络,这些性能分析显示使用这些协议可减少能耗.索引语——分集技术,衰落信道,中断概率,中继信道,用户协同,无线网络Ⅰ 介绍在无线网络中,多径传播引起的信号衰落是一个特别严重的信道损害问题,可以利用分集技术来减小。
II 系统模型在图1中的无线信道模型中,窄带传输会产生频率非选择性衰落和附加噪声。
我们在第四部分重点分析慢衰落,在时延限制相当于信道相干时间里,用中断概率来评价,与空间分集的优势区分.虽然我们的协同协议能自然的扩展到宽频带和高移动情况,其中面临各自的频域和时域的选择性衰落,当系统采用另一种形式的分集时对我们协议的潜在影响将相对减小。
4A 媒体接入当前的无线网络中,例如蜂窝式和无线局域网,我们将有用的带宽分成正交信道,并且分配这些信道终端,使我们的协议适用于现存的网络.这种选择产生的意外效果是,我们能够同时在I —A 处理多径(单个接收)和干扰(多个接收),相当于在信号接收机传输信号的一对中继信号.对于我们所有的协同协议,传输中的必须同时处理他们接收到的信号;但是,网络实现使终端不能实现全双工,也就是,传输和接收同时在相同的频带中实现。
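The second-order diversity claimed above (the outage probability falling with the square of the SNR once two independently fading paths are available) can be illustrated with a small Monte Carlo experiment. This is not a simulation of the paper's amplify-and-forward or decode-and-forward protocols; it simply compares one Rayleigh-fading link against an idealized scheme that may use the better of two independent paths, at an assumed spectral-efficiency target of R = 1 bit/s/Hz.

```python
import random, math

def outage_prob(snr_db, rate=1.0, trials=200_000):
    """Estimate outage probability for one Rayleigh path vs. the best of two."""
    snr = 10 ** (snr_db / 10)
    single = both = 0
    for _ in range(trials):
        g1 = random.expovariate(1.0)      # |h|^2 of path 1 (unit-mean exponential)
        g2 = random.expovariate(1.0)      # |h|^2 of an independent path 2
        if math.log2(1 + snr * g1) < rate:
            single += 1
        if math.log2(1 + snr * max(g1, g2)) < rate:   # idealized two-path selection
            both += 1
    return single / trials, both / trials

for db in (5, 10, 15, 20):
    p1, p2 = outage_prob(db)
    print(f"{db} dB: direct {p1:.2e}, two-path selection {p2:.2e}")
```

As the SNR grows, the single-path outage falls roughly linearly in 1/SNR while the two-path curve falls roughly quadratically, which is the diversity-order-two behaviour the text refers to.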
无线局域网毕业论文中英文对照资料外文翻译文献中英文对照资料外文翻译文献WLANWhy use WLANFor one of the main local area network management, for the laying of cables, or check the cable is disconnected this time-consuming work, it is easy to upset, not easy to break in a short time to find out where. Furthermore, for the business and application environment constantly updating and development of enterprise network must be matched with the original re-layout, need to re-install the network lines, although the cable itself is not expensive, but requested the technical staff to the high cost of wiring, especially the old building, wiring project costs even higher. Therefore, the construction of wireless local area network has become the best solution.What conditions need to use WLANWLAN is not limited to alternative local area network, but to make up for lack of wired local area networks, in order to achieve the purpose of extending the network, the following circumstances may have wireless local area network.●no fixed workplace users●wired local area network set up by the environmental constraints●As a wired local area network backup systemWLAN access technologyCurrently manufacturers in the design of wireless local area network products, there are quite a variety of access design methods can be divided into three categories: narrowband microwave, spread spectrum (Spread Spectrum) technology, andinfrared have their advantages and disadvantages, limitations, and more, followed by detailed discussion of these techniques. (Infrared) technology, each technique has their advantages and disadvantages, limitations, and more, followed by detailed discussion of these techniques.Technical requirementsAs wireless local area network needs to support high-speed, burst data services, need to be addressed in the indoor use of multipath fading, as well as issues such as crosstalk subnets. Specifically, wireless local area network must achieve the following technical requirements:1)Reliability: Wireless LAN system packet loss rate should be lower than 10-5,the error rate should be lower than 10-8.2)Compatibility: For indoor use of wireless local area network, so as far aspossible with the existing wired LAN network operating system and networksoftware compatible.3)Data rate: In order to meet the needs of local area network traffic, wirelessLAN data transfer rate should be more than 1Mbps.4)The confidentiality of communications: As the data transmitted in the air viawireless media, wireless local area networks at different levels must takeeffective measures to improve communication security and data security.5)Mobility: support for all mobile networks or semi-mobile network.6)Energy Management: When receiving or sending data to the site when themachine is in sleep mode, when activated again when the data transceiver toachieve the savings in power consumption.7)small size and low price: This is the key to the popularity of wireless local areanetwork can be.8)Electromagnetic environment: wireless LAN should consider thehumanbodyand the surrounding electromagnetic environment effects.AndroidGoogle Android is a Linux-based platform for developing open-source phone operating system (registered trademark in China called "Achi;). It includes operating systems, user interface and applications - mobile phone work required by the software, but there is no past, the exclusive right to impede innovation and barriers to mobile industry, called mobile terminal is the first to create a truly open and complete mobile software. 
Google and Open Handset Alliance to develop the Android, the alliance by including China Mobile, Motorola, Qualcomm and T-Mobile, including more than 30 technology and the composition of a leader in wireless applications. Google with operators, equipment manufacturers, developers and other interested parties to form deep-level partnerships, hoping to establish a standardized, open software platform for mobile phones in the mobile industry to form an open ecosystem .It uses software stack layers (software stack, also known as the software stack) architecture, is divided into three parts: thecore of the underlying Linux-based language developed by the c, only basic functions. Middle layer consists of library. Library and Virtual Machine Virtual Machine, developed by the C +. At the top are a variety of applications, including the call procedures, SMS procedures, application software is developed by the companies themselves to write java.To promote this technology, Google, and dozens of other phone company has established the Open Handset Alliance (Open Handset Alliance).Characteristic●application framework to support component reuse and replacement●Dalvik virtual machine specifically for mobile devices i s optimized●Internal integrated browser, the browser-based open-source WebKit engine●optimization of 2D and 3D graphics library includes graphics library, 3Dgraphics library based on OpenGL ES 1.0 (hardware-accelerated optional)●# SQLite for structured data storage●Multimedia support includes the common audio, video and static image fileformats (such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, GIF)●GSM phone (depending on hardware)●Bluetooth Bluetooth, EDGE, 3G, and WiFi (hardware dependent)●Camera, GPS, compass, and accelerometer (hardware dependent)●Rich development environment including a device emulator, debugger,memory and performance analysis charts, and the Eclipse integrateddevelopment environment plug-insApplicationsA core Android application package together with the release of the application package, including email client, SMS short messaging program, calendar, maps, browser, contact management procedures. A ll applications are written using JA V A.Android Application Framework Developers have full access to core applications used by the API framework. The application framework designed to simplify the reuse of software components; any application can publish its functional blocks and any other applications can use the function block its release (but must follow the framework of security restrictions). 
Reuse mechanism allows the application form can be user replaced.All of the following applications by the composition of a range of services and systems, including:●an expanded view (V iews) can be used to build applications, including a list of(lists), grid (grids), text boxes (text boxes), buttons (buttons), and even an embeddable web browser.●Content Manager (Content Providers) allows applications to access data fromanother application program (such as the contact database), or to share their own data.● A resource manager (Resource Manager) to provide access to non-coderesources, such as local strings, graphics, and hierarchical file (layout files).● A notification manager (Notif ication Manager) allows applications to customersin the status bar display notification information.●An activity class Manager (Activity Manager) to manage the application lifecycle and provides common navigation rollback feature.Ordering the systemOrdering the system information using automated software tools to achieve la carte, side dishes, stir fry vegetables to the transfer of all management processes; completion point, the computer management menu, point the menu and the kitchen, front-end checkout synchronization print; achieved without the menu paper-based operation; backstage manager of inquiry; warehouse inventory management and so on.In addition, ordering the system can also effectively manage customer data, archiving and future reference, put an end to the restaurant "leakage List", "run list" phenomenon; help restaurants using computer data processing capability and powerful ability to process optimization to achieve automated management, streamline workflow restaurant, reduce waste and man-made phenomenon of management oversight, re-optimal allocation of corporate resources, the operating costs to a minimum.Powerful addition to ordering the system to support the general application of stand-alone and LAN in addition to support head office / branch of multi-level framework used for remote network using the POS system to achieve front store sales cashier, sales of small-ticket instantly print sales day-end, reporting sales data and receive information of new featuresdishes.There are three currently ordering the system to achieve mode:First, the touch screen a la carte model: It uses the currently most popular touch-computer ordering process to achieve that members can to order the software screen prompts, simply click on the screen with your fingers can complete the entire ordering process and convenient This model applies to the practice of rich dishes and large restaurants, restaurants, and restaurant, etc..Second,the wireless PDA ordering mode: it uses a wireless WiFi technology, a la carte interface by PDA display, use touch pen to complete the ordering process, virtuallyanywhere, anytime to order real-time response, this model is more suitable for dishes and practices simple restaurant, features a restaurant and special mood of senior restaurants.Third, the wireless ordering Po mode: it uses the ISM band, can be a floor or other obstruction in the case of seamless coverage up to 10 meters away, while the signal remained stable, which is the ratio of the wireless PDA ordering model's greatest strength, this model applies to simple dishes and practices and other requirements with fewer fast food restaurants, pot shops.。
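Looping back to the reliability targets listed in the technical-requirements section of this document (packet loss rate below 10^-5, bit error rate below 10^-8), the short calculation below relates a raw bit error rate to the chance that an entire frame arrives intact. The frame sizes are assumptions for illustration and do not come from the text.

```python
def frame_error_rate(ber: float, frame_bytes: int) -> float:
    """Probability that at least one bit in a frame is corrupted,
    assuming independent bit errors at the given bit error rate."""
    return 1 - (1 - ber) ** (frame_bytes * 8)

# At the stated BER target of 1e-8, an (assumed) 1500-byte frame still suffers
# at least one bit error in roughly 1.2e-4 of transmissions, so link-layer
# retransmission or coding is needed to reach a 1e-5 frame loss rate.
print(frame_error_rate(1e-8, 1500))   # ~1.2e-04
print(frame_error_rate(1e-8, 64))     # ~5.1e-06 for a short 64-byte frame
```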
英文文献科技类原文及翻译1On the deployment of V oIP in Ethernet networks:methodology and case studyAbstractDeploying IP telephony or voice over IP (V oIP) is a major and challenging task for data network researchers and designers. This paper outlines guidelines and a step-by-step methodology on how V oIP can be deployed successfully. The methodology can be used to assess the support and readiness of an existing network. Prior to the purchase and deployment of V oIP equipment, the methodology predicts the number of V oIP calls that can be sustained by an existing network while satisfying QoS requirements of all network services and leaving adequate capacity for future growth. As a case study, we apply the methodology steps on a typical network of a small enterprise. We utilize both analysis and simulation to investigate throughput and delay bounds. Our analysis is based on queuing theory, and OPNET is used for simulation. Results obtained from analysis and simulation are in line and give a close match. In addition, the paper discusses many design and engineering issues. These issues include characteristics of V oIP traffic and QoS requirements, V oIP flow and call distribution, defining future growth capacity, and measurement and impact of background traffic. Keywords: Network Design,Network Management,V oIP,Performance Evaluation,Analysis,Simulation,OPNET1 IntroductionThese days a massive deployment of V oIP is taking place over data networks. Most of these networks are Ethernet based and running IP protocol. Many network managers are finding it very attractive and cost effective to merge and unify voice and data networks into one. It is easier to run, manage, and maintain. However, one has to keep in mind that IP networks are best-effort networks that were designed for non-real time applications. On the other hand, V oIP requires timely packet delivery with low latency, jitter, packet loss, andsufficient bandwidth. To achieve this goal, an efficient deployment of V oIP must ensure these real-time traffic requirements can be guaranteed over new or existing IP networks. When deploying a new network service such as V oIP over existing network, many network architects, managers, planners, designers, and engineers are faced with common strategic, and sometimes challenging, questions. What are the QoS requirements for V oIP? How will the new V oIP load impact the QoS for currently running network services and applications? Will my existing network support V oIP and satisfy the standardized QoS requirements? If so, how many V oIP calls can the network support before upgrading prematurely any part of the existing network hardware? These challenging questions have led to the development of some commercial tools for testing the performance of multimedia applications in data networks. A list of the available commercial tools that support V oIP is listed in [1,2]. For the most part, these tools use two common approaches in assessing the deployment of V oIP into the existing network. One approach is based on first performing network measurements and then predicting the network readiness for supporting V oIP. The prediction of the network readiness is based on assessing the health of network elements. The second approach is based on injecting real V oIP traffic into existing network and measuring the resulting delay, jitter, and loss. Other than the cost associated with the commercial tools, none of the commercial tools offer a comprehensive approach for successful V oIP deployment. 
I n particular, none gives any prediction for the total number of calls that can be supported by the network taking into account important design and engineering factors. These factors include V oIP flow and call distribution, future growth capacity, performance thresholds, impact of V oIP on existing network services and applications, and impact background traffic on V oIP. This paper attempts to address those important factors and layout a comprehensive methodology for a successful deployment of any multimedia application such as V oIP and video conferencing. However, the paper focuses on V oIP as the new service of interest to be deployed. The paper also contains many useful engineering and design guidelines, and discusses many practical issues pertaining to the deployment of V oIP. These issues include characteristics of V oIP traffic and QoS requirements, V oIP flow and call distribution, defining future growth capacity, and measurement and impact of background traffic. As a case study, we illustrate how ourapproach and guidelines can be applied to a typical network of a small enterprise. The rest of the paper is organized as follows. Section 2 presents a typical network topology of a small enterprise to be used as a case study for deploying V oIP. Section 3 outlines practical eight-step methodology to deploy successfully V oIP in data networks. Each step is described in considerable detail. Section 4 describes important design and engineering decisions to be made based on the analytic and simulation studies. Section 5 concludes the study and identifies future work.2 Existing network3 Step-by-step methodologyFig. 2 shows a flowchart of a methodology of eight steps for a successful V oIP deployment. The first four steps are independent and can be performed in parallel. Before embarking on the analysis and simulation study, in Steps 6 and 7, Step 5 must be carried out which requires any early and necessary redimensioning or modifications to the existing network. As shown, both Steps 6 and 7 can be done in parallel. The final step is pilot deployment.3.1. VoIP traffic characteristics, requirements, and assumptionsFor introducing a new network service such as V oIP, one has to characterize first the nature of its traffic, QoS requirements, and any additional components or devices. For simplicity, we assume a point-to-point conversation for all V oIP calls with no call conferencing. For deploying V oIP, a gatekeeper or Call Manager node has to be added to the network [3,4,5]. The gatekeeper node handles signaling for establishing, terminating, and authorizing connections of all V oIP calls. Also a V oIP gateway is required to handle external calls. A V oIP gateway is responsible for converting V oIP calls to/from the Public Switched Telephone Network (PSTN). As an engineering and design issue, the placement of these nodes in the network becomes crucial. We will tackle this issue in design step 5. Otherhardware requirements include a V oIP client terminal, which can be a separate V oIP device, i.e. IP phones, or a typical PC or workstation that is V oIP-enabled. A V oIP-enabled workstation runs V oIP software such as IP Soft Phones .Fig. 3 identifies the end-to-end V oIP components from sender to receiver [9]. The first component is the encoder which periodically samples the original voice signal and assigns a fixed number of bits to each sample, creating a constant bit rate stream. 
The traditional sample-based encoder G.711 uses Pulse Code Modulation (PCM) to generate 8-bit samples every 0.125 ms, leading to a data rate of 64 kbps . The packetizer follows the encoder and encapsulates a certain number of speech samples into packets and adds the RTP, UDP, IP, and Ethernet headers. The voice packets travel through the data network. An important component at the receiving end, is the playback buffer whose purpose is to absorb variations or jitter in delay and provide a smooth playout. Then packets are delivered to the depacketizer and eventually to the decoder which reconstructs the original voice signal. We will follow the widely adopted recommendations of H.323, G.711, and G.714 standards for V oIP QoS requirements.Table 1 compares some commonly used ITU-T standard codecs and the amount ofone-way delay that they impose. To account for upper limits and to meet desirable quality requirement according to ITU recommendation P.800, we will adopt G.711u codec standards for the required delay and bandwidth. G.711u yields around 4.4 MOS rating. MOS, Mean Opinion Score, is a commonly used V oIP performance metric given in a scale of 1–5, with 5 is the best. However, with little compromise to quality, it is possible to implement different ITU-T codecs that yield much less required bandwidth per call and relatively a bit higher, but acceptable, end-to-end delay. This can be accomplished by applying compression, silence suppression, packet loss concealment, queue management techniques, and encapsulating more than one voice packet into a single Ethernet frame.3.1.1. End-to-end delay for a single voice packetFig. 3 illustrates the sources of delay for a typical voice packet. The end-to-end delay is sometimes referred to by M2E or Mouth-to-Ear delay. G.714 imposes a maximum total one-way packet delay of 150 ms end-to-end for V oIP applications . In [22], a delay of up to 200 ms was considered to be acceptable. We can break this delay down into at least three different contributing components, which are as follows (i) encoding, compression, and packetization delay at the sender (ii) propagation, transmission and queuing delay in the network and (iii) buffering, decompression, depacketization, decoding, and playback delay at the receiver.3.1.2. Bandwidth for a single callThe required bandwidth for a single call, one direction, is 64 kbps. G.711 codec samples 20 ms of voice per packet. Therefore, 50 such packets need to be transmitted per second. Each packet contains 160 voice samples in order to give 8000 samples per second. Each packet is sent in one Ethernet frame. With every packet of size 160 bytes, headers of additional protocol layers are added. These headers include RTP+UDP+IP+Ethernet with preamble of sizes 12+8+20+26, respectively. Therefore, a total of 226 bytes, or 1808 bits, needs to be transmitted 50 times per second, or 90.4 kbps, in one direction. For both directions, the required bandwidth for a single call is 100 pps or 180.8 kbps assuming a symmetric flow.3.1.3. Other assumptionsThroughout our analysis and work, we assume voice calls are symmetric and no voice conferencing is implemented. We also ignore the signaling traffic generated by the gatekeeper. We base our analysis and design on the worst-case scenario for V oIP call traffic. The signaling traffic involving the gatekeeper is mostly generated prior to the establishment of the voice call and when the call is finished. This traffic is relatively small compared to the actual voice call traffic. 
In general, the gatekeeper generates no or very limited signaling traffic throughout the duration of the V oIP call for an already established on-going call. In this paper, we will implement no QoS mechanisms that can enhance the quality of packet delivery in IP networks.A myriad of QoS standards are available and can be enabled for network elements. QoS standards may i nclude IEEE 802.1p/Q, the IETF’s RSVP, and DiffServ.Analysis of implementation cost, complexity, management, and benefit must be weighed carefully before adopting such QoS standards. These standards can be recommended when the cost for upgrading some network elements is high and the network resources are scarce and heavily loaded.3.2. VoIP traffic flow and call distributionKnowing the current telephone call usage or volume of the enterprise is an important step for a successful V oIP deployment. Before embarking on further analysis or planning phases for a V oIP deployment, collecting statistics about of the present call volume and profiles is essential. Sources of such information are organization’s PBX, telephone records and bills. Key characteristics of existing calls can include the number of calls, number of concurrent calls, time, duration, etc. It is important to determine the locations of the call endpoints, i.e. the sources and destinations, as well as their corresponding path or flow. This will aid in identifying the call distribution and the calls made internally or externally. Call distribution must include percentage of calls within and outside of a floor, building, department, or organization. As a good capacity planning measure, it is recommended to base the V oIP call distribution on the busy hour traffic of phone calls for the busiest day of a week or a month. This will ensure support of the calls at all times with high QoS for all V oIP calls.When such current statistics are combined with the projected extra calls, we can predict the worst-case V oIP traffic load to be introduced to the existing network.Fig. 4 describes the call distribution for the enterprise under study based on the worst busy hour and the projected future growth of V oIP calls. In the figure, the call distribution is described as a probability tree. It is also possible to describe it as a probability matrix. Some important observations can be made about the voice traffic flow for inter-floor and external calls. For all these type of calls, the voice traffic has to be always routed through the router. This is so because Switchs 1 and 2 are layer 2 switches with VLANs configuration. One can observe that the traffic flow for inter-floor calls between Floors 1 and 2 imposes twice the load on Switch 1, as the traffic has to pass through the switch to the router and back to the switch again. Similarly, Switch 2 experiences twice the load for external calls from/to Floor 3.3.3. Define performance thresholds and growth capacityIn this step, we define the network performance thresholds or operational points for a number of important key network elements. These thresholds are to be considered when deploying the new service. The benefit is twofold. First, the requirements of the new service to be deployed are satisfied. Second, adding the new service leaves the network healthy and susceptible to future growth. Two important performance criteria are to be taken into account.First is the maximum tolerable end-to-end delay; and second is the utilization bounds or thresholds of network resources. 
The maximum tolerable end-to-end delay is determined by the most sensitive application to run on the network. In our case, it is 150 ms end-to-end for V oIP. It is imperative to note that if the network has certain delay sensitive applications, the delay for these applications should be monitored, when introducing V oIP traffic, such that they do not exceed their required maximum values. As for the utilization bounds for network resources, such bounds or thresholds are determined by factors such as current utilization, future plans, and foreseen growth of the network. Proper resource and capacity planning is crucial. Savvy network engineers must deploy new services with scalability in mind, and ascertain that the network will yield acceptable performance under heavy and peak loads, with no packet loss. V oIP requires almost no packet loss. In literature, 0.1–5% packet loss was generally asserted. However, in [24] the required V oIP packet loss was conservatively suggested to be less than 105 . A more practical packet loss, based on experimentation, of below 1% was required in [22]. Hence, it is extremely important not to utilize fully the network resources. As rule-of-thumb guideline for switched fast full-duplex Ethernet, the average utilization limit of links should be 190%, and for switched shared fast Ethernet, the average limit of links should be 85% [25]. The projected growth in users, network services, business, etc. must be all taken into consideration to extrapolate the required growth capacity or the future growth factor. In our study, we will ascertain that 25% of the available network capacity is reserved for future growth and expansion. For simplicity, we will apply this evenly to all network resources of the router, switches, and switched-Ethernet links. However, keep in mind this percentage in practice can be variable for each network resource and may depend on the current utilization and the required growth capacity. In our methodology, the reservation of this utilization of network resources is done upfront, before deploying the new service, and only the left-over capacity is used for investigating the network support of the new service to be deployed.3.4. Perform network measurementsIn order to characterize the existing network traffic load, utilization, and flow, networkmeasurements have to be performed. This is a crucial step as it can potentially affect results to be used in analytical study and simulation. There are a number of tools available commercially and noncommercially to perform network measurements. Popular open-source measurement tools include MRTG, STG, SNMPUtil, and GetIF [26]. A few examples of popular commercially measurement tools include HP OpenView, Cisco Netflow, Lucent VitalSuite, Patrol DashBoard, Omegon NetAlly, Avaya ExamiNet, NetIQ Vivinet Assessor, etc. Network measurements must be performed for network elements such as routers, switches, and links. Numerous types of measurements and statistics can be obtained using measurement tools. As a minimum, traffic rates in bits per second (bps) and packets per second (pps) must be measured for links directly connected to routers and switches. To get adequate assessment, network measurements have to be taken over a long period of time, at least 24-h period. Sometimes it is desirable to take measurements over several days or a week. One has to consider the worst-case scenario for network load or utilization in order to ensure good QoS at all times including peak hours. 
The peak hour is different from one network to another and it depends totally on the nature of business and the services provided by the network.Table 2 shows a summary of peak-hour utilization for traffic of links in both directions connected to the router and the two switches of the network topology of Fig. 1. These measured results will be used in our analysis and simulation study.外文文献译文以太网网络电话传送调度:方法论与案例分析摘要对网络数据研究者与设计师来说,IP电话或者语音IP电话调度是一项重大而艰巨的任务。
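The per-call bandwidth arithmetic of Section 3.1.2 above can be reproduced in a few lines, together with a rough upper bound on concurrent calls in the spirit of the paper's utilization thresholds and 25% growth reserve. The 100 Mbps link and the 90% average-utilization ceiling are assumptions applied to a hypothetical full-duplex link, and the bound ignores background traffic.

```python
# Per-call VoIP bandwidth, following Section 3.1.2 (G.711, 20 ms packetization).
voice_bytes = 160                      # bytes of speech samples per packet
headers = 12 + 8 + 20 + 26             # RTP + UDP + IP + Ethernet with preamble
packets_per_second = 50                # 20 ms of speech per packet
bits_per_packet = (voice_bytes + headers) * 8            # 226 bytes -> 1808 bits
one_way_kbps = bits_per_packet * packets_per_second / 1000  # 90.4 kbps
call_kbps = 2 * one_way_kbps                                 # 180.8 kbps both ways
print(one_way_kbps, call_kbps)

# Rough upper bound on concurrent calls for one hypothetical 100 Mbps link,
# reserving 25% of capacity for growth and assuming a 90% utilization ceiling.
link_kbps = 100_000
usable_kbps = link_kbps * 0.75 * 0.90
print(int(usable_kbps // one_way_kbps))   # calls per direction, ignoring other traffic
```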
无线路由器中英文外文翻译文献本文介绍了一些关于无线路由器的中英文外文翻译文献,并对其进行简要介绍。
- Author: John Smith
- Author: Jane Johnson
- Published in: Journal of Wireless Networking

3. Title: "Securing Wireless Routers: Best Practices and Vulnerabilities"
- Author: David Lee
- Published in: Journal of Internet Security

4. Title: "Wireless Router Placement for Optimal Coverage: A Case Study"
- Author: Sarah Chen
- Summary: This case study investigates the optimal placement of wireless routers to achieve maximum coverage. It explores factors that affect signal strength and coverage, such as obstacles and interference, and proposes strategies for router placement to improve network performance and expand coverage in different environments.

以上是一些关于无线路由器的中英文外文翻译文献的简要介绍。
这些文献涵盖了无线路由器的技术、性能评估、安全性和优化方面的研究,有助于了解无线路由器的相关知识和应用。
计算机网络中英文对照外文翻译文献
中英文资料外文翻译计算机网络计算机网络,通常简单的被称作是一种网络,是一家集电脑和设备为一体的沟通渠道,便于用户之间的沟通交流和资源共享。
网络可以根据其多种特点来分类。
计算机网络允许资源和信息在互联设备中共享。
一.历史早期的计算机网络通信始于20世纪50年代末,包括军事雷达系统、半自动地面防空系统及其相关的商业航空订票系统、半自动商业研究环境。
1957年俄罗斯向太空发射人造卫星。
十八个月后,美国开始设立高级研究计划局(ARPA)并第一次发射人造卫星。
然后用阿帕网上的另外一台计算机分享了这个信息。
这一切的负责者是美国博士莱德里尔克。
阿帕网始建于1969年，后来逐步演变为因特网。
上世纪60年代,高级研究计划局(ARPA)开始为美国国防部资助并设计高级研究计划局网(阿帕网)。
因特网的发展始于1969年,20世纪60年代起开始在此基础上设计开发,由此,阿帕网演变成现代互联网。
二.目的计算机网络可以被用于各种用途:为通信提供便利:使用网络,人们很容易通过电子邮件、即时信息、聊天室、电话、视频电话和视频会议来进行沟通和交流。
共享硬件:在网络环境下,每台计算机可以获取和使用网络硬件资源,例如打印一份文件可以通过网络打印机。
共享文件:数据和信息: 在网络环境中,授权用户可以访问存储在其他计算机上的网络数据和信息。
提供进入数据和信息共享存储设备的能力是许多网络的一个重要特征。
共享软件:用户可以连接到远程计算机的网络应用程序。
信息保存。
安全保证。
三.网络分类下面的列表显示用于网络分类:3.1连接方式计算机网络可以据硬件和软件技术分为用来连接个人设备的网络,如:光纤、局域网、无线局域网、家用网络设备、电缆通讯和G.hn(有线家庭网络标准)等等。
以太网的定义,它是由IEEE 802标准,并利用各种媒介,使设备之间进行通信的网络。
经常部署的设备包括网络集线器、交换机、网桥、路由器。
无线局域网技术是使用无线设备进行连接的。
计算机科学与技术专业无线局域网毕业论文外文文献翻译及原文
毕业设计（论文）外文文献翻译

文献、资料中文题目：无线局域网
专业：计算机科学与技术
翻译日期：2017.02.14

外文出处：Chris Haseman. Android Essentials [M]. London: Springer-Verlag, 2008: 8-13.

附件：1. 外文资料翻译译文；2. 外文原文。
附件1:外文资料翻译译文无线局域网一、为何使用无线局域网络对于局域网络管理主要工作之一,对于铺设电缆或是检查电缆是否断线这种耗时的工作,很容易令人烦躁,也不容易在短时间内找出断线所在。
再者,由于配合企业及应用环境不断的更新与发展,原有的企业网络必须配合重新布局,需要重新安装网络线路,虽然电缆本身并不贵,可是请技术人员来配线的成本很高,尤其是老旧的大楼,配线工程费用就更高了。
因此,架设无线局域网络就成为最佳解决方案。
二、什么情形需要无线局域网络无线局域网络绝不是用来替代有限局域网络,而是用来弥补有线局域网络之不足,以达到网络延伸之目的,下列情形可能须要无线局域网络。
●无固定工作场所的使用者●有线局域网络架设受环境限制●作为有线局域网络的备用系统三、无线局域网络存取技术目前厂商在设计无线局域网络产品时,有相当多种存取设计方式,大致可分为三大类:窄频微波技术、展频(Spread Spectrum)技术、及红外线(Infrared)技术,每种技术皆有其优缺点、限制及比较,接下来是这些技术方法的详细探讨。
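Of the three access technologies listed above, spread spectrum is the one behind IEEE 802.11-style WLAN products. The sketch below shows the basic direct-sequence idea: each data bit is multiplied by a faster chip sequence and the receiver recovers it by correlating against the same sequence. The 11-chip pattern here is an illustrative pseudo-noise sequence, not necessarily the Barker code actually used by 802.11.

```python
CHIPS = [1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1]   # illustrative 11-chip PN sequence

def spread(bits):
    """Direct-sequence spreading: each bit (+1/-1) is multiplied by the chip sequence."""
    return [b * c for b in bits for c in CHIPS]

def despread(chips):
    """Correlate each 11-chip block against the sequence to recover the bits."""
    bits = []
    for i in range(0, len(chips), len(CHIPS)):
        corr = sum(x * c for x, c in zip(chips[i:i + len(CHIPS)], CHIPS))
        bits.append(1 if corr > 0 else -1)
    return bits

data = [1, -1, 1]
assert despread(spread(data)) == data   # exact round trip; a few flipped chips would
                                        # usually still leave the correlation sign intact
```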
1.技术要求由于无线局域网需要支持高速、突发的数据业务,在室内使用还需要解决多径衰落以及各子网间串扰等问题。
具体来说,无线局域网必须实现以下技术要求:1)可靠性:无线局域网的系统分组丢失率应该低于10-5,误码率应该低于10-8。
无线局域网外文翻译
AbstractThis paper focuses on the development of an energy efficent street lighting remote management system making use of low-rate wireless personal area networks and the Digital Addressable Lighting Interface (DALI) protocol to get a duplex communication, necessary for checking lamp parameters like lamp status, current level ,etc.Because of the fact that two thirds of the installed street lighting systems use old and inefficient technologies there exists a huge potential to renew the street lighting and save in the energy consumption. The proposed system uses DALI protocol in street lighting increasing the maximum number of slave devices (ballasts) that can be controlled with DALI-originally it can only have 64 ballasts. Some aspects regarding the lighting control protocol and the communication system are discussed, presenting experimental results obtained from several tests.IntroductionTwo thirds of the current installed street lighting systems still use very old and inefficient technologies, that is , there exists a huge potential to renew the existing street lighting and save in the energy consumption[1]. It is estimated that nearly the 5% of the energy used in lighting applications is consumed by the street lighting, being the most important energy regarding the energy usage in a city . New industrial approaches have been develop recently in order to achieve an efficient lighting, which can be summarized in improvements in lamps' technology and electronic ballasts, soft start systems, noiseless performance and lighting automatisms.Saving energy in street lighting can be achieved with two methods,by controlling the light duration or by dimming. There also exist remote management systems that allow the user to keep an individual remote control and monitorization of every single lamp. By making use of these systems the operator can monitor the main parameters of any light point from a central or mobile unit. The obtained data are ready for processing, allowing the reckoning of statistical consumption, lamp status, voltages, anomalies,ect,decreasing the mean time to repair. Another interesting parameter could be the arc voltage level, which can mean the change of a corrective or preventive maintenance to a predictive one, saving money in the maintenance cost.In order to have an optimum control, the remote management system should allow a duplex or half/duplex communication between the user and the ballast; otherwise we could not know the lighting status. The management system is implemented using a communication system and a lighting control protocol. The communication system can be wired,such as Ethernet, optical fiber and Power Line Carrier(PLC) or wireless. Among the last group we have GSM/GPRS, RF,WiFi,WiMAX,IEEE802.15.4 and ZigBee have brought about the boom of wireless sensor networks(WSNs).A comprehensive study of the state of the art of WSNs and both standards can be found in [2] and [3].A WSN consists of tiny sensor nodes, sink nodes, an information transport network and personal computers. Usually, WSN architecture consists of three layers, the physical layer, the MAC layer and the application layer. The IEEE802.15.4 standard deals with Low-Rate Wireless Personal Area Networks(LR-WPAN); its aim is to standardize the two lower layers of OSI protocol stack, i.e.physical layer and medium access control layer. It only considers star and peer-to-peer network topologies. 
On the other hand , ZigBee defines the upper layers, network and application layers, its main contribution is to provide the ability of forming cluster,tree and meshnetwork topologies to IEEE802.15.4 applicationsAs regards the lighting control protocol, it can be chosen between an open protocol,like TCP/IP , BACNet, DMX512,LONWorks,X-10, 0-10 V or DALI, or proprietary.DALI stands for Digital Addressable Lighting Interface, it was defined by annex E.4 of IEC 60929 as a digital signal controller for tubular fluorescent lamp ballasts' control interface and modified by IEC 62386, which also integrates other application of DALI apart from lighting and extends the kind of lamp to high intensity discharge (HID), halogens, incandescent, LEDs,etc.This paper focuses on developing a street lighting management system by making use of wireless sensor networks and DALI ballasts, materials used in the system are described and results about tests and measurements are presented.BackgroundSeveral scientific researches have been carried out in order to take the WSN advantages to the street lighting systems. For example, reference[4] explains the development of a wireless control system based on ZigBee. Their system allows the user to control and monitor the state of the lighting , but they do not focus on the energy efficienty, just the maintenance and the removal of wires in public areas for the people safety. Reference[5] gives a more complex exemple of WSN applied to street lighting, they develop a system that consists of sensor nodes placed in streetlight poles, a sink node in transformer station which controls every sensor node placed in a pole that belongs to that transformer station. The information of any sink node is sent to the control center via GPRS. The system also has individual or bank dimming up to 60% in order to save the energy consumption. Reference[6] states the main features of a WSN to be used as a street lighting control system, they use 6LoWPAN instead of ZigBee due to ZigBee routing protocols drawbacks and the ease of adapting 6LoWPAN, which does not define routing protocols, to any specific system文摘这一张主要是关注基于无线节能局域网和DALI协议的节能型路灯的远程控制系统之间的连接,用于检查单个路灯,比如路灯的位置、路灯的电流等。
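Since the motivation above is saving energy by dimming and scheduling the lamps, a back-of-the-envelope estimate shows the scale involved. The lamp power, operating hours and dimming schedule below are assumed values for illustration only; they are not figures from the paper.

```python
# Rough nightly energy comparison for one street lamp (all numbers assumed).
lamp_watts = 150                    # full-power consumption of the lamp and ballast
hours_full = 12                     # baseline: full power all night

# Assumed dimming schedule: full power in the evening, 60% of full power after midnight.
schedule = [(5, 1.00), (7, 0.60)]   # (hours, fraction of full power)

baseline_kwh = lamp_watts * hours_full / 1000
dimmed_kwh = sum(lamp_watts * frac * hrs for hrs, frac in schedule) / 1000
saving = 1 - dimmed_kwh / baseline_kwh
print(f"baseline {baseline_kwh:.2f} kWh, dimmed {dimmed_kwh:.2f} kWh, saving {saving:.0%}")
```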
局域网外文翻译(一)
外文翻译论文题目:大型企业网络的设计与规划:贾龙飞学号:201120210230班级:1122102班年级:2011级专业:网络工程学院:软件学院指导教师:王志波(讲师)完成时间:二〇一五年四月目录PUTER NETWORKS (3)DATE COMMUNICATIONS (3)2.ARCHITECTURE OF COMPUTER NETWORKS (4)1.The OSI Reference Model (4)2.The TCP/IP Reference Model (4)3.WIDE AREA NETWORK (5)1.CIRCUIT SWITCHED NETWORKS (5)2.LEASED LINE NETWORKS (6)3.PACKET SWITCHING NETWORKS (6)4.LOCAL AREA NETWORK (7)1.Topology (7)2.Transmission media (7)3. Medium access control methods (7)1.计算机网络 (9)数据通信 (9)2.计算机网络的体系结构 (9)1.OSI参考模型 (10)2.TCP/IP参考模型 (10)3.广域网 (10)1.电路交换网 (11)2.专线网 (11)3.分组交换网 (11)4.局域网 (12)1.拓扑结构 (12)2.传输媒体 (12)3.媒体访问控制方法 (12)PUTER NETWORKSDATE COMMUNICATIONSThe end equipment which either generates the digital information for transmission or uses the received digital data can be computer ,printers ,keyboards, CRTs, and so on. This equipment generally manipulates digital information internally in word units—all the bits that make up a word in a particular piece of equipment are transferred in parallel. Digital data, when transmitted, are in serial form. Parallel transmission of an 8-bit word require eight pairs of transmission lines—not at all cost-effective. Data terminal (DTE) is a general phrase encompassing all of the circuitry necessary to perform parallel-to-serial and serial-to-parallel conversions for transmission and reception respectively and for data link management. The UART (Universal Asynchronous Receiver/Transmitter) and USART (Universal Asynchronous/Asynchronous Receiver/Transmitter) are the devices that perform the parallel-to-serial and serial-to-parallel conversions. The primary DTE includes a line control unit (LCU or LinCo) which controls the flow of information in a multipoint data link system. A station controller (STACO) is the corresponding that belonged to the subscriber in a data link system. Between the DTEs, starting with the modems, was communications equipment owned and maintained by Telco property.Data communications equipment (DCE) accepts the serial data stream from the DTE and converts it to some form of analog signal suitable for transmission on voice-grade lined. At the receive end, the DCE performs the reverse function of converting the received analog signal to a serial digital data stream. The simplest form of DCE is a modem (modulator/demodulator) or data set. At the transmit end, the modem can be considered a form of digital-to-analog converter, while at the receive end, it can considered a form of analog-to-digital converter. The most common of modulation by modems are frequency shift keying (FSK), phase shift keying (PSK), and quadrature amplitude modulation (QAM). This is a typically data transmission mode using the analog telephone lines. If you transmit data by digital channel (sometimes it is called “Digital T-carrier”), a pulse Code Modulation (PCM) equipment must be used. A microwave transmission system can also be used for the data communication. Finally, you can use the satellitecommunication system for data transmission.If the cables and signal levels used to interconnect the DTE and DCE were left unregulated, the variations generated would probably be proportional to the number of manufacturers. Electronics industries Association (EIA),an organization of manufactures with establishing the DTE and modem. This is a 25-pincable whose pins have designated functions and specified signal levels. 
The RS-232C is anticipated to be replaced by an update standard.2.ARCHITECTURE OF COMPUTER NETWORKSComputer network is a complex consisting of two or more conned computing units, it is used for the purpose of data communication and resource resource sharing. Design of a network and its logical structure should comply with a set of design principles, including the organization of functions and the description of data formats and procedure. This is the network architecture and so called as a set of layers and protocols, because the architecture is a layer-based.In the next two sections we will discuss two important network architectures, the OSI reference model and the TCP/IP reference model.1.The OSI Reference ModelThe OSI model is shown in Fig.14-2(minus the physical medium). This model is based on a proposal developed by the International Standards Organizations (OSI) as the first step toward international standardization of the protocols used in the various layers. The model is called the ISO OSI (Open System Interconnection) Reference Model because it deals with connecting open systems--that is, systems that are open for communication with other systems, We will usually just call it the OSI model for short.The OSI model has seven has seven layers. Note that the OSI model itself is not a network architecture because it does not specify the exacts and protocols to be used in each layer. It just tells what each layer should do. However , However, ISO has also produced standards for all the layers, although these are not part of the reference model itself. Each one has been published as a separate international standard.2.The TCP/IP Reference ModelThe TCP/IP reference model is an early transport protocol which was designed by the US Department of Defence (DOD) around in 1978. It is often claimedthat it gave rise the OSI “connectionless”mode of operation. TCP/IP is still used extensively and is called as a industrial standard of internet work in fact, TCP/IP has two parts: TCP and IP. TCP means it is on the transport layer and IP means it is on the network layer separately.1.There are two end-to-end protocols in the transport layer, one of which is TCP (Transmission Control Protocol) , another is UDP (User Datagram Protocol). TCP is a connection-oriented protocol that allows a byte stream originating on one machine to be delivered without error on any other machine in the internet. UDP is an unreliable, connectionless protocol for application that do not want TCP’s sequencing of flows control flow control and wish to provide their own.2.The network layer defines an official packet format and protocol called IP (Internet protocol). The job of the network layer is to deliver IP packets where they are supposed to go.The TCP/IP Reference Model is shown in Fig.14.3. On top of the transport layer is the application layer, It contains all the higher-level protocols. The early ones included virtual terminal (TELNET), file transfer (FTP), electronic mail (SMTP) and domain name(DNS).3.WIDE AREA NETWORKA wide area network, or WAN, spans a large geographical area, often a country or continent . It contains a collection of machines intended for running user (i. e. , application) programs. We will follow traditional usage and call these machines hosts. By a communication subnet, or just subnet for short. The job of the subnet is to carry messages from host to host, just as the telephone system carries words from speaker to listener. 
By separating the pure communication aspects of the network (the subnet) from the application aspects (the hosts), the complete network design is greatly simplified. Relation between hosts and the subnet is shown in Fig.14-4.One of many methods that can be used to categorize wide area networks is with respect to the flow of information on a transmission facility. If we use this method to categorize wide area networks, we can group them into three basic types: circuit switched, leased line and packet switched.1.CIRCUIT SWITCHED NETWORKSThe most popular type of network and the one almost all readers use on a dailybasis is a circuit switched network. The public switched telephone network, however, is not limited to the telephone company, By purchasing appropriate switching equipment, any organization can construct their own internal circuit switched network and, if desired, provide one or more interfaces to the public switched network to allow voice and data transmission to flow between the public network and their private internal network2.LEASED LINE NETWORKSThis is a dedicated network connected by leased lines. Leased line is a communications line reserved for the exclusive use of a leasing customer without inter-exchange switching arrangements. Leased or private lines are dedicated to the user. This advantage is that the terminal or computer is a always physically connected to the line. Very short response times are met with.3.PACKET SWITCHING NETWORKSA packet network is constructed through the use of equipment that assembles and disassembles packets, equipment that routes packet, and transmission facilities used to route packets from the originator to the destination device. Some types of data terminal equipment (DTE) can create their own packets, while other types of DTE require the conversion of their protocol into packets through the use of a packet assembler / disassemble (PAD). Packets are routed through the network by packet switches. Packet switches examine the destination of packets as they flow through the network and transfer the packets onto trunks interconnecting switches based upon the packet destination destination and network activity.Many older pubic networks follow a standard called X.25. It was developed during 1970s by CCITT to provide an interface between public packet-switched network and their customers.CCITT Recommendation X.25 controls the access from a packet mode DTE, such as a terminal device or computer system capable of forming packets, to the DCE at a packet mode. CCITT Recommendation X.28 controls the interface between non-packet mode devices that cannot interface between the PAD and the host computer. CCITT Recommendation X.3 specifies the parameter settings on the PAD and X.75 specifies the interface between packet network.4.LOCAL AREA NETWORKLocal area data network , normally referred to simply as local area network or LANs, are used to interconnect distributed communities of computer-based DTEs located within a building or localized group of building. For example, a LAN may be used to interconnect workstations distributed around offices within a single building or a group of buildings such as a university campus. Alternatively, it may be complex. Since all the equipment is located within a single establishment, however, LANs are normally installed and maintained by the organization. 
Hence they are also referred to as private data networks.

The main difference between a communication path established using a LAN and a connection made through a public data network is that a LAN normally offers much higher data transmission rates, because of the relatively short physical separations involved. In the context of the ISO Reference Model for OSI, however, this difference manifests itself only at the lower, network-dependent layers. In many instances the higher protocol layers in the reference model are the same for both types of network.

Before describing the structure and operation of the different types of LAN, it is perhaps helpful to first identify some of the selection issues that must be considered. It should be stressed that this is only a summary; there are also many possible links between the tips of the branches associated with the figure.

1. Topology

Most wide area networks, such as the PSTN, use a mesh (sometimes referred to as a network) topology. With LANs, however, the limited physical separation of the subscriber DTEs allows simpler topologies to be used. The four topologies in common use are star, bus, ring and hub. The most widespread topology for LANs designed to function as data communication subnetworks for the interconnection of local computer-based equipment is the hub topology, which is a variation of the bus and ring. Sometimes it is called the hub/tree topology.

2. Transmission media

Twisted pair, coaxial cable and optical fiber are the three main types of transmission medium used for LANs.

3. Medium access control methods

Two techniques have been adopted for medium access control in LANs. They are carrier-sense multiple access with collision detection (CSMA/CD), for bus network topologies, and control token, for use with either bus or ring networks.

CSMA/CD is used to control multiple-access networks. Each station on the network "listens" before attempting to send a message, waiting for the "traffic" to clear. If two stations try to send their messages at exactly the same time, a "collision" is detected, and both stations are required to "step back" and try later.

Control token is another way of controlling access to a shared transmission medium, namely by the use of a control (permission) token. This token is passed from one DTE to another according to a defined set of rules understood and adhered to by all DTEs connected to the medium. A DTE may only transmit a frame when it is in possession of the token and, after it has transmitted the frame, it passes the token on to allow another DTE to access the transmission medium.
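The control-token rule just described can be mimicked in a few lines. Below is a toy sketch assuming three stations with invented names and queued frames; it illustrates the passing rule only and is not an implementation of any particular token-ring standard.

from collections import deque

stations = {
    "A": deque(["A->C: report"]),
    "B": deque([]),
    "C": deque(["C->A: ack", "C->B: file"]),
}

def token_ring(stations, rounds=3):
    order = list(stations)                  # fixed passing order A -> B -> C -> A ...
    holder = 0                              # index of the DTE currently holding the token
    for _ in range(rounds * len(order)):
        name = order[holder]
        if stations[name]:                  # a DTE transmits only while it holds the token
            print(name, "transmits:", stations[name].popleft())
        holder = (holder + 1) % len(order)  # pass the token on to the next DTE

token_ring(stations)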
Computer Science Graduation Thesis -- Foreign Literature Translation (Chinese-English): IEEE 802.11 Medium Access Control
English Material and Chinese Translation

IEEE 802.11 MEDIUM ACCESS CONTROL

The IEEE 802.11 MAC layer covers three functional areas: reliable data delivery, medium access control, and security. This section covers the first two topics.

Reliable Data Delivery

As with any wireless network, a wireless LAN using the IEEE 802.11 physical and MAC layers is subject to considerable unreliability. Noise, interference, and other propagation effects result in the loss of a significant number of frames. Even with error-correction codes, a number of MAC frames may not be received successfully. This situation can be dealt with by reliability mechanisms at a higher layer, such as TCP. However, timers used for retransmission at higher layers are typically on the order of seconds. It is therefore more efficient to deal with errors at the MAC level. For this purpose, IEEE 802.11 includes a frame exchange protocol. When a station receives a data frame from another station, it returns an acknowledgment (ACK) frame to the source station. This exchange is treated as an atomic unit, not to be interrupted by a transmission from any other station. If the source does not receive an ACK within a short period of time, either because its data frame was damaged or because the returning ACK was damaged, the source retransmits the frame.

Thus, the basic data transfer mechanism in IEEE 802.11 involves an exchange of two frames. To further enhance reliability, a four-frame exchange may be used. In this scheme, a source first issues a request to send (RTS) frame to the destination. The destination then responds with a clear to send (CTS). After receiving the CTS, the source transmits the data frame, and the destination responds with an ACK. The RTS alerts all stations that are within reception range of the source that an exchange is under way; these stations refrain from transmission in order to avoid a collision between two frames transmitted at the same time. Similarly, the CTS alerts all stations that are within reception range of the destination that an exchange is under way. The RTS/CTS portion of the exchange is a required function of the MAC but may be disabled.

Medium Access Control

The 802.11 working group considered two types of proposals for a MAC algorithm: distributed access protocols, which, like Ethernet, distribute the decision to transmit over all the nodes using a carrier-sense mechanism; and centralized access protocols, which involve regulation of transmission by a centralized decision maker. A distributed access protocol makes sense for an ad hoc network of peer workstations (typically an IBSS) and may also be attractive in other wireless LAN configurations that consist primarily of bursty traffic. A centralized access protocol is natural for configurations in which a number of wireless stations are interconnected with each other and with some sort of base station that attaches to a backbone wired LAN; it is especially useful if some of the data is time-sensitive or high priority.

The end result for 802.11 is a MAC algorithm called DFWMAC (distributed foundation wireless MAC) that provides a distributed access control mechanism with an optional centralized control built on top of that. Figure 14.5 illustrates the architecture. The lower sub-layer of the MAC layer is the distributed coordination function (DCF). DCF uses a contention algorithm to provide access to all traffic. Ordinary asynchronous traffic directly uses DCF. The point coordination function (PCF) is a centralized MAC algorithm used to provide contention-free service.
PCF is built on top of DCF and exploits features of DCF to assure access for its users. Let us consider these two sub-layers in turn.

Figure 14.5 IEEE 802.11 Protocol Architecture

Distributed Coordination Function

The DCF sub-layer makes use of a simple CSMA (carrier sense multiple access) algorithm, which functions as follows. If a station has a MAC frame to transmit, it listens to the medium. If the medium is idle, the station may transmit; otherwise the station must wait until the current transmission is complete before transmitting. The DCF does not include a collision detection function (i.e., CSMA/CD) because collision detection is not practical on a wireless network. The dynamic range of the signals on the medium is very large, so that a transmitting station cannot effectively distinguish incoming weak signals from noise and the effects of its own transmission.

To ensure the smooth and fair functioning of this algorithm, DCF includes a set of delays that amounts to a priority scheme. Let us start by considering a single delay known as an inter-frame space (IFS). In fact, there are three different IFS values, but the algorithm is best explained by initially ignoring this detail. Using an IFS, the rules for CSMA access are as follows (Figure 14.6):

Figure 14.6 IEEE 802.11 Medium Access Control Logic

1. A station with a frame to transmit senses the medium. If the medium is idle, it waits to see if the medium remains idle for a time equal to IFS. If so, the station may transmit immediately.

2. If the medium is busy (either because the station initially finds the medium busy or because the medium becomes busy during the IFS idle time), the station defers transmission and continues to monitor the medium until the current transmission is over.

3. Once the current transmission is over, the station delays another IFS. If the medium remains idle for this period, then the station backs off a random amount of time and again senses the medium. If the medium is still idle, the station may transmit. During the back-off time, if the medium becomes busy, the back-off timer is halted and resumes when the medium becomes idle.

4. If the transmission is unsuccessful, which is determined by the absence of an acknowledgement, then it is assumed that a collision has occurred.

To ensure that back-off maintains stability, a technique known as binary exponential back-off is used. A station will attempt to transmit repeatedly in the face of repeated collisions, but after each collision, the mean value of the random delay is doubled, up to some maximum value. Binary exponential back-off provides a means of handling a heavy load. Repeated failed attempts to transmit result in longer and longer back-off times, which helps to smooth out the load. Without such a back-off, the following situation could occur. Two or more stations attempt to transmit at the same time, causing a collision. These stations then immediately attempt to retransmit, causing a new collision.

The preceding scheme is refined for DCF to provide priority-based access by the simple expedient of using three values for IFS:

● SIFS (short IFS): The shortest IFS, used for all immediate response actions, as explained in the following discussion

● PIFS (point coordination function IFS): A mid-length IFS, used by the centralized controller in the PCF scheme when issuing polls

● DIFS (distributed coordination function IFS): The longest IFS, used as a minimum delay for asynchronous frames contending for access

Figure 14.7a illustrates the use of these time values.
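The contention rules and binary exponential back-off described above can be sketched as follows. The contention-window limits used here (31 and 1023 slots) are typical 802.11 values rather than values taken from this text, and the collision probability is invented for the simulation.

import random

CW_MIN, CW_MAX = 31, 1023        # typical 802.11 contention window limits, in slots

def backoff_slots(retries):
    """Random back-off drawn from a window that roughly doubles with each retry."""
    cw = min(CW_MAX, (CW_MIN + 1) * (2 ** retries) - 1)
    return random.randint(0, cw)

def try_to_send(frame, max_retries=7, collision_probability=0.3):
    for retries in range(max_retries):
        slots = backoff_slots(retries)       # wait DIFS plus this many idle slots
        if random.random() > collision_probability:
            return f"{frame} sent after {retries} retries ({slots} back-off slots)"
    return f"{frame} dropped after {max_retries} attempts"

print(try_to_send("DATA#1"))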
Consider first the SIFS. Any station using SIFS to determine transmission opportunity has, in effect, the highest priority, because it will always gain access in preference to a station waiting an amount of time equal to PIFS or DIFS. The SIFS is used in the following circumstances:

● Acknowledgment (ACK): When a station receives a frame addressed only to itself (not multicast or broadcast), it responds with an ACK frame after waiting only for an SIFS gap. This has two desirable effects. First, because collision detection is not used, the likelihood of collisions is greater than with CSMA/CD, and the MAC-level ACK provides for efficient collision recovery. Second, the SIFS can be used to provide efficient delivery of an LLC protocol data unit (PDU) that requires multiple MAC frames. In this case, the following scenario occurs. A station with a multi-frame LLC PDU to transmit sends out the MAC frames one at a time. Each frame is acknowledged after SIFS by the recipient. When the source receives an ACK, it immediately (after SIFS) sends the next frame in the sequence. The result is that once a station has contended for the channel, it will maintain control of the channel until it has sent all of the fragments of an LLC PDU.

● Clear to Send (CTS): A station can ensure that its data frame will get through by first issuing a small Request to Send (RTS) frame. The station to which this frame is addressed should immediately respond with a CTS frame if it is ready to receive. All other stations receive the RTS and defer using the medium.

● Poll response: This is explained in the following discussion of PCF.

Figure 14.7 IEEE 802.11 MAC Timing: (a) basic access method; (b) PCF super-frame construction

The next longest IFS interval is the PIFS. This is used by the centralized controller in issuing polls and takes precedence over normal contention traffic. However, those frames transmitted using SIFS have precedence over a PCF poll. Finally, the DIFS interval is used for all ordinary asynchronous traffic.

Point Coordination Function

PCF is an alternative access method implemented on top of the DCF. The operation consists of polling by the centralized polling master (point coordinator). The point coordinator makes use of PIFS when issuing polls. Because PIFS is smaller than DIFS, the point coordinator can seize the medium and lock out all asynchronous traffic while it issues polls and receives responses.

As an extreme, consider the following possible scenario. A wireless network is configured so that a number of stations with time-sensitive traffic are controlled by the point coordinator while the remaining traffic contends for access using CSMA. The point coordinator could issue polls in a round-robin fashion to all stations configured for polling. When a poll is issued, the polled station may respond using SIFS. If the point coordinator receives a response, it issues another poll using PIFS. If no response is received during the expected turnaround time, the coordinator issues a poll.

If the discipline of the preceding paragraph were implemented, the point coordinator would lock out all asynchronous traffic by repeatedly issuing polls. To prevent this, an interval known as the super-frame is defined. During the first part of this interval, the point coordinator issues polls in a round-robin fashion to all stations configured for polling. The point coordinator then idles for the remainder of the super-frame, allowing a contention period for asynchronous access.
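The relationship between the three IFS values discussed in this section can be made concrete with a small calculation. The relations PIFS = SIFS + one slot time and DIFS = SIFS + two slot times follow the standard; the numeric SIFS and slot values below are those of the 802.11b DSSS physical layer and are given only as an example.

SIFS_US = 10          # short IFS in microseconds (802.11b DSSS)
SLOT_US = 20          # slot time in microseconds (802.11b DSSS)

PIFS_US = SIFS_US + SLOT_US        # point coordinator wins over ordinary traffic
DIFS_US = SIFS_US + 2 * SLOT_US    # ordinary asynchronous traffic waits longest

print(f"SIFS={SIFS_US}us  PIFS={PIFS_US}us  DIFS={DIFS_US}us")
# -> SIFS=10us  PIFS=30us  DIFS=50us: the shorter the wait, the higher the effective priority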
Figure 14.7b illustrates the use of the super-frame. At the beginning of a super-frame, the point coordinator may optionally seize control and issue polls for a given period of time. This interval varies because of the variable frame size issued by responding stations. The remainder of the super-frame is available for contention-based access. At the end of the super-frame interval, the point coordinator contends for access to the medium using PIFS. If the medium is idle, the point coordinator gains immediate access and a full super-frame period follows. However, the medium may be busy at the end of a super-frame. In this case, the point coordinator must wait until the medium is idle to gain access; this results in a foreshortened super-frame period for the next cycle.

Figure 14.8 IEEE 802.11 MAC Frame Format: (a) MAC frame, with field lengths in octets (FC = frame control, D/I = duration/connection ID, SC = sequence control, FCS = frame check sequence); (b) frame control field, with field lengths in bits (DS = distribution system, MF = more fragments, RT = retry, PM = power management, MD = more data, W = wired equivalent privacy, O = order)

MAC Frame

Figure 14.8a shows the 802.11 frame format when no security features are used. This general format is used for all data and control frames, but not all fields are used in all contexts. The fields are as follows:

● Frame Control: Indicates the type of frame and provides control information, as explained presently.

● Duration/Connection ID: If used as a duration field, indicates the time (in microseconds) the channel will be allocated for successful transmission of a MAC frame. In some control frames, this field contains an association, or connection, identifier.

● Addresses: The number and meaning of the 48-bit address fields depend on context. The transmitter address and receiver address are the MAC addresses of stations joined to the BSS that are transmitting and receiving frames over the wireless LAN. The service set ID (SSID) identifies the wireless LAN over which a frame is transmitted. For an IBSS, the SSID is a random number generated at the time the network is formed. For a wireless LAN that is part of a larger configuration, the SSID identifies the BSS over which the frame is transmitted; specifically, the SSID is the MAC-level address of the AP for this BSS (Figure 14.4). Finally, the source address and destination address are the MAC addresses of stations, wireless or otherwise, that are the ultimate source and destination of this frame. The source address may be identical to the transmitter address and the destination address may be identical to the receiver address.

● Sequence Control: Contains a 4-bit fragment number subfield, used for fragmentation and reassembly, and a 12-bit sequence number used to number frames sent between a given transmitter and receiver.

● Frame Body: Contains an MSDU or a fragment of an MSDU. The MSDU is an LLC protocol data unit or MAC control information.

● Frame Check Sequence: A 32-bit cyclic redundancy check.

The frame control field, shown in Figure 14.8b, consists of the following fields:

● Protocol Version: 802.11 version, current version 0.

● Type: Identifies the frame as control, management, or data.

● Subtype: Further identifies the function of the frame.
Table 14.4 defines the valid combinations of type and subtype.

● To DS: The MAC coordination sets this bit to 1 in a frame destined to the distribution system.

● From DS: The MAC coordination sets this bit to 1 in a frame leaving the distribution system.

● More Fragments: Set to 1 if more fragments follow this one.

● Retry: Set to 1 if this is a retransmission of a previous frame.

● Power Management: Set to 1 if the transmitting station is in a sleep mode.

● More Data: Indicates that a station has additional data to send. Each block of data may be sent as one frame or a group of fragments in multiple frames.

● WEP: Set to 1 if the optional wired equivalent privacy (WEP) protocol is implemented. WEP is used in the exchange of encryption keys for secure data exchange. This bit also is set if the newer WPA security mechanism is employed, as described in Section 14.6.

● Order: Set to 1 in any data frame sent using the Strictly Ordered service, which tells the receiving station that frames must be processed in order.

We now look at the various MAC frame types.

Control Frames

Control frames assist in the reliable delivery of data frames. There are six control frame subtypes:

● Power Save-Poll (PS-Poll): This frame is sent by any station to the station that includes the AP (access point). Its purpose is to request that the AP transmit a frame that has been buffered for this station while the station was in power-saving mode.

● Request to Send (RTS): This is the first frame in the four-way frame exchange discussed under the subsection on reliable data delivery at the beginning of Section 14.3. The station sending this message is alerting a potential destination, and all other stations within reception range, that it intends to send a data frame to that destination.

● Clear to Send (CTS): This is the second frame in the four-way exchange. It is sent by the destination station to the source station to grant permission to send a data frame.

● Acknowledgment: Provides an acknowledgment from the destination to the source that the immediately preceding data, management, or PS-Poll frame was received correctly.

● Contention-Free (CF)-End: Announces the end of a contention-free period that is part of the point coordination function.

● CF-End+CF-Ack: Acknowledges the CF-End. This frame ends the contention-free period and releases stations from the restrictions associated with that period.

Data Frames

There are eight data frame subtypes, organized into two groups. The first four subtypes define frames that carry upper-level data from the source station to the destination station. The four data-carrying frames are as follows:

● Data: This is the simplest data frame. It may be used in both a contention period and a contention-free period.

● Data+CF-Ack: May only be sent during a contention-free period. In addition to carrying data, this frame acknowledges previously received data.

● Data+CF-Poll: Used by a point coordinator to deliver data to a mobile station and also to request that the mobile station send a data frame that it may have buffered.

● Data+CF-Ack+CF-Poll: Combines the functions of the Data+CF-Ack and Data+CF-Poll into a single frame.

The remaining four subtypes of data frames do not in fact carry any user data. The Null Function data frame carries no data, polls, or acknowledgments. It is used only to carry the power management bit in the frame control field to the AP, to indicate that the station is changing to a low-power operating state.
The remaining three frames (CF-Ack, CF-Poll, CF-Ack+CF-Poll) have the same functionality as the corresponding data frame subtypes in the preceding list (Data+CF-Ack, Data+CF-Poll, Data+CF-Ack+CF-Poll) but without the data.

Management Frames

Management frames are used to manage communications between stations and APs. The following subtypes are included:

● Association Request: Sent by a station to an AP to request an association with this BSS. This frame includes capability information, such as whether encryption is to be used and whether this station is pollable.

● Association Response: Returned by the AP to the station to indicate whether it is accepting this association request.

● Reassociation Request: Sent by a station when it moves from one BSS to another and needs to make an association with the AP in the new BSS. The station uses reassociation rather than simply association so that the new AP knows to negotiate with the old AP for the forwarding of data frames.

● Reassociation Response: Returned by the AP to the station to indicate whether it is accepting this reassociation request.

● Probe Request: Used by a station to obtain information from another station or AP. This frame is used to locate an IEEE 802.11 BSS.

● Probe Response: Response to a probe request.

● Beacon: Transmitted periodically to allow mobile stations to locate and identify a BSS.

● Announcement Traffic Indication Message: Sent by a mobile station to alert other mobile stations that may have been in low power mode that this station has frames buffered and waiting to be delivered to the station addressed in this frame.

● Disassociation: Used by a station to terminate an association.

● Authentication: Multiple authentication frames are used in an exchange to authenticate one station to another.

● Deauthentication: Sent by a station to another station or AP to indicate that it is terminating secure communications.
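As a summary of the frame control layout described above, the following sketch unpacks the 2-byte frame control value into its subfields. The bit layout follows the field order listed in the text (protocol version, type, subtype, then the eight single-bit flags); it is a simplified illustration, not a complete 802.11 frame parser.

def parse_frame_control(fc: int) -> dict:
    """fc is the 16-bit frame control value, least significant bits first."""
    return {
        "protocol_version": fc & 0b11,
        "type":       (fc >> 2) & 0b11,    # 0 = management, 1 = control, 2 = data
        "subtype":    (fc >> 4) & 0b1111,
        "to_ds":      (fc >> 8) & 1,
        "from_ds":    (fc >> 9) & 1,
        "more_frag":  (fc >> 10) & 1,
        "retry":      (fc >> 11) & 1,
        "power_mgmt": (fc >> 12) & 1,
        "more_data":  (fc >> 13) & 1,
        "wep":        (fc >> 14) & 1,
        "order":      (fc >> 15) & 1,
    }

# Example: a retried data frame headed to the distribution system.
fields = parse_frame_control((1 << 11) | (1 << 8) | (2 << 2))
print(fields["type"], fields["to_ds"], fields["retry"])   # -> 2 1 1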
LAN Switch: Chinese-English Foreign Literature Translation
English: LAN Switch Architecture

This chapter introduces many of the concepts behind LAN switching common to all switch vendors. The chapter begins by looking at how data are received by a switch, followed by mechanisms used to switch data as efficiently as possible, and concludes with forwarding data toward their destinations. These concepts are not specific to Cisco and are valid when examining the capabilities of any LAN switch.

1. Receiving Data -- Switching Modes

The first step in LAN switching is receiving the frame or packet, depending on the capabilities of the switch, from the transmitting device or host. Switches making forwarding decisions only at Layer 2 of the OSI model refer to data as frames, while switches making forwarding decisions at Layer 3 and above refer to data as packets. This chapter's examination of switching begins from a Layer 2 point of view. Depending on the model, varying amounts of each frame are stored and examined before being switched.

Three types of switching modes have been supported on Catalyst switches:

• Cut through

• Fragment free

• Store and forward

These three switching modes differ in how much of the frame is received and examined by the switch before a forwarding decision is made. The next sections describe each mode in detail.

1.1 Cut-Through Mode

Switches operating in cut-through mode receive and examine only the first 6 bytes of a frame. These first 6 bytes represent the destination MAC address of the frame, which is sufficient information to make a forwarding decision. Although cut-through switching offers the least latency when transmitting frames, it is susceptible to transmitting fragments created via Ethernet collisions, runts (frames less than 64 bytes), or damaged frames.

1.2 Fragment-Free Mode

Switches operating in fragment-free mode receive and examine the first 64 bytes of a frame. Fragment free is referred to as "fast forward" mode in some Cisco Catalyst documentation. Why examine 64 bytes? In a properly designed Ethernet network, collision fragments must be detected in the first 64 bytes.

1.3 Store-and-Forward Mode

Switches operating in store-and-forward mode receive and examine the entire frame, resulting in the most error-free type of switching. As switches utilizing faster processors and application-specific integrated circuits (ASICs) were introduced, support for cut-through and fragment-free switching was no longer necessary. As a result, all new Cisco Catalyst switches utilize store-and-forward switching. Figure 2-1 compares each of the switching modes.

Figure 2-1. Switching Modes

2. Switching Data

Regardless of how many bytes of each frame are examined by the switch, the frame must eventually be switched from the input or ingress port to one or more output or egress ports. A switch fabric is a general term for the communication channels used by the switch to transport frames, carry forwarding decision information, and relay management information throughout the switch. A comparison could be made between the switching fabric in a Catalyst switch and a transmission in an automobile. In an automobile, the transmission is responsible for relaying power from the engine to the wheels of the car. In a Catalyst switch, the switch fabric is responsible for relaying frames from an input or ingress port to one or more output or egress ports. Regardless of model, whenever a new switching platform is introduced, the documentation will generally refer to the "transmission" as the switching fabric.
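The practical difference between the three switching modes described above is simply how much of the frame must arrive before a forwarding decision can be made, which can be captured in a few lines. This is an illustrative sketch, not Cisco code.

def bytes_needed(mode: str, frame_length: int) -> int:
    if mode == "cut-through":
        return 6                 # destination MAC address only
    if mode == "fragment-free":
        return 64                # collision fragments show up within the first 64 bytes
    if mode == "store-and-forward":
        return frame_length      # whole frame, so the FCS can be checked
    raise ValueError(mode)

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, "examines", bytes_needed(mode, 1518), "bytes of a 1518-byte frame")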
Although a variety of techniques have been used to implement switching fabrics on Cisco Catalyst platforms, two major architectures of switch fabrics are common:

• Shared bus

• Crossbar

2.1 Shared Bus Switching

In a shared bus architecture, all line modules in the switch share one data path. A central arbiter determines how and when to grant requests for access to the bus from each line card. Various methods of achieving fairness can be used by the arbiter depending on the configuration of the switch. A shared bus architecture is much like multiple lines at an airport ticket counter, with only one ticketing agent processing customers at any given time.

Figure 2-2 illustrates a round-robin servicing of frames as they enter a switch. Round-robin is the simplest method of servicing frames in the order in which they are received. Current Catalyst switching platforms such as the Catalyst 6500 support a variety of quality of service (QoS) features to provide priority service to specified traffic flows.

Figure 2-2. Round-Robin Service Order

The following list and Figure 2-3 illustrate the basic concept of moving frames from the received port, or ingress, to the transmit port(s), or egress, using a shared bus architecture:

1. Frame received from Host1 -- The ingress port on the switch receives the entire frame from Host1 and stores it in a receive buffer. The port checks the frame's Frame Check Sequence (FCS) for errors. If the frame is defective (runt, fragment, invalid CRC, or giant), the port discards the frame and increments the appropriate counter.

2. Requesting access to the data bus -- A header containing information necessary to make a forwarding decision is added to the frame. The line card then requests access, or permission, to transmit the frame onto the data bus.

3. Frame transmitted onto the data bus -- After the central arbiter grants access, the frame is transmitted onto the data bus.

4. Frame is received by all ports -- In a shared bus architecture, every frame transmitted is received by all ports simultaneously. In addition, the frame is received by the hardware necessary to make a forwarding decision.

5. Switch determines which port(s) should transmit the frame -- The information added to the frame in step 2 is used to determine which ports should transmit the frame. In some cases, such as frames with an unknown destination MAC address or broadcast frames, the switch will transmit the frame out all ports except the one on which the frame was received.

6. Port(s) instructed to transmit, remaining ports discard the frame -- Based on the decision in step 5, a certain port or ports are told to transmit the frame while the rest are told to discard or flush the frame.

7. Egress port transmits the frame to Host2 -- In this example, it is assumed that the location of Host2 is known to the switch and only the port connecting to Host2 transmits the frame.

One advantage of a shared bus architecture is that every port except the ingress port receives a copy of the frame automatically, easily enabling multicast and broadcast traffic without the need to replicate the frames for each port. This example is greatly simplified and will be discussed in detail for Catalyst platforms that utilize a shared bus architecture in Chapter 3, "Catalyst Switching Architecture."

Figure 2-3. Frame Flow in a Shared Bus
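The frame flow just listed can be mimicked with a toy model: line cards queue frames, a central arbiter grants the single bus in round-robin order, and either the known egress port or every other port transmits the frame. Port numbers, MAC addresses, and payloads are invented for the example.

from collections import deque

class SharedBusSwitch:
    def __init__(self, ports):
        self.queues = {p: deque() for p in ports}      # ingress buffer per port
        self.mac_table = {}                            # destination MAC -> egress port

    def receive(self, port, dst_mac, payload):
        self.queues[port].append((dst_mac, payload))

    def arbitrate_once(self):
        for port, q in self.queues.items():            # round-robin over line cards
            if not q:
                continue
            dst, payload = q.popleft()                 # this frame wins the shared bus
            egress = self.mac_table.get(dst)           # unknown destination -> flood
            targets = [egress] if egress else [p for p in self.queues if p != port]
            for t in targets:
                print(f"port {t} transmits frame for {dst}: {payload}")

sw = SharedBusSwitch(ports=[1, 2, 3, 4])
sw.mac_table["00:aa"] = 3
sw.receive(1, "00:aa", "hello")     # known unicast -> only port 3 transmits
sw.receive(2, "00:bb", "who?")      # unknown destination -> flooded to ports 1, 3, 4
sw.arbitrate_once()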
2.2 Crossbar Switching

In the shared bus architecture example, the speed of the shared data bus determines much of the overall traffic handling capacity of the switch. Because the bus is shared, line cards must wait their turns to communicate, and this limits overall bandwidth. A solution to the limitations imposed by the shared bus architecture is the implementation of a crossbar switch fabric, as shown in Figure 2-4. The term crossbar means different things on different switch platforms, but essentially indicates multiple data channels or paths between line cards that can be used simultaneously.

In the case of the Cisco Catalyst 5500 series, one of the first crossbar architectures advertised by Cisco, three individual 1.2-Gbps data buses are implemented. Newer Catalyst 5500 series line cards have the necessary connector pins to connect to all three buses simultaneously, taking advantage of 3.6 Gbps of aggregate bandwidth. Legacy line cards from the Catalyst 5000 are still compatible with the Catalyst 5500 series by connecting to only one of the three data buses. Access to all three buses is required by Gigabit Ethernet cards on the Catalyst 5500 platform.

A crossbar fabric on the Catalyst 6500 series is enabled with the Switch Fabric Module (SFM) and Switch Fabric Module 2 (SFM2). The SFM provides 128 Gbps of bandwidth (256 Gbps full duplex) to line cards via 16 individual 8-Gbps connections to the crossbar switch fabric. The SFM2 was introduced to support the Catalyst 6513 13-slot chassis and includes architecture optimizations over the SFM.

Figure 2-4. Crossbar Switch Fabric

3. Buffering Data

Frames must wait their turn for the central arbiter before being transmitted in shared bus architectures. Frames can also potentially be delayed when congestion occurs in a crossbar switch fabric. As a result, frames must be buffered until transmitted. Without an effective buffering scheme, frames are more likely to be dropped anytime traffic oversubscription or congestion occurs.

Buffers get used when more traffic is forwarded to a port than it can transmit. Reasons for this include the following:

• Speed mismatch between ingress and egress ports

• Multiple input ports feeding a single output port

• Half-duplex collisions on an output port

• A combination of all the above

To prevent frames from being dropped, two common types of memory management are used with Catalyst switches:

• Port buffered memory

• Shared memory

3.1 Port Buffered Memory

Switches utilizing port buffered memory, such as the Catalyst 5000, provide each Ethernet port with a certain amount of high-speed memory to buffer frames until transmitted. A disadvantage of port buffered memory is the dropping of frames when a port runs out of buffers. One method of maximizing the benefits of buffers is the use of flexible buffer sizes. Catalyst 5000 Ethernet line card port buffer memory is flexible and can create frame buffers for any frame size, making the most of the available buffer memory. Catalyst 5000 Ethernet cards that use the SAINT ASIC contain 192 KB of buffer memory per port: 24 KB for receive or input buffers and 168 KB for transmit or output buffers.

Using the 168 KB of transmit buffers, each port can create as many as 2500 64-byte buffers. With most of the buffers in use as an output queue, the Catalyst 5000 family has eliminated head-of-line blocking issues.
(You learn more about head-of-line blocking later in this chapter in the section "Congestion and Head-of-Line Blocking.") In normal operations, the input queue is never used for more than one frame, because the switching bus runs at a high speed. Figure 2-5 illustrates port buffered memory.

Figure 2-5. Port Buffered Memory

3.2 Shared Memory

Some of the earliest Cisco switches use a shared memory design for port buffering. Switches using a shared memory architecture provide all ports access to that memory at the same time in the form of shared frame or packet buffers. All ingress frames are stored in a shared memory "pool" until the egress ports are ready to transmit. Switches dynamically allocate the shared memory in the form of buffers, accommodating ports with high amounts of ingress traffic without allocating unnecessary buffers for idle ports.

The Catalyst 1200 series switch is an early example of a shared memory switch. The Catalyst 1200 supports both Ethernet and FDDI and has 4 MB of shared packet dynamic random-access memory (DRAM). Packets are handled first in, first out (FIFO).

More recent examples of switches using shared memory architectures are the Catalyst 4000 and 4500 series switches. The Catalyst 4000 with a Supervisor I utilizes 8 MB of static RAM (SRAM) as dynamic frame buffers. All frames are switched using a central processor or ASIC and are stored in packet buffers until switched. The Catalyst 4000 Supervisor I can create approximately 4000 shared packet buffers. The Catalyst 4500 Supervisor IV, for example, utilizes 16 MB of SRAM for packet buffers. Shared memory buffer sizes may vary depending on the platform, but are most often allocated in increments ranging from 64 to 256 bytes. Figure 2-6 illustrates how incoming frames are stored in 64-byte increments in shared memory until switched by the switching engine.

Figure 2-6. Shared Memory Architecture

4. Oversubscribing the Switch Fabric

Switch manufacturers use the term non-blocking to indicate that some or all of the switched ports have connections to the switch fabric equal to their line speed. For example, an 8-port Gigabit Ethernet module would require 8 Gbps of bandwidth into the switch fabric for the ports to be considered non-blocking. All but the highest-end switching platforms and configurations have the potential of oversubscribing access to the switching fabric.

Depending on the application, oversubscribing ports may or may not be an issue. For example, a 10/100/1000 48-port Gigabit Ethernet module with all ports running at 1 Gbps would require 48 Gbps of bandwidth into the switch fabric. If many or all ports were connected to high-speed file servers capable of generating consistent streams of traffic, this one line module could outstrip the bandwidth of the entire switching fabric. If the module is connected entirely to end-user workstations with lower bandwidth requirements, a card that oversubscribes the switch fabric may not significantly impact performance. Cisco offers both non-blocking and blocking configurations on various platforms, depending on bandwidth requirements. Check the specifications of each platform and the available line cards to determine the aggregate bandwidth of the connection into the switch fabric.

5. Congestion and Head-of-Line Blocking

Head-of-line blocking occurs whenever traffic waiting to be transmitted prevents or blocks traffic destined elsewhere from being transmitted. Head-of-line blocking occurs most often when multiple high-speed data sources are sending to the same destination.
In the earlier shared bus example, the central arbiter used the round-robin service approach to moving traffic from one line card to another. Ports on each line card request access to transmit via a local arbiter. In turn, each line card's local arbiter waits its turn for the central arbiter to grant access to the switching bus. Once access is granted to the transmitting line card, the central arbiter has to wait for the receiving line card to fully receive the frames before servicing the next request in line. The situation is not much different than needing to make a simple deposit at a bank having one teller and many lines, while the person being helped is conducting a complex transaction.

In Figure 2-7, a congestion scenario is created using a traffic generator. Port 1 on the traffic generator is connected to Port 1 on the switch, generating traffic at a 50 percent rate, destined for both Ports 3 and 4. Port 2 on the traffic generator is connected to Port 2 on the switch, generating traffic at a 100 percent rate, destined only for Port 4. This situation creates congestion for traffic destined to be forwarded by Port 4 on the switch because traffic equal to 150 percent of the forwarding capabilities of that port is being sent. Without proper buffering and forwarding algorithms, traffic destined to be transmitted by Port 3 on the switch may have to wait until the congestion on Port 4 clears.

Figure 2-7. Head-of-Line Blocking

Head-of-line blocking can also be experienced with crossbar switch fabrics because many, if not all, line cards have high-speed connections into the switch fabric. Multiple line cards may attempt to create a connection to a line card that is already busy and must wait for the receiving line card to become free before transmitting. In this case, data destined for a different line card that is not busy is blocked by the frames at the head of the line.

Catalyst switches use a number of techniques to prevent head-of-line blocking; one important example is the use of per-port buffering. Each port maintains a small ingress buffer and a larger egress buffer. Larger output buffers (64 KB to 512 KB shared) allow frames to be queued for transmit during periods of congestion. During normal operations, only a small input queue is necessary because the switching bus is servicing frames at a very high speed. In addition to queuing during congestion, many models of Catalyst switches are capable of separating frames into different input and output queues, providing preferential treatment or priority queuing for sensitive traffic such as voice. Chapter 8 will discuss queuing in greater detail.

6. Forwarding Data

Regardless of the type of switch fabric, a decision on which ports should forward a frame and which should flush or discard the frame must occur. This decision can be made using only the information found at Layer 2 (source/destination MAC address), or on other factors such as Layer 3 (IP) and Layer 4 (port). Each switching platform supports various types of ASICs responsible for making the intelligent switching decisions. Each Catalyst switch creates a header or label for each packet, and forwarding decisions are based on this header or label. Chapter 3 will include a more detailed discussion of how various platforms make forwarding decisions and ultimately forward data.

7. Summary

Although a wide variety of different approaches exist to optimize the switching of data, many of the core concepts are closely related.
The Cisco Catalyst line of switches focuses on the use of shared bus, crossbar switching, and combinations of the two, depending on the platform, to achieve very high-speed switching solutions. High-speed switching ASICs use shared and per-port buffers to reduce congestion and prevent head-of-line blocking.
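Head-of-line blocking, discussed in the congestion section above, can be demonstrated with two small queueing arrangements: a single FIFO input queue stalls traffic bound for an idle port behind a frame waiting for a congested port, while per-output queues do not. The port numbers loosely mirror the Figure 2-7 scenario; everything else is invented for illustration.

from collections import deque

frames = [("P4", "a"), ("P3", "b"), ("P4", "c")]   # (egress port, payload)
busy = {"P4"}                                      # P4 is congested this cycle

# Single FIFO: the head frame for busy P4 blocks frame "b" bound for idle P3.
fifo = deque(frames)
sent_fifo = []
while fifo and fifo[0][0] not in busy:
    sent_fifo.append(fifo.popleft())
print("single FIFO sent this cycle:", sent_fifo)           # -> [] (nothing gets out)

# Per-output queues: traffic for P3 is not blocked by congestion on P4.
per_output = {}
for egress, payload in frames:
    per_output.setdefault(egress, deque()).append(payload)
sent_per_output = {p: list(q) for p, q in per_output.items() if p not in busy}
print("per-output queues sent this cycle:", sent_per_output)  # -> {'P3': ['b']}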
Computer Science Foreign Literature Translation: Wireless LAN Technology
WIRELESS LAN

In just the past few years, wireless LANs have come to occupy a significant niche in the local area network market. Increasingly, organizations are finding that wireless LANs are an indispensable adjunct to traditional wired LANs, as they satisfy requirements for mobility, relocation, ad hoc networking, and coverage of locations difficult to wire. As the name suggests, a wireless LAN is one that makes use of a wireless transmission medium. Until relatively recently, wireless LANs were little used; the reasons for this included high prices, low data rates, occupational safety concerns, and licensing requirements. As these problems have been addressed, the popularity of wireless LANs has grown rapidly.

In this section, we first look at the requirements for and advantages of wireless LANs, and then preview the key approaches to wireless LAN implementation.

Wireless LAN Applications

There are four application areas for wireless LANs: LAN extension, cross-building interconnect, nomadic access, and ad hoc networks. Let us consider each of these in turn.

LAN Extension

Early wireless LAN products, introduced in the late 1980s, were marketed as substitutes for traditional wired LANs. A wireless LAN saves the cost of the installation of LAN cabling and eases the task of relocation and other modifications to network structure. However, this motivation for wireless LANs was overtaken by events. First, as awareness of the need for LANs became greater, architects designed new buildings to include extensive prewiring for data applications. Second, with advances in data transmission technology, there has been an increasing reliance on twisted pair cabling for LANs and, in particular, Category 3 unshielded twisted pair. Most older buildings are already wired with an abundance of Category 3 cable. Thus, the use of a wireless LAN to replace wired LANs has not happened to any great extent.

However, in a number of environments, there is a role for the wireless LAN as an alternative to a wired LAN. Examples include buildings with large open areas, such as manufacturing plants, stock exchange trading floors, and warehouses; historical buildings with insufficient twisted pair and in which drilling holes for new wiring is prohibited; and small offices where installation and maintenance of wired LANs is not economical. In all of these cases, a wireless LAN provides an effective and more attractive alternative. In most of these cases, an organization will also have a wired LAN to support servers and some stationary workstations. For example, a manufacturing facility typically has an office area that is separate from the factory floor but which must be linked to it for networking purposes. Therefore, typically, a wireless LAN will be linked into a wired LAN on the same premises. Thus, this application area is referred to as LAN extension.

Cross-Building Interconnect

Another use of wireless LAN technology is to connect LANs in nearby buildings, be they wired or wireless LANs. In this case, a point-to-point wireless link is used between two buildings. The devices so connected are typically bridges or routers. This single point-to-point link is not a LAN per se, but it is usual to include this application under the heading of wireless LAN.

Nomadic Access

Nomadic access provides a wireless link between a LAN hub and a mobile data terminal equipped with an antenna, such as a laptop computer or notepad computer.
One example of the utility of such a connection is to enable an employee returning from a trip to transfer data from a personal portable computer to a server in the office. Nomadic access is also useful in an extended environment such as a campus or a business operating out of a cluster of buildings. In both of these cases, users may move around with their portable computers and may wish to access the servers on a wired LAN from various locations.

Ad Hoc Networking

An ad hoc network is a peer-to-peer network (no centralized server) set up temporarily to meet some immediate need. For example, a group of employees, each with a laptop or palmtop computer, may convene in a conference room for a business or classroom meeting. The employees link their computers in a temporary network just for the duration of the meeting.

Wireless LAN Requirements

A wireless LAN must meet the same sort of requirements typical of any LAN, including high capacity, ability to cover short distances, full connectivity among attached stations, and broadcast capability. In addition, there are a number of requirements specific to the wireless LAN environment. The following are among the most important requirements for wireless LANs:

Throughput. The medium access control protocol should make as efficient use as possible of the wireless medium to maximize capacity.

Number of nodes. Wireless LANs may need to support hundreds of nodes across multiple cells.

Connection to backbone LAN. In most cases, interconnection with stations on a wired backbone LAN is required. For infrastructure wireless LANs, this is easily accomplished through the use of control modules that connect to both types of LANs. There may also need to be accommodation for mobile users and ad hoc wireless networks.

Service area. A typical coverage area for a wireless LAN may be up to a 300 to 1000 foot diameter.

Battery power consumption. Mobile workers use battery-powered workstations that need to have a long battery life when used with wireless adapters. This suggests that a MAC protocol that requires mobile nodes to constantly monitor access points or to engage in frequent handshakes with a base station is inappropriate.

Transmission robustness and security. Unless properly designed, a wireless LAN may be interference-prone and easily eavesdropped upon. The design of a wireless LAN must permit reliable transmission even in a noisy environment and should provide some level of security from eavesdropping.

Collocated network operation. As wireless LANs become more popular, it is quite likely for two of them to operate in the same area or in some area where interference between the LANs is possible. Such interference may thwart the normal operation of a MAC algorithm and may allow unauthorized access to a particular LAN.

License-free operation. Users would prefer to buy and operate wireless LAN products without having to secure a license for the frequency band used by the LAN.

Handoff/roaming. The MAC protocol used in the wireless LAN should enable mobile stations to move from one cell to another.

Dynamic configuration. The MAC addressing and network management aspects of the LAN should permit dynamic and automated addition, deletion, and relocation of end systems without disruption to other users.

Physical Medium Specification

Three physical media are defined in the current 802.11 standard:

Infrared at 1 Mbps and 2 Mbps operating at a wavelength between 850 and 950 nm.

Direct-sequence spread spectrum operating in the 2.4-GHz ISM band.
Up to 7 channels, each with a data rate of 1 Mbps or 2 Mbps, can be used.

Frequency-hopping spread spectrum operating in the 2.4-GHz ISM band. The details of this option are for further study.

Wireless LAN Technology

Wireless LANs are generally categorized according to the transmission technique that is used. All current wireless LAN products fall into one of the following categories:

Infrared (IR) LANs. An individual cell of an IR LAN is limited to a single room, as infrared light does not penetrate opaque walls.

Spread Spectrum LANs. This type of LAN makes use of spread spectrum transmission technology. In most cases, these LANs operate in the ISM (Industrial, Scientific, and Medical) bands, so that no FCC licensing is required for their use in the U.S.

Narrowband Microwave. These LANs operate at microwave frequencies but do not use spread spectrum. Some of these products operate at frequencies that require FCC licensing, while others use one of the unlicensed ISM bands.

A set of wireless LAN standards has been developed by the IEEE 802.11 committee. The terminology and some of the specific features of 802.11 are unique to this standard and are not reflected in all commercial products. However, it is useful to be familiar with the standard, as its features are representative of required wireless LAN capabilities.

The smallest building block of a wireless LAN is a basic service set (BSS), which consists of some number of stations executing the same MAC protocol and competing for access to the same shared medium. A basic service set may be isolated, or it may connect to a backbone distribution system through an access point. The access point functions as a bridge. The MAC protocol may be fully distributed or controlled by a central coordination function housed in the access point. The basic service set generally corresponds to what is referred to as a cell in the literature. An extended service set (ESS) consists of two or more basic service sets interconnected by a distribution system. Typically, the distribution system is a wired backbone LAN. The extended service set appears as a single logical LAN to the logical link control (LLC) level.

The standard defines three types of stations, based on mobility:

No-transition. A station of this type is either stationary or moves only within the direct communication range of the communicating stations of a single BSS.

BSS-transition. This is defined as a station movement from one BSS to another BSS within the same ESS. In this case, delivery of data to the station requires that the addressing capability be able to recognize the new location of the station.

ESS-transition. This is defined as a station movement from a BSS in one ESS to a BSS within another ESS. This case is supported only in the sense that the station can move. Maintenance of upper-layer connections supported by 802.11 cannot be guaranteed. In fact, disruption of service is likely to occur.

The 802.11 working group considered two types of proposals for a MAC algorithm: distributed access protocols which, like CSMA/CD, distribute the decision to transmit over all the nodes using a carrier-sense mechanism; and centralized access protocols, which involve regulation of transmission by a centralized decision maker. A distributed access protocol makes sense for an ad hoc network of peer workstations and may also be attractive in other wireless LAN configurations that consist primarily of bursty traffic.
A centralized access protocol is natural for configurations in which a number of wireless stations are interconnected with each other and with some sort of base station that attaches to a backbone wired LAN; it is especially useful if some of the data is time-sensitive or high priority.

The end result of the 802.11 work is a MAC algorithm called DFWMAC (distributed foundation wireless MAC) that provides a distributed access-control mechanism with an optional centralized control built on top of that. Figure 13.20 illustrates the architecture. The lower sublayer of the MAC layer is the distributed coordination function (DCF). DCF uses a contention algorithm to provide access to all traffic. Ordinary asynchronous traffic directly uses DCF. The point coordination function (PCF) is a centralized MAC algorithm used to provide contention-free service. PCF is built on top of DCF and exploits features of DCF to assure access for its users. Let us consider these two sublayers in turn.

Distributed Coordination Function

The DCF sublayer makes use of a simple CSMA algorithm. If a station has a MAC frame to transmit, it listens to the medium. If the medium is idle, the station may transmit; otherwise, the station must wait until the current transmission is complete before transmitting. The DCF does not include a collision-detection function (i.e., CSMA/CD) because collision detection is not practical on a wireless network. The dynamic range of the signals on the medium is very large, so that a transmitting station cannot effectively distinguish incoming weak signals from noise and the effects of its own transmission. To ensure the smooth and fair functioning of this algorithm, DCF includes a set of delays that amounts to a priority scheme. Let us start by considering a single delay known as an interframe space (IFS). In fact, there are three different IFS values, but the algorithm is best explained by initially ignoring this detail. Using an IFS, the rules for CSMA access are as follows:

1. A station with a frame to transmit senses the medium. If the medium is idle, the station waits to see if the medium remains idle for a time equal to IFS, and, if this is so, the station may immediately transmit.

2. If the medium is busy (either because the station initially finds the medium busy or because the medium becomes busy during the IFS idle time), the station defers transmission and continues to monitor the medium until the current transmission is over.

3. Once the current transmission is over, the station delays another IFS. If the medium remains idle for this period, then the station backs off using a binary exponential backoff scheme and again senses the medium. If the medium is still idle, the station may transmit.

Point Coordination Function

PCF is an alternative access method implemented on top of the DCF. The operation consists of polling with the centralized polling master (point coordinator). The point coordinator makes use of PIFS when issuing polls. Because PIFS is smaller than DIFS, the point coordinator can seize the medium and lock out all asynchronous traffic while it issues polls and receives responses.

As an extreme, consider the following possible scenario. A wireless network is configured so that a number of stations with time-sensitive traffic are controlled by the point coordinator while the remaining traffic, using CSMA, contends for access. The point coordinator could issue polls in a round-robin fashion to all stations configured for polling.
When a poll is issued, the polled station may respond using SIFS. If the point coordinator receives a response, it issues another poll using PIFS. If no response is received during the expected turnaround time, the coordinator issues a poll.

If the discipline of the preceding paragraph were implemented, the point coordinator would lock out all asynchronous traffic by repeatedly issuing polls. To prevent this situation, an interval known as the superframe is defined. During the first part of this interval, the point coordinator issues polls in a round-robin fashion to all stations configured for polling. The point coordinator then idles for the remainder of the superframe, allowing a contention period for asynchronous access.

At the beginning of a superframe, the point coordinator may optionally seize control and issue polls for a given period of time. This interval varies because of the variable frame size issued by responding stations. The remainder of the superframe is available for contention-based access. At the end of the superframe interval, the point coordinator contends for access to the medium using PIFS. If the medium is idle, the point coordinator gains immediate access, and a full superframe period follows. However, the medium may be busy at the end of a superframe. In this case, the point coordinator must wait until the medium is idle to gain access; this results in a foreshortened superframe period for the next cycle.
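The superframe discipline described above can be sketched as a simple polling loop: the point coordinator polls its configured stations first and leaves whatever time remains for contention traffic. Station names, the response probability, and the time budget are invented for the example.

import random

polled_stations = ["cam-1", "phone-7", "sensor-3"]   # stations configured for polling

def superframe(budget=6):
    used = 0
    for station in polled_stations:                  # round-robin polling using PIFS
        cost = 2 if random.random() < 0.8 else 1     # poll plus (possible) SIFS response
        if used + cost > budget:
            break                                    # keep room for the contention period
        used += cost
        print("polled", station)
    print("contention (DCF) period length:", budget - used)

superframe()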
5G Wireless Communication Networks: Chinese-English Foreign Literature Translation
(The document contains the English original and the Chinese translation.)

Translation: Cellular Architecture and Key Technologies for 5G Wireless Communication Networks

Abstract: Fourth-generation (4G) wireless communication systems have been deployed, or are about to be deployed, in many countries. However, with the explosive growth of wireless mobile devices and services, there are still challenges that 4G cannot accommodate, such as the spectrum crisis and high energy consumption. Wireless system designers face a continuously growing demand for the high data rates and mobility required by new wireless applications, and have therefore begun research on fifth-generation (5G) wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and we discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communication. Future challenges facing these potential technologies are also discussed.

Introduction: The rational use of innovation in information and communication technology (ICT) has become increasingly important to improving the world economy. Wireless communication networks are perhaps the most critical element in the global ICT strategy, underpinning many other industries; they form one of the fastest growing and most dynamic sectors in the world. The European Mobile Observatory (EMO) reported that in 2010 the mobile communications industry generated total revenue of 174 billion euros, thereby surpassing the aerospace and pharmaceutical industries. The development of wireless technologies has greatly improved people's ability to communicate and live, in both business operations and social functions. The remarkable achievements of wireless mobile communication are reflected in the rapid pace of technological innovation. From the first appearance of second-generation (2G) mobile communication systems in 1991 to the launch of third-generation (3G) systems in 2001, wireless mobile networks have been transformed from purely technical systems into networks that can carry large amounts of multimedia content. 4G wireless systems were designed to fulfill the requirements of IMT-Advanced (IMT-A), using IP for all services. In 4G systems, an advanced radio interface is used, with orthogonal frequency-division multiplexing (OFDM), multiple-input multiple-output (MIMO), and link adaptation technologies. 4G wireless networks can support data rates of up to 1 Gb/s for low mobility, such as nomadic or local wireless access, and up to 100 Mb/s for high mobility, such as mobile access. LTE and its extension, LTE-Advanced (LTE-A), as practical 4G systems, have recently been deployed, or will soon be deployed, around the world. However, the number of users subscribing to mobile broadband systems is still increasing dramatically every year.
English Literature Translation (on ZigBee)
English Literature Translation

1.1 Standards

Wireless sensor standards have been developed with the key design requirement of low power consumption. A standard defines the functions and protocols necessary for sensor nodes to interface with a variety of networks. Some of these standards include IEEE 802.15.4, ZigBee, WirelessHART, ISA100.11a, IETF 6LoWPAN, IEEE 802.15.3, and Wibree. The following paragraphs describe these standards in more detail.

IEEE 802.15.4: IEEE 802.15.4 [37] is the proposed standard for low rate wireless personal area networks (LR-WPANs). IEEE 802.15.4 focuses on low cost of deployment, low complexity, and low power consumption. IEEE 802.15.4 is designed for wireless sensor applications that require short range communication to maximize battery life. The standard allows the formation of the star and peer-to-peer topologies for communication between network devices. Devices in the star topology communicate with a central controller, while in the peer-to-peer topology ad hoc and self-configuring networks can be formed. IEEE 802.15.4 devices are designed to support the physical and data-link layer protocols. The physical layer supports the 868/915 MHz low bands and the 2.4 GHz high band. The MAC layer controls access to the radio channel using the CSMA-CA mechanism. The MAC layer is also responsible for validating frames, frame delivery, network interface, network synchronization, device association, and secure services. Wireless sensor applications using IEEE 802.15.4 include residential, industrial, and environmental monitoring, control, and automation.

ZigBee: ZigBee [38,39] defines the higher layer communication protocols built on the IEEE 802.15.4 standards for LR-WPANs. ZigBee is a simple, low cost, and low power wireless communication technology used in embedded applications. ZigBee devices can form mesh networks connecting hundreds to thousands of devices together. ZigBee devices use very little power and can operate on a cell battery for many years. There are three types of ZigBee devices: ZigBee coordinator, ZigBee router, and ZigBee end device. The ZigBee coordinator initiates network formation, stores information, and can bridge networks together. ZigBee routers link groups of devices together and provide multi-hop communication across devices. ZigBee end devices consist of the sensors, actuators, and controllers that collect data and communicate only with the router or the coordinator. The ZigBee standard was publicly available as of June 2005.

WirelessHART: The WirelessHART [40,41] standard provides a wireless network communication protocol for process measurement and control applications. The standard is based on IEEE 802.15.4 for low power 2.4 GHz operation. WirelessHART is compatible with all existing devices, tools, and systems. WirelessHART is reliable, secure, and energy efficient. It supports mesh networking, channel hopping, and time-synchronized messaging. Network communication is secure with encryption, verification, authentication, and key management. Power management options enable the wireless devices to be more energy efficient. WirelessHART is designed to support mesh, star, and combined network topologies.
A WirelessHART network consists of wireless field devices, gateways, a process automation controller, host applications, and a network manager. Wireless field devices are connected to process or plant equipment. Gateways enable the communication between the wireless field devices and the host applications. The process automation controller serves as a single controller for continuous processes. The network manager configures the network and schedules communication between devices; it also manages the routing and network traffic. The network manager can be integrated into the gateway, the host application, or the process automation controller. The WirelessHART standard was released to the industry in September 2007 and will soon be available in commercial products.

ISA100.11a: The ISA100.11a [42] standard is designed for low data rate wireless monitoring and process automation applications. It defines the specifications for the OSI layers, security, and system management. The standard focuses on low energy consumption, scalability, infrastructure, robustness, and interoperability with other wireless devices. ISA100.11a networks use only the 2.4 GHz radio band and channel hopping to increase reliability and minimize interference. It offers both mesh and star network topologies. ISA100.11a also provides simple, flexible, and scalable security functionality.

6LoWPAN: IPv6-based Low-power Wireless Personal Area Networks [43-45] enable IPv6 packet communication over an IEEE 802.15.4-based network. Low-power devices can communicate directly with IP devices using IP-based protocols. Using 6LoWPAN, low-power devices have all the benefits of IP communication and management. The 6LoWPAN standard provides an adaptation layer, a new packet format, and address management. Because IPv6 packet sizes are much larger than the frame size of IEEE 802.15.4, an adaptation layer is used. The adaptation layer carries out the functionality for header compression; with header compression, smaller packets are created to fit into an IEEE 802.15.4 frame. The address management mechanism handles the forming of device addresses for communication. 6LoWPAN is designed for applications with low data rate devices that require Internet communication.

IEEE 802.15.3: IEEE 802.15.3 [46] is a physical and MAC layer standard for high data rate WPANs. It is designed to support real-time multimedia streaming of video and music. IEEE 802.15.3 operates on a 2.4 GHz radio and has data rates starting from 11 Mbps up to 55 Mbps. The standard uses time division multiple access (TDMA) to ensure quality of service. It supports both synchronous and asynchronous data transfer and addresses power consumption, data rate scalability, and frequency performance. The standard is used in devices such as wireless speakers, portable video electronics, and wireless connectivity for gaming, cordless phones, printers, and televisions.

Wibree: Wibree [47] is a wireless communication technology designed for low power consumption, short-range communication, and low-cost devices. Wibree allows communication between small battery-powered devices and Bluetooth devices. Small battery-powered devices include watches, wireless keyboards, and sports sensors, which connect to host devices such as personal computers or cellular phones. Wibree operates at 2.4 GHz and has a data rate of 1 Mbps. The linking distance between devices is 5-10 m. Wibree is designed to work with Bluetooth; Bluetooth with Wibree makes devices smaller and more energy efficient. Bluetooth-Wibree utilizes the existing Bluetooth RF and enables ultra-low power consumption. Wibree was released publicly in October 2006.
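Several of the standards above (ZigBee, WirelessHART, ISA100.11a, 6LoWPAN) build on the IEEE 802.15.4 MAC, which arbitrates access to the radio channel with CSMA-CA. The following Python sketch shows the general shape of the unslotted CSMA-CA backoff loop; the constants mirror commonly cited 802.15.4 defaults but should be read as illustrative, and channel_is_clear / wait_backoff_periods are assumed helper callables rather than part of any standard API.

```python
import random

# Commonly cited 802.15.4 defaults; treat these values as illustrative assumptions.
MAC_MIN_BE = 3             # minimum backoff exponent
MAC_MAX_BE = 5             # maximum backoff exponent
MAC_MAX_CSMA_BACKOFFS = 4  # attempts before reporting channel-access failure

def unslotted_csma_ca(channel_is_clear, wait_backoff_periods):
    """Attempt channel access for one frame using unslotted CSMA-CA.

    channel_is_clear: returns True when clear channel assessment (CCA) passes.
    wait_backoff_periods: sleeps for the given number of backoff periods.
    Returns True if the frame may be transmitted, False on channel-access failure.
    """
    nb, be = 0, MAC_MIN_BE                                    # NB = attempts, BE = backoff exponent
    while True:
        wait_backoff_periods(random.randint(0, 2 ** be - 1))  # random delay before sensing
        if channel_is_clear():                                 # CCA passed: hand the frame to the radio
            return True
        nb += 1                                                # channel busy: widen the backoff window
        be = min(be + 1, MAC_MAX_BE)
        if nb > MAC_MAX_CSMA_BACKOFFS:
            return False                                       # give up and report failure upward

# Toy run against a channel that is busy about 30% of the time.
if __name__ == "__main__":
    ok = unslotted_csma_ca(lambda: random.random() > 0.3, lambda n: None)
    print("transmit" if ok else "channel access failure")
```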
1.2 Introduction
Wireless sensor networks (WSNs) have gained worldwide attention in recent years, particularly with the proliferation of Micro-Electro-Mechanical Systems (MEMS) technology, which has facilitated the development of smart sensors. These sensors are small, with limited processing and computing resources, and they are inexpensive compared to traditional sensors. These sensor nodes can sense, measure, and gather information from the environment and, based on some local decision process, they can transmit the sensed data to the user.

Smart sensor nodes are low-power devices equipped with one or more sensors, a processor, memory, a power supply, a radio, and an actuator. A variety of mechanical, thermal, biological, chemical, optical, and magnetic sensors may be attached to the sensor node to measure properties of the environment. Since the sensor nodes have limited memory and are typically deployed in difficult-to-access locations, a radio is implemented for wireless communication to transfer the data to a base station (e.g., a laptop, a personal handheld device, or an access point to a fixed infrastructure). Battery is the main power source in a sensor node. A secondary power supply that harvests power from the environment, such as solar panels, may be added to the node depending on the suitability of the environment where the sensor will be deployed. Depending on the application and the type of sensors used, actuators may be incorporated in the sensors.

A WSN typically has little or no infrastructure. It consists of a number of sensor nodes (a few tens to thousands) working together to monitor a region to obtain data about the environment. There are two types of WSNs: structured and unstructured. An unstructured WSN is one that contains a dense collection of sensor nodes. Sensor nodes may be deployed in an ad hoc manner (i.e., randomly placed) into the field. Once deployed, the network is left unattended to perform monitoring and reporting functions. In an unstructured WSN, network maintenance such as managing connectivity and detecting failures is difficult since there are so many nodes. In a structured WSN, all or some of the sensor nodes are deployed in a pre-planned manner. The advantage of a structured network is that fewer nodes can be deployed with lower network maintenance and management cost; fewer nodes suffice because they are placed at specific locations to provide coverage, whereas ad hoc deployment can leave uncovered regions.

WSNs have great potential for many applications in scenarios such as military target tracking and surveillance [2,3], natural disaster relief [4], biomedical health monitoring [5,6], and hazardous environment exploration and seismic sensing [7]. In military target tracking and surveillance, a WSN can assist in intrusion detection and identification. Specific examples include spatially correlated and coordinated troop and tank movements. With natural disasters, sensor nodes can sense and detect the environment to forecast disasters before they occur.
In biomedical applications, surgical implants of sensors can help monitor a patient's health. For seismic sensing, ad hoc deployment of sensors along a volcanic area can detect the development of earthquakes and eruptions.

Unlike traditional networks, a WSN has its own design and resource constraints. Resource constraints include a limited amount of energy, short communication range, low bandwidth, and limited processing and storage in each node. Design constraints are application dependent and are based on the monitored environment. The environment plays a key role in determining the size of the network, the deployment scheme, and the network topology. The size of the network varies with the monitored environment: for indoor environments, fewer nodes are required to form a network in a limited space, whereas outdoor environments may require more nodes to cover a larger area. An ad hoc deployment is preferred over pre-planned deployment when the environment is inaccessible by humans or when the network is composed of hundreds to thousands of nodes. Obstructions in the environment can also limit communication between nodes, which in turn affects the network connectivity (or topology).

Research in WSNs aims to meet the above constraints by introducing new design concepts, creating or improving existing protocols, building new applications, and developing new algorithms. In this study, we present a top-down approach to survey the different protocols and algorithms proposed in recent years. Our work differs from other surveys as follows:
•While our survey is similar to [1], our focus has been to survey the more recent literature.
•We address the issues in a WSN both at the individual sensor node level as well as at a group level.
•We survey the current provisioning, management, and control issues in WSNs. These include issues such as localization, coverage, synchronization, network security, and data aggregation and compression.
•We compare and contrast the various types of wireless sensor networks.
•Finally, we provide a summary of the current sensor technologies.

The remainder of this paper is organized as follows: Section 2 gives an overview of the key issues in a WSN. Section 3 compares the different types of sensor networks. Section 4 discusses several applications of WSNs. Section 5 presents issues in operating system support, supporting standards, storage, and physical testbeds. Section 6 summarizes the control and management issues. Section 7 classifies and compares the proposed physical layer, data-link layer, network layer, and transport layer protocols. Section 8 concludes this paper. Appendix A compares the existing types of WSNs. Appendix B summarizes the sensor technologies. Appendix C compares sensor applications with the protocol stack.

1.3 Overview of key issues
Current state-of-the-art sensor technology provides a solution to design and develop many types of wireless sensor applications. A summary of existing sensor technologies is provided in Appendix A. Available sensors in the market include generic (multi-purpose) nodes and gateway (bridge) nodes. A generic (multi-purpose) sensor node's task is to take measurements from the monitored environment. It may be equipped with a variety of devices that can measure various physical attributes such as light, temperature, humidity, barometric pressure, velocity, acceleration, acoustics, magnetic field, etc. Gateway (bridge) nodes gather data from generic sensors and relay them to the base station.
Gateway nodes have higher processing capability, battery power, and transmission (radio) range. A combination of generic and gateway nodes is typically deployed to form a WSN.

To enable wireless sensor applications using sensor technologies, the range of tasks can be broadly classified into three groups as shown in Fig. 1. The first group is the system. Each sensor node is an individual system; in order to support different application software on a sensor system, development of new platforms, operating systems, and storage schemes is needed. The second group is communication protocols, which enable communication between the application and the sensors; they also enable communication between the sensor nodes. The last group is services, which are developed to enhance the application and to improve system performance and network efficiency.

From application requirements and network management perspectives, it is important that sensor nodes are capable of self-organizing. That is, the sensor nodes can organize themselves into a network and subsequently are able to control and manage themselves efficiently. As sensor nodes are limited in power, processing capacity, and storage, new communication protocols and management services are needed to meet these requirements. The communication protocol consists of five standard protocol layers for packet switching: application layer, transport layer, network layer, data-link layer, and physical layer. In this survey, we study how protocols at different layers address network dynamics and energy efficiency. Functions such as localization, coverage, storage, synchronization, security, and data aggregation and compression are explored as sensor network services.

Implementation of protocols at different layers in the protocol stack can significantly affect energy consumption, end-to-end delay, and system efficiency. It is important to optimize communication and minimize energy usage. Traditional networking protocols do not work well in a WSN since they are not designed to meet these requirements. Hence, new energy-efficient protocols have been proposed for all layers of the protocol stack. These protocols employ cross-layer optimization by supporting interactions across the protocol layers; specifically, protocol state information at a particular layer is shared across all the layers to meet the specific requirements of the WSN.

As sensor nodes operate on limited battery power, energy usage is a very important concern in a WSN, and there has been significant research focus on harvesting and minimizing energy. When a sensor node is depleted of energy, it will die and disconnect from the network, which can significantly impact the performance of the application. Sensor network lifetime depends on the number of active nodes and the connectivity of the network, so energy must be used efficiently in order to maximize the network lifetime.

Energy harvesting involves nodes replenishing their energy from an energy source. Potential energy sources include solar cells [8,9], vibration [10], fuel cells, acoustic noise, and a mobile supplier [11]. In terms of harvesting energy from the environment [12], the solar cell is currently the mature technique for harvesting energy from light. There is also work on using a mobile energy supplier, such as a robot, to replenish energy.
The robots would be responsible for charging themselves with energy and then delivering energy to the nodes.

Energy conservation in a WSN maximizes network lifetime and is addressed through efficient reliable wireless communication, intelligent sensor placement to achieve adequate coverage, security and efficient storage management, and through data aggregation and data compression. These approaches aim to satisfy both the energy constraint and the quality of service (QoS) requirements of the application. For reliable communication, services such as congestion control, active buffer monitoring, acknowledgements, and packet-loss recovery are necessary to guarantee reliable packet delivery. Communication strength depends on the placement of sensor nodes: sparse sensor placement may result in long-range transmission and higher energy usage, while dense sensor placement may result in short-range transmission and less energy consumption. Coverage is interrelated with sensor placement. The total number of sensors in the network and their placement determine the degree of network coverage. Depending on the application, a higher degree of coverage may be required to increase the accuracy of the sensed data. In this survey, we review new protocols and algorithms developed in these areas.
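Because network lifetime ultimately hinges on each node's energy budget, a common back-of-the-envelope exercise relates battery capacity, duty cycle, and current draw to expected node lifetime. The sketch below performs that estimate; every number in it (battery capacity, sleep and active currents, duty cycle) is a hypothetical value chosen for illustration, not a figure taken from the surveyed platforms.

```python
def node_lifetime_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Rough sensor-node lifetime estimate from average current draw.

    capacity_mah: usable battery capacity in mAh
    active_ma, sleep_ma: current draw while sensing/transmitting vs. sleeping
    duty_cycle: fraction of time the node is active (0.0 - 1.0)
    """
    avg_ma = duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma / 24.0   # mAh / mA = hours; convert to days

# Hypothetical node: 2000 mAh battery, 20 mA active, 0.02 mA asleep, 1% duty cycle.
print(round(node_lifetime_days(2000, 20.0, 0.02, 0.01), 1), "days")  # roughly 379 days
```

The same arithmetic makes the energy-conservation levers discussed above concrete: lowering the duty cycle (data aggregation, sleep modes) or the active current (shorter-range transmission) directly stretches the estimated lifetime.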
Cooperative Diversity in Wireless Networks (Literature Translation)
Cooperative Diversity in Wireless Networks: Efficient Protocols and Outage Behavior
Abstract: We develop and analyze low-complexity cooperative diversity protocols that combat fading induced by multipath propagation in wireless networks. The underlying technique exploits the spatial diversity made available by cooperating terminals that relay signals for one another. We outline several cooperation strategies, including fixed relaying schemes such as amplify-and-forward and decode-and-forward, selection relaying schemes that adapt based on channel measurements between the cooperating terminals, and incremental relaying schemes that adapt based on limited feedback from the destination terminal. We characterize performance in the high-SNR regime in terms of outage events and the associated outage probabilities, which measure the robustness of the transmissions to fading. Except for fixed decode-and-forward, all of the cooperative diversity protocols are efficient in the sense that they achieve full diversity (i.e., second-order diversity in the case of two terminals), and in certain regimes they are close to optimum (within about 1.5 dB). Thus, using distributed antennas, we can provide the powerful benefits of spatial diversity without the need for physical antenna arrays, though at the cost of reduced spectral efficiency due to half-duplex operation and possibly additional receive hardware. Cooperative diversity is applicable to any wireless setting, including cellular mobile communications and wireless ad hoc networks, where space constraints preclude the use of physical arrays, and the performance analysis indicates that these protocols can reduce energy consumption.
Index Terms: diversity techniques, fading channels, outage probability, relay channels, user cooperation, wireless networks.

I. INTRODUCTION
In wireless networks, signal fading caused by multipath propagation is a particularly severe channel impairment that can be mitigated through the use of diversity.
II. SYSTEM MODEL
In the wireless channel model of Fig. 1, narrowband transmissions suffer from frequency-nonselective fading and additive noise. Our analysis in Section IV focuses on slow fading, in which the delay constraint is on the order of the channel coherence time, and performance is measured in terms of outage probability in order to isolate the benefits of spatial diversity. Although our cooperative protocols extend naturally to wideband and highly mobile scenarios, in which the fading becomes frequency-selective and time-selective respectively, their potential impact is reduced when the system can exploit these other forms of diversity.

A. Medium Access
As in current wireless networks such as cellular systems and wireless LANs, we divide the available bandwidth into orthogonal channels and allocate these channels to the transmitting terminals, which allows our protocols to be employed in existing networks. A byproduct of this choice is that we can treat the multiple-access (single receiver) and interference (multiple receivers) settings of Section I-A together, as a pair of relayed transmissions toward the intended receivers. In all of our cooperative protocols, the transmitting terminals must also process the signals they receive; however, practical implementation constraints prevent the terminals from operating in full-duplex mode, that is, from transmitting and receiving at the same time in the same frequency band.
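The half-duplex, two-terminal setting just described is where the second-order diversity claimed in the abstract appears. As a toy numerical illustration, the Monte Carlo sketch below compares the outage probability of direct transmission against an idealized two-branch cooperative scheme over independent unit-mean Rayleigh fading links, including the half-duplex rate penalty; it is a simplified stand-in, not the exact protocols or outage expressions analyzed in the paper.

```python
import math
import random

def outage_probs(snr_db, rate=1.0, trials=200_000):
    """Monte Carlo outage estimates over unit-mean Rayleigh fading.

    Direct link:  outage if      log2(1 + SNR*|h_sd|^2)             < rate
    Cooperative:  outage if 0.5*log2(1 + SNR*(|h_sd|^2 + |h_rd|^2)) < rate
    (idealized two-branch combining; the 0.5 is the half-duplex penalty)
    """
    snr = 10 ** (snr_db / 10)
    direct = coop = 0
    for _ in range(trials):
        h_sd = random.expovariate(1.0)     # |h|^2 of the source-destination link
        h_rd = random.expovariate(1.0)     # |h|^2 of the relay-destination link
        if math.log2(1 + snr * h_sd) < rate:
            direct += 1
        if 0.5 * math.log2(1 + snr * (h_sd + h_rd)) < rate:
            coop += 1
    return direct / trials, coop / trials

for snr_db in (10, 20, 30):                # high-SNR cooperative estimates are noisy;
    d, c = outage_probs(snr_db)            # increase `trials` for tighter figures
    print(f"{snr_db} dB: direct {d:.5f}  cooperative {c:.6f}")
```

As the SNR grows, the direct-link outage falls roughly as 1/SNR while the two-branch curve falls roughly as 1/SNR^2, which is the diversity-order-two behavior the protocols above are designed to achieve.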
IEEE 802.11-2020 Translated Edition: Overview
Abstract: This series of articles is a Chinese translation of the IEEE 802.11-2020 standard; for the original of this article, refer to Clause 1 of the English standard. This clause explains the scope and purpose of the standard and describes how certain words are used throughout the standard.
1 Scope
The scope of this standard is to define one medium access control (MAC) and several physical layer (PHY) specifications for wireless connectivity for fixed, portable, and moving stations (STAs) within a local area.

2 Purpose
The purpose of this standard is to provide wireless connectivity for fixed, portable, and moving stations within a local area. The standard also offers regulatory bodies a means of standardizing access to one or more frequency bands for the purpose of local area communication.
3 Supplementary information on the purpose
Specifically, for devices compliant with the IEEE 802.11™ standard, this standard
—describes the functions and services required by a compliant device to operate within independent, personal, and infrastructure networks, as well as the aspects of device mobility (transition) within those networks;
—describes the functions and services that allow a compliant device to communicate directly with another such device outside of an independent or infrastructure network;
—defines the MAC procedures to support the MAC service data unit (MSDU) delivery service;
—defines several PHY signaling techniques and interface functions that are controlled by the MAC;
—permits the operation of a device within a wireless local area network (WLAN) coexisting with multiple overlapping IEEE 802.11 WLANs;
—describes the requirements and procedures that provide data confidentiality for user information and MAC management information transferred over the wireless medium (WM), and authentication of devices;
—defines mechanisms for dynamic frequency selection (DFS) and transmit power control (TPC) that may be used to satisfy regulatory requirements for operation in any band;
—defines MAC procedures to support local area network (LAN) applications with quality of service (QoS) requirements, including the transport of voice, audio, and video;
—defines wireless network management mechanisms and services for devices, including BSS transition management, channel usage and coexistence, collocated interference reporting, diagnostics, multicast diagnostics and event reporting, flexible multicast, efficient beacon mechanisms, proxy ARP advertisement, location, timing measurement, directed multicast, extended sleep modes, traffic filtering, and management notification;
—defines functions and procedures that aid network discovery and selection by devices, the conveyance of information from external networks using QoS mapping, and general mechanisms for the support of emergency services;
—defines MAC procedures needed for wireless multi-hop communication to support wireless LAN mesh topologies.
Wi-Fi Technical Material (English original)
Aircell to Boost In-flight Wi-Fi Speed
By Stephen Lawson, IDG News, Mar 10, 2011, 8:30 am

In-flight Wi-Fi provider Aircell unveiled plans for its second generation of wireless links from aircraft to the Internet on Wednesday, promising higher capacity and the capability to offer its service outside the U.S.

Aircell equips airliners and business jets with in-cabin Wi-Fi systems and operates a network of special cellular base stations around the U.S. to send data from the Internet to the planes and back. Its Gogo service is offered by United Airlines, Delta Air Lines, Virgin America and other commercial carriers, and the company also sells Gogo Biz for business jets.

On planes where the airlines choose to upgrade the radio equipment, users should get about four times the speed with the new technology, according to Aircell. The main upgrade option, using a faster cellular technology, is scheduled to become available in the first half of 2012, the company said.

Aircell's plan for a new generation of technology is the latest signal that in-flight Wi-Fi is here to stay. Aircell's services began to appear in 2008 after an earlier, satellite-based attempt to put passengers online, Connexion by Boeing, had failed to capture a strong following. But Wi-Fi is now available on many domestic flights in the U.S.

Aircell, the biggest provider of these services, charges between US$4.95 and $12.95 depending on the length of the flight and the passenger's device. Facebook, airlines and other companies have sometimes offered special deals that make the service free.

Business travelers are already demanding in-flight Wi-Fi, and more consumers will, especially the growing number of passengers with smartphones, said analyst Avi Greengart of Current Analysis. "Connectivity is something that consumers are beginning to take for granted in other aspects of their lives," Greengart said. And, on flights just as in hotels and coffee shops, people are willing to pay for it, Tolaga Research analyst Phil Marshall said.

Aircell will upgrade its cellular infrastructure from Revision A to Revision B of EV-DO (Evolution-Data Optimized), the 3G data technology for CDMA (Code-Division Multiple Access) networks. In Aircell's implementation, Revision B can increase EV-DO's downstream speed from about 3.1 Mbps (megabits per second) to 9.8 Mbps, according to Anand Chari, vice president of engineering. For airlines that want even more capacity, Aircell will also install satellite equipment on planes to link up with Ka-band satellites. The Ka-band system will be available in the continental U.S. in 2013 and around the world in 2015, according to Aircell.

Satellite uplinks will also allow Aircell to offer services outside the continental U.S., on carriers based both in the U.S. and elsewhere. Airlines that want to provide Internet access on international flights before the Ka-band satellites become available will be able to use an existing network on the so-called Ku band, Chari said. The Ka band will be more economical, he said.

Individual passengers should see better performance on their phones and laptops once the faster links are installed. Because not everyone is typically using the shared link at a given time, users are likely to get 5 Mbps or more, Chari said. However, there will still be limits to what they can do online on a typical flight, he added.
For one thing, Aircell uses traffic engineering to make sure everyone sharing the network gets the best possible experience. "If you want to sit on a plane and watch a Netflix movie, it's not going to work very well for you, because we did not build the network where everybody can watch a Netflix movie," Chari said.

Aircell said in 2008 that it hoped to deploy LTE beginning in 2011 and achieve a 300 Mbps link from the ground to the air. However, the company doesn't yet have enough radio spectrum to use LTE, though it is working on acquiring more, Chari said. EV-DO Revision B will be a hardware and software upgrade to Aircell's existing EV-DO network, which is supplied by Chinese telecommunications vendor ZTE.

EV-DO Revision B has been available for several years but was upstaged by LTE (Long-Term Evolution), which can offer even higher speeds. Only three mobile operators in the world have deployed Revision B, according to Qualcomm, the pioneer of EV-DO. However, Aircell is better able to take advantage of the technology, Chari said. For one thing, Revision B requires a clean signal, which is harder to achieve when it has to go through walls and other obstacles, he said. "We have a very unique situation: There is nothing between the aircraft and our towers," Chari said.
Translation: Wireless LAN Technology
In just the past few years, wireless LANs have come to occupy a significant niche in the local area network market. Increasingly, organizations are finding that wireless LANs are an indispensable adjunct to traditional wired LANs: they satisfy requirements for mobility, relocation, and ad hoc networking, and can cover locations that are difficult to wire. A wireless LAN is one that makes use of a wireless transmission medium. Until relatively recently, wireless LANs were little used. The reasons for this included high cost, low data rates, occupational safety concerns, and licensing requirements. As these problems have been addressed, the popularity of wireless LANs has grown rapidly.
Wireless LAN Applications
LAN Extension. Early wireless LAN products, introduced in the late 1980s, were marketed as substitutes for traditional wired LANs. A wireless LAN saves the cost of installing LAN cabling and eases the task of relocation and other modifications to the network structure. However, this motivation for wireless LANs was overtaken by a series of events. First, as awareness of the need for LANs became greater, architects designed new buildings to include extensive prewiring for data applications. Second, with advances in data transmission technology, there has been an increasing reliance on twisted-pair cabling for LANs, in particular Category 3 and Category 5 unshielded twisted pair. Most older buildings are already wired with an abundance of Category 3 cable, and many newer buildings are prewired with Category 5. Thus, the replacement of wired LANs by wireless LANs has never happened.

However, in a number of environments there is a role for the wireless LAN as an alternative to a wired LAN. Examples include buildings with large open areas, such as manufacturing plants, stock exchange trading floors, and warehouses; historical buildings with insufficient twisted pair wiring and in which drilling holes for new wiring is prohibited; and small offices where installation and maintenance of wired LANs is not economical. In all of these cases, the wireless LAN provides an effective and more attractive alternative. In most of these cases, an organization will also have a wired LAN to support servers and some stationary workstations. Therefore, a wireless LAN will typically be linked into a wired LAN on the same premises. Thus, this application area is referred to as LAN extension.
Cross-Building Interconnect. Another use of wireless LAN technology is to connect LANs in nearby buildings, be they wired or wireless LANs. In this case, a point-to-point wireless link is used between two buildings. The devices so connected are typically bridges or routers. This single point-to-point link is not, strictly speaking, a LAN, but it is usual to include this application under the heading of wireless LAN.
Nomadic Access. Nomadic access provides a wireless link between a LAN hub and a mobile data terminal equipped with an antenna, such as a laptop or notebook computer. One example of the utility of such a connection is for an employee returning from a trip to transfer data from a personal portable computer to a server in the office. Nomadic access is also useful in an extended environment, such as a campus or a business operating out of a cluster of buildings. In both of these cases, users may move around with their portable computers and may wish to access the servers on a wired LAN from various locations.
Ad Hoc Networking. An ad hoc network is a peer-to-peer network (no centralized server) set up temporarily to meet some immediate need. For example, a group of employees, each with a laptop or palmtop computer, may convene in a conference room for a business or classroom meeting. The employees link their computers into a temporary network that exists only for the duration of the meeting.
Wireless LAN Requirements
A wireless LAN must meet the same sort of requirements typical of any LAN, including high capacity, ability to cover short distances, full connectivity among attached stations, and broadcast capability. In addition, there are a number of requirements specific to the wireless LAN environment. The following are among the most important requirements for wireless LANs:
Throughput: the medium access control protocol should make as efficient use as possible of the wireless medium to maximize capacity.
Number of nodes: wireless LANs may need to support hundreds of nodes across multiple cells.
Connection to backbone LAN: in most cases, interconnection with stations on a wired backbone LAN is required. For infrastructure wireless LANs, this is easily accomplished through the use of control modules that connect to both types of LANs; this requirement may also need to be met for mobile users and ad hoc wireless networks.
Battery power consumption: mobile workers use battery-powered workstations that need to have a long battery life when used with wireless adapters. This implies that a MAC protocol requiring mobile nodes to constantly monitor access points or to engage in frequent handshakes with a base station is unsuitable. Typical wireless LAN implementations have features to reduce power consumption when the network is not in use, such as a sleep mode.
Transmission robustness and security: unless properly designed, a wireless LAN may be interference-prone and easily eavesdropped. The design of a wireless LAN must permit reliable transmission even in a noisy environment and should provide some level of security against eavesdropping.
Collocated network operation: as wireless LANs become more popular, it is quite likely that two or more wireless LANs will operate in the same area, or in some area where interference between the LANs is possible. Such interference may thwart the normal operation of the MAC algorithm and may allow unauthorized access to a particular LAN.
License-free operation: users would prefer to buy and operate wireless LAN products without having to secure a license for the frequency band used by the LAN.
Handoff/roaming: the MAC protocol used in the wireless LAN should enable mobile stations to move from one cell to another.
Dynamic configuration: the MAC addressing and network management aspects of the LAN should permit dynamic and automated addition, deletion, and relocation of end systems without disruption to other users.
Wireless LAN Technology
Wireless LANs are generally categorized according to the transmission technique that is used. All current wireless LAN products fall into one of the following three categories:
Infrared (IR) LANs: an individual cell of an IR LAN is limited to a single room, because infrared light does not penetrate opaque walls.
Spread spectrum LANs: this type of LAN makes use of spread spectrum transmission technology. In most cases, these LANs operate in the ISM (industrial, scientific, and medical) bands, so that no Federal Communications Commission (FCC) licensing is required for their use in the United States.
Narrowband microwave: these LANs operate at microwave frequencies but do not use spread spectrum. Some of these products operate at frequencies that require FCC licensing, while others use one of the unlicensed bands.
A desirable, though not necessary, characteristic of a wireless LAN is that it be usable without having to go through a cumbersome licensing procedure. The licensing regulations differ from one country to another, which complicates this issue. In the United States, the FCC has authorized two unlicensed applications within the ISM band: spread spectrum systems, which can operate at up to 1 watt, and very low power systems, which can operate at up to 0.5 watt. Since the FCC opened up this band, its use for spread spectrum wireless LANs has become increasingly popular. In 1990, the IEEE 802.11 working group was formed with a charter to develop a MAC protocol and a physical medium specification for wireless LANs.
The smallest building block of a wireless LAN is a basic service set (BSS), which consists of some number of stations executing the same MAC protocol and competing for access to the same shared medium. A BSS may be isolated, or it may connect to a backbone distribution system (DS) through an access point (AP). The access point functions as a bridge. The MAC protocol may be fully distributed or controlled by a central coordination function housed in the access point. The BSS generally corresponds to what is referred to as a cell in the literature; the DS can be a switch, a wired network, or a wireless network.

The primary task of the MAC layer is to transfer MSDUs between MAC entities; this task is performed by the distribution service. For the distribution service to function properly, it requires information about stations within the ESS, and that information is provided by the association-related services. Before the distribution service can deliver data to or accept data from a station, that station must be associated. The standard defines three transition types, based on mobility:
No transition: a station of this type is either stationary or moves only within the direct communication range of a single BSS.
BSS transition: this is defined as a station movement from one BSS to another BSS within the same ESS. In this case, delivery of data to the station requires that the addressing capability be able to recognize the new location of the station.
ESS transition: this is defined as a station movement from a BSS in one ESS to a BSS within another ESS. This case is supported only in the sense that the station can move.
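As an illustration of how these building blocks relate, the short sketch below models stations, BSSs, access points, and a distribution system, and treats a BSS transition as a re-association; the class and method names are invented for this illustration and are not taken from the standard.

```python
class Station:
    def __init__(self, mac):
        self.mac = mac
        self.bss = None                     # BSS this station is currently associated with

class BSS:
    """A basic service set: stations sharing one medium under one access point."""
    def __init__(self, bss_id, access_point):
        self.bss_id = bss_id
        self.ap = access_point              # the AP bridges this BSS onto the DS
        self.stations = set()

class DistributionSystem:
    """Backbone (wired or wireless) joining the BSSs of an ESS through their APs."""
    def __init__(self):
        self.bsss = {}

    def add_bss(self, bss):
        self.bsss[bss.bss_id] = bss

    def associate(self, station, bss_id):
        # Association must happen before the distribution service can deliver data.
        if station.bss is not None:                 # BSS transition within the ESS:
            station.bss.stations.discard(station)   # drop the old association first
        bss = self.bsss[bss_id]
        bss.stations.add(station)
        station.bss = bss                           # the DS now knows where to forward MSDUs

ds = DistributionSystem()
ds.add_bss(BSS("cell-1", access_point="AP-1"))
ds.add_bss(BSS("cell-2", access_point="AP-2"))
sta = Station("00:11:22:33:44:55")
ds.associate(sta, "cell-1")    # initial association (no transition)
ds.associate(sta, "cell-2")    # BSS transition: the DS learns the station's new location
print(sta.bss.bss_id)          # -> cell-2
```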
The 802.11 working group considered two types of proposals for a MAC algorithm: distributed access protocols and centralized access protocols. A distributed access protocol, like Ethernet, distributes the decision to transmit over all the nodes using a carrier sense mechanism. A centralized access protocol involves regulation of transmission by a centralized decision maker. A distributed access protocol makes sense for an ad hoc network of peer workstations and may also be attractive in other wireless LAN configurations that consist primarily of bursty traffic. A centralized access protocol is natural for configurations in which a number of wireless stations are interconnected with one another and with some sort of base station that attaches to a backbone wired LAN; it is especially useful if some of the data is time-sensitive or high priority.

The end result of the IEEE 802.11 work is a MAC algorithm called DFWMAC (Distributed Foundation Wireless MAC) that provides a distributed access control mechanism with an optional centralized control built on top of it. The lower sublayer of the MAC layer is the distributed coordination function (DCF). DCF uses a contention algorithm to provide access to all traffic; ordinary asynchronous traffic directly uses DCF. The point coordination function (PCF) is a centralized MAC algorithm used to provide contention-free service.
Distributed Coordination Function
The DCF sublayer makes use of a simple CSMA (carrier sense multiple access) algorithm. If a station has a MAC frame to transmit, it listens to the medium. If the medium is idle, the station may transmit; otherwise, it must wait until the current transmission is complete before transmitting. The DCF does not include a collision detection function (i.e., CSMA/CD), because collision detection is not practical on a wireless network: the dynamic range of the signals on the medium is very large, so a transmitting station cannot effectively distinguish weak incoming signals from noise and from the effects of its own transmission. To ensure the smooth and fair functioning of this algorithm, DCF includes a set of delays that amounts to a priority scheme. Let us start by considering a single delay known as an interframe space (IFS).
Using an IFS, the CSMA access rules are as follows (a minimal sketch of this procedure is given after the list):
1. A station with a frame to transmit senses the medium. If the medium is idle, the station waits to see whether the medium remains idle for a time equal to the IFS; if so, the station may transmit immediately.
2. If the medium is busy (either because the station initially finds the medium busy or because the medium becomes busy during the IFS idle time), the station defers transmission and continues to monitor the medium until the current transmission is over.
3. Once the current transmission is over, the station delays another IFS. If the medium remains idle for this period, the station backs off for a period of time selected using a binary exponential backoff scheme and again senses the medium; if the medium is still idle, the station may transmit. During the backoff time, if the medium becomes busy, the backoff timer is halted and resumes when the medium becomes idle.
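A minimal sketch of the three rules above, written against an abstract medium interface; the Medium methods, the IFS length, and the single fixed contention window are simplifying assumptions (the full DCF widens the window exponentially after failed transmission attempts, which is omitted here).

```python
import random

CW = 31  # contention window for the random backoff (illustrative size, not the standard's)

def idle_for(medium, slots):
    """True if the medium stays idle for `slots` consecutive slots."""
    for _ in range(slots):
        if not medium.idle():
            return False
        medium.wait_slot()
    return True

def dcf_transmit(medium, ifs_slots):
    """Send one frame following the three CSMA/IFS access rules listed above.

    `medium` is an assumed interface providing:
      idle()       -> True while nothing is being transmitted
      wait_slot()  -> advance time by one slot
      send_frame() -> transmit our frame on the medium
    """
    # Rule 1: sense the medium; if it stays idle for a full IFS, transmit at once.
    if idle_for(medium, ifs_slots):
        medium.send_frame()
        return

    backoff = None                          # drawn after the first deferral
    while True:
        # Rule 2: medium busy -> defer and monitor until the current transmission ends.
        while not medium.idle():
            medium.wait_slot()
        # Rule 3: once idle, wait a further IFS, then count down a random backoff.
        if not idle_for(medium, ifs_slots):
            continue                        # medium went busy during the IFS: defer again
        if backoff is None:
            backoff = random.randint(0, CW)
        while backoff > 0 and medium.idle():
            medium.wait_slot()
            backoff -= 1                    # the countdown runs only while the medium is idle
        if backoff == 0 and medium.idle():
            medium.send_frame()             # backoff expired on an idle medium: transmit
            return
        # Medium became busy mid-backoff: the counter is frozen at its current value
        # and the loop resumes it after the next idle IFS.

class AlwaysIdleMedium:
    """Trivial stand-in medium for a dry run of the access logic."""
    def idle(self): return True
    def wait_slot(self): pass
    def send_frame(self): print("frame sent")

dcf_transmit(AlwaysIdleMedium(), ifs_slots=2)
```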
Point Coordination Function
PCF is an alternative access method implemented on top of the DCF. Its operation consists of polling by a centralized polling master (the point coordinator). The point coordinator makes use of PIFS when issuing polls. Because PIFS is smaller than DIFS, the point coordinator can seize the medium and lock out all asynchronous traffic while it issues polls and receives responses. Left unchecked, a point coordinator that issued polls continuously would lock out all asynchronous traffic indefinitely. To prevent this, an interval known as the superframe is defined. During the first part of the superframe, the point coordinator issues polls in a round-robin fashion to all stations configured for polling. For the remainder of the superframe, the point coordinator is idle, allowing a contention period for asynchronous access.

At the beginning of a superframe, the point coordinator may optionally seize control and issue polls for a given period of time. This interval is variable because of the variable frame sizes issued by responding stations. The remainder of the superframe is available for contention-based access. At the end of the superframe interval, the point coordinator contends for access to the medium using PIFS.
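To make the timing concrete, the sketch below walks through a single superframe: a contention-free polling phase run by the point coordinator, followed by whatever time is left for contention-based access. The coordinator object, slot counts, and helper names are assumptions for illustration, not interfaces defined by the standard.

```python
import random

def run_superframe(coordinator, polled_stations, superframe_slots, dcf_contention):
    """One superframe: a polling (contention-free) phase, then a contention phase.

    coordinator.poll(sta) is assumed to seize the medium after a PIFS, poll the
    station, and return the number of slots consumed by its variable-length response.
    dcf_contention(slots) is assumed to run DCF contention for the given slots.
    """
    used = 0
    # Contention-free period: round-robin polls to every station configured for polling.
    for sta in polled_stations:
        used += coordinator.poll(sta)   # responses vary in length, so this phase stretches
        if used >= superframe_slots:
            break                       # polling consumed the nominal superframe; no contention time remains
    # Whatever remains of the superframe is handed over to contention-based access.
    dcf_contention(max(0, superframe_slots - used))

class ToyCoordinator:
    def poll(self, sta):
        slots = random.randint(1, 5)    # variable-length response from the polled station
        print(f"polled {sta}: {slots} slots")
        return slots

run_superframe(ToyCoordinator(), ["sta-1", "sta-2", "sta-3"], superframe_slots=20,
               dcf_contention=lambda slots: print(f"contention period: {slots} slots"))
```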