Chapter 3: Network Topology and LAN Design


Andrew S. Tanenbaum, Computer Networks, Fourth Edition: Exercise Answers

Chapter 2: The Physical Layer
1.
Answer: This problem asks for the Fourier coefficients of a periodic function; the problem statement gives the analytic expression of the signal over one period, namely:
2. Answer: For a noiseless channel, the maximum data rate formula is: maximum data rate = 2H log2 V b/s. The achievable rate therefore depends on the number of bits produced per sample: with 16 bits per sample the data rate reaches 128 kbps; with 1024 bits per sample it reaches about 8.2 Mbps. Note that this holds only for a noiseless channel; real channels always contain noise, and their maximum data rate is given by Shannon's theorem.
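To make the arithmetic in answer 2 concrete, here is a small Python sketch of the Nyquist limit for a noiseless channel, with the Shannon limit shown for comparison. The 4 kHz line bandwidth is the value implied by the 128 kbps figure (8000 samples per second at 16 bits each); the 30 dB SNR in the last line is an arbitrary illustrative value, not part of the original exercise.

```python
import math

def nyquist_max_rate(bandwidth_hz, bits_per_sample):
    """Noiseless channel (Nyquist): max data rate = 2 * H * log2(V) b/s.
    With V = 2**bits_per_sample discrete levels, log2(V) is simply bits_per_sample."""
    return 2 * bandwidth_hz * bits_per_sample

def shannon_capacity(bandwidth_hz, snr_linear):
    """Noisy channel (Shannon): capacity = H * log2(1 + S/N) b/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

H = 4000  # 4 kHz line, the bandwidth implied by the 128 kbps figure above

print(nyquist_max_rate(H, 16))           # 128000  -> 128 kbps
print(nyquist_max_rate(H, 1024))         # 8192000 -> about 8.2 Mbps
print(round(shannon_capacity(H, 1000)))  # 30 dB SNR -> roughly 39869 b/s
```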
21. Normally no error-correction measures are taken at the physical layer for the bits sent on the line. Putting a CPU in every modem makes it possible to include an error-correcting code at layer 1, which greatly reduces the error rate seen by layer 2. The error handling done by the modems can be completely transparent to layer 2. Many modems now have built-in error handling.
22. Each baud carries one of 4 legal values, so the bit rate is twice the baud rate. At 1200 baud, the data rate is 2400 bps.
22. What is the most important difference between TCP and UDP?
Answer: TCP is connection-oriented, whereas UDP provides a (connectionless) datagram service.
23. If three bombs destroy the three nodes connected to the two nodes in the upper-right corner, those two nodes are cut off from the rest of the network. The system can withstand the loss of any two nodes.
25. If the network tends to lose packets, it is better to acknowledge each packet individually, so that only lost packets are retransmitted. On the other hand, if the network is highly reliable, sending a single acknowledgment at the end of the entire file transfer (when no errors occur) reduces the number of acknowledgments and saves bandwidth; however, if even a single packet is lost, the whole file must be retransmitted.
10. Distinguish n+2 events. Events 1 through n consist of the corresponding host successfully using the channel without a collision; each of these events has probability p(1-p)^(n-1). Event n+1 is an idle channel, with probability (1-p)^n. Event n+2 is a collision. Since these n+2 events are mutually exclusive, their probabilities sum to one. The probability of a collision, which equals the fraction of slots wasted, is therefore 1 - np(1-p)^(n-1) - (1-p)^n.
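The collision probability derived in answer 10 is easy to check numerically. The sketch below is a minimal illustration; the values n = 10 hosts and p = 0.1 are arbitrary example figures, not taken from the original problem.

```python
def slot_probabilities(n, p):
    """Per-slot outcome probabilities for n hosts, each transmitting with probability p."""
    success = n * p * (1 - p) ** (n - 1)   # events 1..n: exactly one host transmits
    idle = (1 - p) ** n                     # event n+1: nobody transmits
    collision = 1 - success - idle          # event n+2: two or more hosts transmit
    return success, idle, collision

s, i, c = slot_probabilities(n=10, p=0.1)
print(f"success={s:.3f} idle={i:.3f} collision(wasted)={c:.3f}")
# success=0.387 idle=0.349 collision(wasted)=0.264
```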

Cisco Networking Academy Semester 3: Chapters 1-9 Answers


Chapter 1
1. Why is UDP rather than TCP used for voice and video traffic? (Choose two.) Answer: b, c
a. TCP requires all packets to be delivered before the data can be used.
b. TCP's acknowledgment process introduces delay, which interrupts the data stream.
c. UDP has no mechanism for retransmitting lost packets.
d. UDP can tolerate delay and compensate for it.
e. TCP is a connectionless protocol that provides end-to-end reliability.
f. UDP is a connection-oriented protocol that provides end-to-end reliability.

2. Why is TCP the preferred Layer 4 protocol for transmitting data files? Answer: a
a. TCP is more reliable than UDP because it requires lost packets to be retransmitted.
b. TCP places lower processing demands on the source and destination hosts than UDP.
c. UDP introduces delay, which degrades the quality of data applications.
d. TCP ensures fast transmission because it does not need sequencing or acknowledgments.

3. Which two solutions can an enterprise IT department use to help teleworkers access the intranet? (Choose two.) Answer: a, c
a. VPN
b. NAT
c. user authentication
d. client firewall software
e. packet sniffing
4. What is the purpose of the Cisco Enterprise Architecture? Answer: b
a. abandon the three-layer hierarchical model in favor of a flat network approach
b. divide the network into functional components while retaining the concepts of the core, distribution, and access layers
c. provide services and functionality to the core layer by consolidating multiple components into a single component at the access layer
d. reduce overall network traffic by grouping server farms, management servers, the corporate intranet, and e-commerce routers in the same layer
5. In which functional area of the Cisco Enterprise Architecture should IDS and IPS be deployed to detect and prevent malicious activity from the outside? Answer: c
a. Enterprise Campus
b. WAN and Internet
c. Enterprise Edge
d. Service Provider Edge
6. What is a benefit of having an extranet? Answer: d
a. It provides web-like access to company information for employees only.
b. It limits access to corporate information to secure VPN or remote-access connections only.
c. It allows customers and partners to access company information by connecting to a public web server.
d. It allows suppliers and contractors to access confidential internal information through controlled external connections.
7. What capability does VoIP provide to teleworkers? Answer: b
a. high-quality, real-time video presentations
b. real-time voice communication over the Internet
c. the ability to share desktop applications simultaneously
d. secure, encrypted data transmission over the Internet
8. Which functional component of the Cisco Enterprise Architecture is responsible for hosting internal servers? Answer: a
a. Enterprise Campus
b. Enterprise Edge
c. Service Provider Edge
d. Building Distribution layer
9. In the hierarchical design model, which task generally requires only the services of the access layer? Answer: c
a. connecting to the corporate web server to update sales figures
b. using a VPN from home to send data to the head-office servers
c. printing a meeting agenda on a local departmental network printer
d. placing a VoIP call to a business organization abroad
e. replying to an e-mail from a colleague in another department
10. What are two important characteristics or functions of devices at the Enterprise Edge? (Choose two.)

CCNA 640-802 Notes (Sixth Edition)


Contents
Chapter 1 Internetworking (网际互联); Exam Essentials
Chapter 2 Introduction to TCP/IP; Exam Essentials
Chapter 3 Subnetting, VLSM, Troubleshooting TCP/IP; Exam Essentials
Chapter 4 Cisco's Internetworking Operating System (IOS) and Security Device Manager (SDM); Exam Essentials
Chapter 5 Managing a Cisco Internetwork; Exam Essentials
Chapter 6 IP Routing; Exam Essentials
Chapter 7 Enhanced IGRP (EIGRP) and Open Shortest Path First (OSPF); Exam Essentials
Chapter 8 Layer 2 Switching and Spanning Tree Protocol (STP); Exam Essentials
Chapter 9 Virtual LANs (VLANs); Exam Essentials
Chapter 10 Security; Exam Essentials
Chapter 11 Network Address Translation (NAT); Exam Essentials
Chapter 12 Cisco's Wireless Technologies; Exam Essentials
Chapter 13 Internet Protocol Version 6 (IPv6); Exam Essentials
Chapter 14 Wide Area Networks; Exam Essentials

Chapter 1 Internetworking (网际互联)
Exam Essentials
① Remember the possible causes of LAN traffic congestion.
② Understand the difference between a collision domain and a broadcast domain.
③ Understand the difference between a hub, a bridge, a switch, and a router.
④ Remember the difference between connection-oriented and connectionless network services.
⑤ Remember the OSI layers.
⑥ Remember the types of Ethernet cabling and when you would use them.
⑦ Understand how to connect a console cable from a PC to a router and start HyperTerminal.
⑧ Remember the three layers in the Cisco three-layer model.
Dividing one large network into several smaller networks is called network segmentation; it is performed by routers, switches, and bridges.

Network Design and Planning: Translated Foreign-Language Literature (with English Original and Chinese Translation)


网络设计与规划中英文对照外文翻译文献(文档含英文原文和中文翻译)Service-Oriented Network Architecture (SONA)1.T he challenges facing businessesAlthough a large number of IT capital investment, but many companies have found that most of the critical network resources and information assets remain in the free state. In fact, can not have hundreds of "orphaned" applications and databases communicate with each other is a common business phenomenon.This is partly due to growing internal and external customers, but due to unpredictable demand. Many companies have been forced to rapidly deploy new technologies, often leading to the deployment of a plurality of discrete systems, and thus can not effectively share information across the organization. For example, if you do not create the applications and information together various overlapping networks, sales, customer service or purchasing department will not be able to easily access customer records. Many companies have found that the blind expansion brought them multiple underutilized and irreconcilable separation systems and resources. These disparate systems while also difficult to manage and expensive to administer.2. Intelligent Information Network - The Cisco AdvantageCisco Systems, Inc. ® With the Intelligent Information Network (IIN) program, is helping global IT organizations solve these problems and meet new challenges, such as the deployment of service-oriented architecture, Web services and virtualization. IIN elaborated network in terms of promoting the development of integrated hardware and software, which will enable organizations to better align IT resources with business priorities. By intelligent built into the existing network infrastructure, IIN will help organizations achieve lower infrastructure complexity and cost advantages.3. Power NetworksInnovative IT environment focused on by traditional server-based system to distributenew business applications. However, the network is still transparent connectivity and support IT infrastructure platform for all components. Cisco ® Service-Oriented Network Architecture (SONA), enterprises can optimize applications, processes and resources to achieve greater business benefits. By providing better network capabilities and intelligence, companies can improve the efficiency of network-related activities, as well as more funds for new strategic investments and innovation.Standardization reduces the amount of assets needed to support the same operating costs, thereby improving asset efficiency. Virtualization optimizes the use of assets, physical resources can be divided logically for use in all sectors of the dispersion. Improve the efficiency of the entire network can enhance the flexibility and scalability, and then have a huge impact on business development, customer loyalty and profits - thereby enhancing their competitive advantage.4. 
Use architecture to succeedCisco SONA framework illustrates how companies should develop into intelligent information network to accelerate applications, business processes and resources, and to IT to provide enterprises with better service.Cisco SONA Cisco and Cisco partner solutions in the industry, services and experience to provide proven, scalable business solutions.Cisco SONA framework illustrates how to build on the full integration of the intelligent network integration system, in order to greatly improve the flexibility and efficiency.Enterprises can deploy this integrated intelligence among the entire network, including data centers, branch offices and campus environments.4-1 Cisco Service-Oriented Network ArchitectureApplication layer business applications collaborative applicationsInteractive Services Layer Application Networking Services Adaptive Management ServicesInfrastructure ServicesNetwork infrastructure virtualizationNetwork infrastructure layer Park branch office data center WAN / MAN teleworkers Client server storageIntelligent Information Network5. Three levels of Cisco SONANetwork infrastructure layer, where all the IT resources on the Internet converged network platformInteractive services layer, the use of network infrastructure, applications and business processes efficient allocation of resourcesApplication layer, contains business applications and collaboration applications, take advantage of the efficiency of interactive servicesIn the network infrastructure layer of Cisco's proven enterprise architecture provides comprehensive design guide that provides a comprehensive, integrated end-system design guidelines for your entire network.In the interactive services layer, Cisco will integrate a full-service intelligent systems to optimize the distribution business and collaboration applications, thereby providing more predictable, more reliable performance, while reducing operating costs.At the application layer, through deep integration with the network fabric, Cisco application networking solutions without having to install the client or change the application, the entire application delivery while maintaining application visibility and security.6. Build business advantage of Cisco SONASimpler, more flexible, integrated infrastructure will provide greater flexibility and adaptability, and thus a lower cost for higher commercial returns. Use Cisco SONA, you will be able to improve overall IT efficiency and utilization, thereby enhancing the effectiveness of IT, we call network multiplicative effect.7. Network amplification effectZoom effect refers to the network to help enterprises enhance the contribution of IT across the enterprise through Cisco SONA. Optimal efficiency and use IT resources will be more low-cost to produce higher impact on the business, so that your network of value-added resources become profitable.Network amplification effect is calculated as follows:Efficiency = Cost ÷ IT assets (IT assets cost + operating costs)Utilization percentage (such as the percentage of available storage being used) assets to total assets used =Effectiveness = Efficiency x usageAsset Effectiveness Network amplifying effect = assets ÷ efficacy when using the Cisco SONA when not in use Cisco SONA8. Investment incomeCisco Advantage Cisco SONA in intelligent systems is not only to improve efficiency and reduce costs. 
By Cisco SONA, through the power of your network can achieve:Increase in income and opportunityImproved customer relationsImprove business resiliency and flexibilityIncrease productivity and efficiency and reduce costs9. Real-Time DevelopmentBy Cisco SONA toward more intelligent integrated network development, enterprises can be completed in phases: integration, standardization, virtualization and automation. Working with Cisco channel partner or customer groups, you can use the Cisco SONA framework to develop a blueprint for the development of enterprises. With rich experience in Cisco Lifecycle Management Services, a leading position in the field of standardization, mature enterprise architecture and create targeted industry solutions, Cisco account team can help you meet business requirements in real time.10.The development of the Intelligent Information NetworkRole of the network is evolving. Tomorrow's intelligent network will provide more than basic connectivity, bandwidth and application user access services, which will provide end functionality and centralized control, to achieve true enterprise transparency and flexibility.Cisco SONA enables enterprises to extend their existing infrastructure, towards the development of intelligent network to accelerate applications and improve business processes. Cisco provides design, support and financing services to maximize your return on investment.服务导向网络架构(SONA)1.企业面临的挑战尽管投入大量IT资金,但许多企业发现大多数的关键网络资源和信息资产仍处于游离状态。
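The "network amplification effect" formulas quoted above appear to have lost some structure in extraction. The sketch below shows one plausible reading of them, assuming that efficiency is IT-asset value divided by total cost (asset cost plus operating cost), that utilization is the fraction of assets actually in use, and that the amplification effect is the ratio of asset effectiveness with Cisco SONA to effectiveness without it. All numbers are invented purely for illustration.

```python
def effectiveness(asset_value, asset_cost, operating_cost, utilization):
    """Effectiveness = efficiency x utilization, where
    efficiency = asset value / (asset cost + operating cost). Assumed reading of the text."""
    efficiency = asset_value / (asset_cost + operating_cost)
    return efficiency * utilization

# Hypothetical figures: same assets, but lower operating cost and higher utilization after SONA.
before = effectiveness(asset_value=100, asset_cost=60, operating_cost=40, utilization=0.40)
after = effectiveness(asset_value=100, asset_cost=60, operating_cost=30, utilization=0.65)
amplification = after / before  # the "network amplification effect"
print(round(before, 2), round(after, 2), round(amplification, 2))  # 0.4 0.72 1.81
```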

Chapter 3 End-of-Chapter Exercises and Answers


第三章1. (Q1) Suppose the network layer provides the following service. The network layer in the source host accepts a segment of maximum size 1,200 bytes and a destination host address from the transport layer. The network layer then guarantees to deliver the segment to the transport layer at the destination host. Suppose many network application processes can be running at the destination host.a. Design the simplest possible transport-layer protocol that will getapplication data to the desired process at the destination host. Assume the operating system in the destination host has assigned a 4-byte port number to each running application process.b. Modify this protocol so that it provides a “return address”to the destination process.c. In your protocols, does the transport layer “have to do anything” in thecore of the computer network.Answer:a. Call this protocol Simple Transport Protocol (STP). At the sender side, STPaccepts from the sending process a chunk of data not exceeding 1196 bytes,a destination host address, and a destination port number. STP adds a four-byte header to each chunk and puts the port number of the destination process in this header. STP then gives the destination host address and the resulting segment to the network layer. The network layer delivers the segment to STP at the destination host. STP then examines the port number in the segment, extracts the data from the segment, and passes the data to the process identified by the port number.b. The segment now has two header fields: a source port field and destinationport field. At the sender side, STP accepts a chunk of data not exceeding 1192 bytes, a destination host address, a source port number, and a destination port number. STP creates a segment which contains the application data, source port number, and destination port number. It then gives the segment and the destination host address to the network layer. After receiving the segment, STP at the receiving host gives the application process the application data and the source port number.c. No, the transport layer does not have to do anything in the core; thetransport layer “lives” in the e nd systems.2. (Q2) Consider a planet where everyone belongs to a family of six, every family lives in its own house, each house has a unique address, and each person in a given house has a unique name. Suppose this planet has a mail service that delivers letters form source house to destination house. The mail service requires that (i) the letter be in an envelope and that (ii) the address of the destination house (and nothing more ) be clearly written on the envelope. Suppose each family has a delegate family member who collects and distributes letters for the other family members. The letters do not necessarily provide any indication of the recipients of the letters.a. Using the solution to Problem Q1 above as inspiration, describe a protocolthat the delegates can use to deliver letters from a sending family member to a receiving family member.b. In your protocol, does the mail service ever have to open the envelope andexamine the letter in order to provide its service.Answer:a.For sending a letter, the family member is required to give the delegate theletter itself, the address of the destination house, and the name of therecipient. The delegate clearly writes the recipient’s name on the top of the letter. The delegate then puts the letter in an envelope and writes the address of the destination house on the envelope. 
The delegate then gives the letter to the planet’s mail service. At the receiving side, the delegate receives the letter from the mail service, takes the letter out of the envelope, and takes note of the recipient name written at the top of the letter. The delegate than gives the letter to the family member with this name.b.No, the mail service does not have to open the envelope; it only examines theaddress on the envelope.3. (Q3) Describe why an application developer might choose to run an application over UDP rather than TCP.Answer:An application developer may not want its application to use TCP’s congestion control, which can throttle the application’s sending rate at times of congestion. Often, designers of IP telephony and IP videoconference applications choose to run their applications over UDP because they want to avoid TCP’s congestion control. Also, some applications do not need the reliable data transfer provided by TCP.4. (P1) Suppose Client A initiates a Telnet session with Server S. At about the same time, Client B also initiates a Telnet session with Server S. Provide possible source and destination port numbers fora. The segment sent from A to B.b. The segment sent from B to S.c. The segment sent from S to A.d. The segment sent from S to B.e. If A and B are different hosts, is it possible that the source port numberin the segment from A to S is the same as that from B to Sf. How about if they are the same hostAnswer:e Yes.f No.5. (P2) Consider Figure What are the source and destination port values in thesegments flowing form the server back to the clients’ processes What are the IP addresses in the network-layer datagrams carrying the transport-layer segmentsAnswer:Suppose the IP addresses of the hosts A, B, and C are a, b, c, respectively.(Note that a,b,c are distinct.)To host A: Source port =80, source IP address = b, dest port = 26145, dest IP address = aTo host C, left process: Source port =80, source IP address = b, dest port = 7532, dest IP address = cTo host C, right process: Source port =80, source IP address = b, dest port = 26145, dest IP address = c6. (P3) UDP and TCP use 1s complement for their checksums. Suppose you have thefollowing three 8-bit bytes: 01101010, 01001111, 01110011. What is the 1s complement of the sum of these 8-bit bytes (Note that although UDP and TCP use 16-bit words in computing the checksum, for this problem you are being asked to consider 8-bit sums.) Show all work. Why is it that UDP takes the 1s complement of the sum; that is , why not just sue the sum With the 1s complement scheme, how does the receiver detect errors Is it possible that a 1-bit error will go undetected How about a 2-bit errorAnswer:One's complement = 1 1 1 0 1 1 1 0.To detect errors, the receiver adds the four words (the three original words and the checksum). If the sum contains a zero, the receiver knows there has been an error. All one-bit errors will be detected, but two-bit errors can be undetected ., if the last digit of the first word is converted to a 0 and the last digit of the second word is converted to a 1).7. (P4) Suppose that the UDP receiver computes the Internet checksum for thereceived UDP segment and finds that it matches the value carried in the checksum field. Can the receiver be absolutely certain that no bit errors have occurred Explain.Answer:No, the receiver cannot be absolutely certain that no bit errors have occurred. This is because of the manner in which the checksum for the packet is calculated. 
0 1 0 1 0 1 0 1 + 0 1 1 1 0 0 0 0 1 1 0 0 0 1 0 1 1 1 0 0 0 1 0 1 + 0 1 0 0 1 1 0 0 0 0 0 1 0 0 0 1If the corresponding bits (that would be added together) of two 16-bit words in the packet were 0 and 1 then even if these get flipped to 1 and 0 respectively, the sum still remains the same. Hence, the 1s complement the receiver calculates will also be the same. This means the checksum will verify even if there was transmission error.8. (P5) a. Suppose you have the following 2 bytes: 01001010 and 01111001. Whatis the 1s complement of sum of these 2 bytesb. Suppose you have the following 2 bytes: and 01101110. What is the 1scomplement of sum of these 2 bytesc. For the bytes in part (a), give an example where one bit is flipped ineach of the 2 bytes and yet the 1s complement doesn’t change.Answer:a. Adding the two bytes gives . Taking the one’s complement gives 01100010b. Adding the two bytes gives 00011110; the one’s complement gives .c. first byte = 00110101 ; second byte = 01101000.9. (P6) Consider our motivation for correcting protocol . Show that the receiver,shown inthe figure on the following page, when operating with the sender show in Figure , can lead the sender and receiver to enter into a deadlock state, where each is waiting for an event that will never occur.Answer:Suppose the sender is in state “Wait for call 1 from above” and the receiver (the receiver shown in the homework problem) is in state “Wait for 1 from below.” The sender sends a packet with sequence number 1, and transitions to “Wait for ACK or NAK 1,” waiting for an ACK or NAK. Suppose now the receiverreceives the packet with sequence number 1 correctly, sends an ACK, and transitions to state “Wait for 0 from below,” waiting for a data packet with sequence number 0. However, the ACK is corrupted. When the sender gets the corrupted ACK, it resends the packet with sequence number 1. However, the receiver is waiting for a packet with sequence number 0 and (as shown in the home work problem) always sends a NAK when it doesn't get a packet with sequence number 0. Hence the sender will always be sending a packet with sequence number 1, and the receiver will always be NAKing that packet. Neither will progress forward from that state.10. (P7) Draw the FSM for the receiver side of protocolAnswer:The sender side of protocol differs from the sender side of protocol in thattimeouts have been added. We have seen that the introduction of timeouts adds the possibility of duplicate packets into the sender-to-receiver data stream. However, the receiver in protocol can already handle duplicate packets. (Receiver-side duplicates in rdt would arise if the receiver sent an ACK that was lost, and the sender then retransmitted the old data). Hence the receiver in protocol will also work as the receiver in protocol rdt .11. (P8) In protocol , the ACK packets flowing from the receiver to the senderdo not have sequence numbers (although they do have an ACK field that contains the sequence number of the packet they are acknowledging). Why is it that our ACK packets do not require sequence numbersAnswer:To best Answer this question, consider why we needed sequence numbers in the firstplace. We saw that the sender needs sequence numbers so that the receiver can tell if a data packet is a duplicate of an already received data packet. In the case of ACKs, the sender does not need this info ., a sequence number on an ACK) to tell detect a duplicate ACK. 
A duplicate ACK is obvious to the receiver, since when it has received the original ACK it transitioned to the next state. The duplicate ACK is not the ACK that the sender needs and hence is ignored by the sender.12. (P9) Give a trace of the operation of protocol when data packets andacknowledgment packets are garbled. Your trace should be similar to thatused in FigureAnswer: Suppose the protocol has been in operation for some time. The sender is in state “Waitfor call from above” (top left hand corner) and the receiver is in state “Wait for 0 from below”. The scenarios for corrupted data and corrupted ACK are shown in Figure 1.13. (P10) Consider a channel that can lose packets but has a maximum delay thatis known. Modify protocol to include sender timeout and retransmit.Informally argue why your protocol can communicate correctly over thischannel.Answer:Here, we add a timer, whose value is greater than the known round-trip propagation delay. We add a timeout event to the “Wait for ACK or NAK0” and “Wait for ACK or NAK1” states. If the timeout event occurs, the most recently transmitted packet is retransmitted. Let us see why this protocol will still work with the receiver.• Suppose the timeout is caused by a lost data packet, ., a packet on the senderto- receiver channel. In this case, the receiver never received the previous transmission and, from the receiver's viewpoint, if the timeoutretransmission is received, it look exactly the same as if the original transmission is being received.• Suppose now that an ACK is lost. The receiver will eventually retransmit the packet on a timeout. But a retransmission is exactly the same action that is take if an ACK is garbled. Thus the sender's reaction is the same with a loss, as with a garbled ACK. The rdt receiver can already handle the case of a garbled ACK.14. (P11) Consider the protocol. Draw a diagram showing that if the networkconnection between the sender and receiver can reorder messages (thatis, that two messages propagating in the medium between the sender andreceiver can be reordered), then the alternating-bit protocol will notwork correctly (make sure you clearly identify the sense in which itwill not work correctly). Your diagram should have the sender on theleft and the receiver on the right, with the time axis running down thepage, showing data (D) and acknowledgement (A) message exchange. Makesure you indicate the sequence number associated with any data oracknowledgement segment.Answer:15. (P12) The sender side of simply ignores (that is, takes no action on) allreceived packets that are either in error or have the wrong value inthe ack-num field of an acknowledgement packet. Suppose that in suchcircumstances, were simply to retransmit the current data packet .Would the protocol still work (hint: Consider what would happen if therewere only bit errors; there are no packet losses but premature timeoutcan occur. Consider how many times the nth packet is sent, in the limitas n approaches infinity.)Answer:The protocol would still work, since a retransmission would be what would happen if the packet received with errors has actually been lost (and from the receiver standpoint, it never knows which of these events, if either, will occur). To get at the more subtle issue behind this question, one has to allow for premature timeouts to occur. 
In this case, if each extra copy of the packet is ACKed and each received extra ACK causes another extra copy of the current packet to be sent, the number of times packet n is sent will increase without bound as n approaches infinity.16. (P13) Consider a reliable data transfer protocol that uses only negativeacknowledgements. Suppose the sender sends data only infrequently. Woulda NAK-only protocol be preferable to a protocol that uses ACKs Why Nowsuppose the sender has a lot of data to send and the end to end connectionexperiences few losses. In this second case , would a NAK-only protocolbe preferable to a protocol that uses ACKs WhyAnswer:In a NAK only protocol, the loss of packet x is only detected by the receiver when packetx+1 is received. That is, the receivers receives x-1 and then x+1, only when x+1 is received does the receiver realize that x was missed. If there is a long delay between the transmission of x and the transmission of x+1, then it will be a long time until x can be recovered, under a NAK only protocol.On the other hand, if data is being sent often, then recovery under a NAK-only scheme could happen quickly. Moreover, if errors are infrequent, then NAKs are only occasionally sent (when needed), and ACK are never sent – a significant reduction in feedback in the NAK-only case over the ACK-only case.17. (P14) Consider the cross-country example shown in Figure . How big wouldthe window size have to be for the channel utilization to be greaterthan 80 percentAnswer:It takes 8 microseconds (or milliseconds) to send a packet. in order for the senderto be busy 90 percent of the time, we must have util = = / or n approximately 3377 packets.18. (P15) Consider a scenario in which Host A wants to simultaneously sendpackets to Host B and C. A is connected to B and C via a broadcastchannel—a packet sent by A is carried by the channel to both B and C.Suppose that the broadcast channel connecting A, B, and C canindependently lose and corrupt packets (and so, for example, a packetsent from A might be correctly received by B, but not by C). Design astop-and-wait-like error-control protocol for reliable transferringpackets from A to B and C, such that A will not get new data from theupper layer until it knows that B and C have correctly received thecurrent packet. Give FSM descriptions of A and C. (Hint: The FSM for Bshould be essentially be same as for C.) Also, give a description ofthe packet format(s) used.Answer:In our solution, the sender will wait until it receives an ACK for a pair of messages (seqnum and seqnum+1) before moving on to the next pair of messages. Data packets have a data field and carry a two-bit sequence number. That is, the valid sequence numbers are 0, 1, 2, and 3. (Note: you should think about why a 1-bit sequence number space of 0, 1 only would not work in the solution below.) ACK messages carry the sequence number of the data packet they are acknowledging.The FSM for the sender and receiver are shown in Figure 2. Note that the sender state records whether (i) no ACKs have been received for the current pair, (ii) an ACK for seqnum (only) has been received, or an ACK for seqnum+1 (only) has been received. In this figure, we assume that the seqnum is initially 0, and that the sender has sent the first two data messages (to get things going). 
A timeline trace for the sender and receiver recovering from a lost packet is shown below:Sender Receivermake pair (0,1)send packet 0Packet 0 dropssend packet 1receive packet 1buffer packet 1send ACK 1receive ACK 1(timeout)resend packet 0receive packet 0deliver pair (0,1)send ACK 0receive ACK 019. (P16) Consider a scenario in which Host A and Host B want to send messagesto Host C. Hosts A and C are connected by a channel that can lose andcorrupt (but not reorder)message. Hosts B and C are connected by anotherchannel (independent of the channel connecting A and C) with the sameproperties. The transport layer at Host C should alternate in deliveringmessages from A and B to the layer above (that is, it should firstdeliver the data from a packet from A, then the data from a packet fromB, and so on). Design a stop-and-wait-like error-control protocol forreliable transferring packets from A to B and C, with alternatingdelivery at C as described above. Give FSM descriptions of A and C.(Hint: The FSM for B should be essentially be same as for A.) Also, givea description of the packet format(s) used.Answer:This problem is a variation on the simple stop and wait protocol . Because the channel may lose messages and because the sender may resend a message that one of the receivers has already received (either because of a premature timeout or because the other receiver has yet to receive the data correctly), sequence numbers are needed. As in , a 0-bit sequence number will suffice here.The sender and receiver FSM are shown in Figure 3. In this problem, the sender state indicates whether the sender has received an ACK from B (only), from C (only) or from neither C nor B. The receiver state indicates which sequence number the receiver is waiting for.20. (P17) In the generic SR protocol that we studied in Section the sendertransmits a message as soon as it is available (if it is in the window)without waiting for an acknowledgment. Suppose now that we want an SRprotocol that sends messages two at a time. That is , the sender willsend a pair of messages and will send the next pair of messages onlywhen it knows that both messages in the first pair have been receivercorrectly.Suppose that the channel may lose messages but will not corrupt or reorder messages. Design an error-control protocol for theunidirectional reliable transfer of messages. Give an FSM descriptionof the sender and receiver. Describe the format of the packets sentbetween sender and receiver, and vice versa. If you use any procedurecalls other than those in Section (for example, udt_send(),start_timer(), rdt_rcv(), and so on) ,clearly state their actions. Givean example (a timeline trace of sender and receiver) showing how yourprotocol recovers from a lost packet.Answer:21. (P18) Consider the GBN protocol with a sender window size of 3 and a sequencenumber range of 1024. Suppose that at time t, the next in-order packetthat the receiver is expecting has a sequence number of k. Assume thatthe medium does not reorder messages. Answer the following questions:a. What are the possible sets of sequence number inside the sender’swindow at time t Justify your Answer.b .What are all possible values of the ACK field in all possible messagescurrently propagating back to the sender at time t Justify your Answer. Answer:a.Here we have a window size of N=3. Suppose the receiver has received packetk-1, and has ACKed that and all other preceeding packets. 
If all of these ACK's have been received by sender, then sender's window is [k, k+N-1].Suppose next that none of the ACKs have been received at the sender. In this second case, the sender's window contains k-1 and the N packets up to and including k-1. The sender's window is thus [k- N,k-1]. By these arguments, the senders window is of size 3 and begins somewhere in the range [k-N,k]. b.If the receiver is waiting for packet k, then it has received (and ACKed)packet k-1 and the N-1 packets before that. If none of those N ACKs have been yet received by the sender, then ACK messages with values of [k-N,k-1] may still be propagating back. Because the sender has sent packets [k-N, k-1], it must be the case that the sender has already received an ACK for k-N-1.Once the receiver has sent an ACK for k-N-1 it will never send an ACK that is less that k-N-1. Thus the range of in-flight ACK values can range from k-N-1 to k-1.22. (P19) Answer true or false to the following questions and briefly justifyyour Answer.a. With the SR protocol, it is possible for the sender to receive anACK for a packet that falls outside of its current window.b. With CBN, it is possible for the sender to receiver an ACK for apacket that falls outside of its current window.c. The alternating-bit protocol is the same as the SR protocol with asender and receiver window size of 1.d. The alternating-bit protocol is the same as the GBN protocol with asender and receiver window size of 1.Answer:a.True. Suppose the sender has a window size of 3 and sends packets 1, 2, 3 att0 . At t1 (t1 > t0) the receiver ACKS 1, 2, 3. At t2 (t2 > t1) the sender times out and resends 1, 2, 3. At t3 the receiver receives the duplicates and re-acknowledges 1, 2, 3. At t4 the sender receives the ACKs that the receiver sent at t1 and advances its window to 4, 5, 6. At t5 the sender receives the ACKs 1, 2, 3 the receiver sent at t2 . These ACKs are outside its window.b.True. By essentially the same scenario as in (a).c.True.d.True. Note that with a window size of 1, SR, GBN, and the alternating bitprotocol are functionally equivalent. The window size of 1 precludes the possibility of out-of-order packets (within the window). A cumulative ACK is just an ordinary ACK in this situation, since it can only refer to the single packet within the window.23. (Q4) Why is it that voice and video traffic is often sent over TCP ratherthan UDP in today’s Internet. (Hint: The Answer we are looking for hasnothing to do with TCP’s congestion-control mechanism. )Answer:Since most firewalls are configured to block UDP traffic, using TCP for video and voice traffic lets the traffic though the firewalls24. (Q5) Is it possible for an application to enjoy reliable data transfer evenwhen the application runs over UDP If so, howAnswer:Yes. The application developer can put reliable data transfer into the application layer protocol. This would require a significant amount of work and debugging, however.25. (Q6) Consider a TCP connection between Host A and Host B. Suppose that theTCP segments traveling form Host A to Host B have source port number xand destination port number y. What are the source and destination portnumber for the segments traveling form Host B to Host AAnswer:Source port number y and destination port number x.26. (P20) Suppose we have two network entities, A and B. 
B has a supply of datamessages that will be sent to A according to the following conventions.When A gets a request from the layer above to get the next data (D)message from B, A must send a request (R) message to B on the A-to-Bchannel. Only when B receives an R message can it send a data (D) messageback to A on the B-to-A channel. A should deliver exactly one copy ofeach D message to the layer above. R message can be lost (but notcorrupted) in the A-to-B channel; D messages, once sent, are alwaysdelivered correctly. The delay along both channels is unknown andvariable.Design(give an FSM description of) a protocol that incorporates the appropriate mechanisms to compensate for the loss-prone A-to-B channeland implements message passing to the layer above at entity A, asdiscussed above. Use only those mechanisms that are absolutely necessary.Answer:Because the A-to-B channel can lose request messages, A will need to timeout and retransmit its request messages (to be able to recover from loss). Because the channel delays are variable and unknown, it is possible that A will send duplicate requests ., resend a request message that has already been received by B). To be able to detect duplicate request messages, the protocol will use sequence numbers. A 1-bit sequence number will suffice for a stop-and-wait typeof request/response protocol.A (the requestor) has 4 states:•“Wait for Request 0 from above.” Here the requestor is waiting for a call from above to request a unit of data. When it receives a request from above, it sends a request message, R0, to B, starts a timer and makes a transition to the “Wait for D0” state. When in the “Wait for Request 0 from above” state, A ignores anything it receives from B.•“Wait for D0”. Here the requestor is waiting for a D0 data message from B.A timer is always running in this state. If the timer expires, A sends another R0 message, restarts the timer and remains in this state. If a D0 message is received from B, A stops the time and transits to the “Wait for Request 1 from above” state. If A receives a D1 data message while in this state, it is ignored.•“Wait for Request 1 from above.” Here the requestor is again waiting for a call from above to request a unit of data. When it receives a request from above, it sends a request message, R1, to B, starts a timer and makes a transition to the “Wait for D1” state. When in the “Wait for Request 1 from above” state, A ignores anything it receives from B.•“Wait for D1”. Here the requestor is waiting for a D1 data message from B.A timer is always running in this state. If the timer expires, A sends another R1 message, restarts the timer and remains in this state. If a D1 message is received from B, A stops the timer and transits to the “Wait for Request 0 f rom above” state. If A receives a D0 data message while in this state, it is ignored.The data supplier (B) has only two states:•“Send D0.” In this state, B continues to respond to received R0 messages by sending D0, and then remaining in this state. If B receives a R1 message, then。
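As a companion to the checksum questions above (P3 through P5), here is a small Python sketch of the 8-bit ones' complement sum those answers rely on. The test bytes are the ones used in the worked column addition quoted above (01010101, 01110000, 01001100). Note that strict ones' complement arithmetic folds the final carry back into the sum, so the last bits of the checksum printed here differ slightly from the figure in the excerpt, which appears to drop that carry.

```python
def ones_complement_sum_8bit(words):
    """Add 8-bit words in ones' complement arithmetic (any carry wraps around)."""
    total = 0
    for w in words:
        total += w
        while total > 0xFF:              # fold the carry back into the low 8 bits
            total = (total & 0xFF) + (total >> 8)
    return total

def checksum(words):
    """Sender side: the checksum is the ones' complement of the running sum."""
    return ~ones_complement_sum_8bit(words) & 0xFF

data = [0b01010101, 0b01110000, 0b01001100]   # bytes from the worked addition above
csum = checksum(data)
print(f"checksum = {csum:08b}")

# Receiver side: summing the data words together with the checksum must give 11111111;
# any other value signals a detected error (all 1-bit errors are caught,
# but some 2-bit errors are not).
print(ones_complement_sum_8bit(data + [csum]) == 0xFF)   # True
```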

CCNA English Homework 3


Logical topology
How the hosts use the network, no matter where they are physically located.
3.2 Principles of Communication
3.2.6 Message timing
Access methods
determine when someone is able to send a message.
Flow control
determines how much information can be sent and the speed at which it can be delivered.
3.1 Introduction to Networking
3.1.1 What is a network?
Networks provide the ability to connect people and equipment no matter where they are in the world. Today's networks are multi-purpose, converged information networks.

Plan, implement, and verify a local network.
Contents
3.1 Introduction to Networking
3.2 Principles of Communication
3.3 Communicating on a Local Wired Network
3.4 Building the Access Layer of an Ethernet Network
3.5 Building the Distribution Layer of a Network
3.6 Plan and Connect a Local Network
Lab: Learn about Packet Tracer

Broadband IP Network Design


Broadband IP Network Course Project: Kunming University of Science and Technology Backbone Network Design
School: Kunming University of Science and Technology; College: Faculty of Information Engineering and Automation; Major: Communications, class of 2011; Student: Yang Weizhang; Student ID: 201110404234; Supervisor: Shao Jianfei

Contents: Abstract; Preface; Main text; Chapter 1: 1.1 Campus network requirements analysis, 1.2 Campus network design approach, 1.2.1 Campus network design plan, 1.2.2 Backbone architecture design; Chapter 2: 2.1 Network topology between campuses, 2.2 Campus network topology diagram, 2.3 Device configuration, 2.3.1 Routers, 2.3.2 Switches; Chapter 3: 3.1 VLAN and IP address planning; Summary; References

Kunming University of Science and Technology Backbone Network Design
Abstract: This project is a backbone network design for Kunming University of Science and Technology. The network connects the Chenggong, Xinying, and Lianhua campuses and serves faculty, staff, and students. The campus network is designed on the basis of broadband IP networking; the design work is done in eNSP, and the systems run Linux.

Each campus internal network uses the three-layer model, dividing the campus network into a core layer, a distribution (aggregation) layer, and an access layer.

This design uses the Chenggong campus as the example.

Keywords: Kunming University of Science and Technology, backbone network, broadband IP network
Preface: The campus network is a broadband multimedia network that provides teaching, research, and comprehensive information services for the university's faculty and students.

Once built, the campus network will provide a high-performance, high-bandwidth, stable and reliable transmission environment.

It will provide strong support for large-capacity teaching resource libraries, courseware libraries, web, e-mail, FTP, BBS, video servers, database servers, streaming media, and various application platforms; implement office automation and document handling systems; provide information services for education and teaching; and also enable distance-education services.

Chapter 1
1.1 Campus network requirements analysis
1. The backbone uses a dual-core design: two core devices that back each other up.

2. The backbone connects to the Internet, and each subnet in turn connects to the backbone.

3. Several secondary aggregation centers are set up; the network center and each secondary center are connected with gigabit fiber.

4. Each application platform can be attached to the backbone as it is built, forming the application platform layer.

5. The backbone must be capable of high-speed operation.
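To make the VLAN and IP address planning step (Section 3.1 in the outline above) concrete, here is a minimal sketch of a per-building VLAN-to-subnet allocation using Python's ipaddress module. The 10.10.0.0/16 block, the VLAN numbers, and the building names are invented for illustration; they are not taken from the original design.

```python
import ipaddress

# Hypothetical campus block; each VLAN (one per building/function) receives a /24.
campus_block = ipaddress.ip_network("10.10.0.0/16")
subnets = campus_block.subnets(new_prefix=24)

vlans = {
    10: "Teaching building",
    20: "Library",
    30: "Dormitories",
    40: "Server farm",
    99: "Network management",
}

plan = {}
for vlan_id, name in vlans.items():
    net = next(subnets)
    plan[vlan_id] = {
        "name": name,
        "subnet": str(net),
        "gateway": str(next(net.hosts())),  # first usable address as the SVI/gateway
    }

for vlan_id, entry in plan.items():
    print(f"VLAN {vlan_id:>3} {entry['name']:<20} {entry['subnet']:<16} gw {entry['gateway']}")
```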

Industrial Networking and Connectivity


Datasheet excerpt: LED signal indicators and diagnostics for a Profibus-DP I/O module. The recoverable details are as follows.
Power status (US bus/sensor supply, UA actuator supply): the LED indicates the supply is OK when it is above 18 V; off or red indicates a power fault or under-voltage.
Communication status (BUS): green or flashing green indicates communication is OK; the LEDs also signal peripheral faults and sensor-supply short circuits.
Short-circuit and wire-break diagnostics are reported per port, and actuator status diagnostics are reported over Profibus-DP.
I/O port status/error indications: Input/Output OK, short circuit (sensor supply), actuator shutdown, actuator warning.
Connectors: standard I/O and IO-Link ports (M12, A-coded, female). Pin functions include +24 V DC sensor/module supply, 0 V, the input/output signal, a separate +24 V DC output (actuator) supply, and functional earth (FE).

Basic Workflow for Planning and Designing a Company Warehouse Network


When planning and designing a network for a company warehouse, several key steps need to be taken into consideration to ensure a successful and efficient layout.

First and foremost, it is crucial to assess the specific needs and requirements of the warehouse. This involves understanding the inventory management system, the types of products being stored, and the frequency of movement within the warehouse.

Once the requirements have been identified, the next step is to conduct a thorough site survey to assess the existing infrastructure and identify any potential challenges or limitations. This may include evaluating the physical layout of the warehouse, the location of existing network equipment, and the availability of power and connectivity options.

A Brief Overview of the Network Structure Design and Build Process


Designing and creating a network structure is a complex and crucial process that involves a great deal of planning and attention to detail. The first step is to determine the specific needs and requirements of the network. This involves conducting a thorough analysis of the organization's goals, the number of users, the types of devices that will be connected, and the expected network traffic.

After gathering all the necessary information, the next step is to create a detailed network design that outlines the layout of the network, including its physical and logical components. This design should take into consideration factors such as scalability, security, and reliability, so that the network can accommodate future growth and protect sensitive information.

Computer Networks and Internets courseware (Lin Kunhui)

Any pair of computers on the extended LAN can communicate; the computers do not know whether a repeater separates them.
The repeater attaches directly to the Ethernet cables and sends copies of the electrical signals from one cable to the other without waiting for a complete frame.
11.1 Introduction
Each LAN technology is designed for a specific combination of speed, distance, and cost. The designer specifies a maximum distance that the LAN can span,with typical LANs designed to span a few hundred meters.
11.4 Repeaters (中继器)
A repeater is usually an analog electronic device that continuously monitors electrical signals on each cable. When it senses a signal on one cable, the repeater transmits an amplified copy on the other cable.

Network Design, Part 7: Physical Network Design

Network Design
Chapter 7: Physical Network Design
University of Science and Technology of China, School of Network Education; Li Yi
Chapter 1: Overview
Chapter 2: User Requirements Analysis
Chapter 3: Analysis of the Existing Network
Chapter 4: Logical Network Design
Chapter 5: Network Equipment Selection
Chapter 6: WAN Access Design
Chapter 7: Network Media Design
Chapter 8: Network Design Case Studies
7.1 Structured Cabling Overview
7.2 Copper Cabling
7.3 Optical Fiber Cabling
7.4 Wireless Media
7.5 Writing the Physical Design Document
The administration (telecommunications room) subsystem connects the vertical backbone subsystem and the horizontal cabling subsystem.
Equipment: patch panels, switches, routers, racks, regulated power supplies, UPS.

Design considerations:
The number of pairs on the patch panels can be determined by the number of information outlets being managed;
Use the patch panels' jumper (cross-connect) capability to keep the cabling system flexible;
The patch-panel installation generally consists of fiber distribution boxes and copper patch panels;
The administration subsystem should have enough space for the patch panels and network equipment;
Locations housing switching equipment must be provided with dedicated regulated power supplies and a UPS.
PSNEXT (Power Sum Near-End Cross-talk): power-sum near-end crosstalk describes the combined effect, measured at the near end, that three of the four pairs in a cable have on the remaining pair while they are carrying signals.
ELFEXT (Equal Level Far-End Cross-talk): equal-level far-end crosstalk is the effect that the interfering signal from a transmitting pair has on an adjacent pair, measured at the far end.
… will be detected as errors. Crosstalk is classified into near-end crosstalk (NEXT), far-end crosstalk (FEXT), and power-sum crosstalk.
NEXT (Near-End Cross-talk): as an electrical signal travels along a cable and its connectors, an electromagnetic field forms around the conductors and radiates into adjacent pairs, interfering with their signal transmission. Near-end crosstalk characterizes this interference between the transmitting pair and a receiving pair at the same (near) end.

LAN_Chapter_03


Chapter 3. Bridging TechnologiesThis chapter covers the following key topics:∙Transparent Bridging—This section explains the five main processes of transparent bridging. These includeForwarding, Flooding, Filtering, Learning and Aging.∙Switching Modes—Various switching modes such as store-and-forward, cut through and others arecompared and contrasted.∙Token Ring Bridging—Different methods exist for bridging Token Ring. This section describes your options.∙Token Ring Switching—Token Ring switching provides many of the benefits found in Ethernet switching. Thissection discusses rules for Token Ring switching in a Catalyst environment ∙Ethernet or Token Ring—Some users are installing new network systems and do not know which to use. Thissection provides some thoughts for making the selection∙Migrating Token Ring to Ethernet—Administrators frequently elect to replace legacy Token Ring systems with Fast Ethernet switched solutions. This section offers suggestions of things to consider in such anupgrade.Although various internetworking devices exist for segmenting networks, Layer 2 LAN switches use bridge internetworking technology to create smaller collision domains. Chapter 2, "Segmenting LANs," discussed how bridges segment collision domains. But bridges do far more than segment collision domains: they protect networks from unwanted unicast traffic and eliminate active loops which otherwise inhibit network operations. How they do this differs for Ethernet and Token Ring networks. Ethernet employs transparent bridging to forward traffic and Spanning Tree to control loops. Token Ring typically uses a process called source-route bridging. This chapter describes transparent bridging, source-route bridging (along with some variations), and Layer 2 LAN switching. Chapter 6 covers Spanning Tree for Ethernet.Transparent BridgingAs discussed in Chapter 2, networks are segmented to provide more bandwidth per user. Bridges provide more user bandwidth by reducing the number of devices contending for the segment bandwidth. But bridges also provide additional bandwidth by controlling data flow in a network. Bridges forward traffic only to the interface(s) that need to receive the traffic. In the caseof known unicast traffic, bridges forward the traffic to a single port rather than to all ports. Why consume bandwidth on a segment where the intended destination does not exist?Transparent bridging, defined in IEEE 802.1d documents, describes five bridging processes for determining what to do with a frame. The processes are as follows:1. Learning2. Flooding3. Filtering4. Forwarding5. AgingFigure 3-1 illustrates the five processes involved in transparent bridging.Figure 3-1 Transparent Bridge Flow ChartWhen a frame enters the transparent bridge, the bridge adds the source Ethernet MAC address (SA) and source port to its bridging table. If the source address already exists in the table, the bridge updates the aging timer. The bridge examines the destination MAC address (DA). If the DA is a broadcast, multicast, or unknown unicast, the bridge floods the frame out all bridge ports in the Spanning Tree forwarding state, except for the source port. If the destination address and source address are on the same interface, the bridge discards (filters) the frame. 
Otherwise, the bridge forwards the frame out the interface where the destination is known in its bridging table.The sections that follow address in greater detail each of the five transparent bridging processes.LearningEach bridge has a table that records all of the workstations that the bridge knows about on every interface. Specifically, the bridge records the source MAC address and the source port in the table whenever the bridge sees a frame from a device. This is the bridge learningprocess. Bridges learn only unicast source addresses. A station never generates a frame with a broadcast or multicast source address. Bridges learn source MAC addresses in order to intelligently send data to appropriate destination segments. When the bridge receives a frame, it references the table to determine on what port the destination MAC address exists. The bridge uses the information in the table to either filter the traffic (if the source and destination are on the same interface) or to send the frame out of the appropriate interface(s).But when a bridge is first turned on, the table contains no entries. Assume that the bridges in Figure 3-2 were all recently powered "ON," and no station had yet transmitted. Therefore, the tables in all four bridges are empty. Now assume that Station 1 transmits a unicast frame to Station 2. All the stations on that segment, including the bridge, receive the frame because of the shared media nature of the segment. Bridge A learns that Station 1 exists off of port A.1 by looking at the source address in the data link frame header. Bridge A enters the source MAC address and bridge port in the table.Figure 3-2 Sample Bridged NetworkFloodingContinuing with Figure 3-2, when Station 1 transmits, Bridge A also looks at the destination address in the data link header to see if it has an entry in the table. At this point, Bridge A only knows about Station 1. When a bridge receives a unicast frame (a frame targeting a single destination), no table entry exists for the DA, the bridge receives an unknown unicast frame.The bridging rules state that a bridge must send an unknown unicast frame out all forwarding interfaces except for the source interface. This is known as flooding. Therefore, Bridge A floods the frame out all interfaces, even though Stations 1 and 2 are on the same interface. Bridge B receives the frame and goes through the same process as Bridge A of learning and flooding. Bridge B floods the frame to Bridges C and D, and they learn and flood. Now the bridging tables look like Table 3-1. The bridges do not know about Station 2 because it did not yet transmit.Still considering Figure 3-2, all the bridges in the network have an entry for Station 1 associated with an interface, pointing toward Station 1. The bridge tables indicate the relative location of a station to the port. Examining Bridge C's table, an entry for Station 1 is associated with port C.1. This does not mean Station 1 directly attaches to C.1. It merely reflects that Bridge C heard from Station 1 on this port.In addition to flooding unknown unicast frames, legacy bridges flood two other frame types: broadcast and multicast. Many multimedia network applications generate broadcast or multicast frames that propagate throughout a bridged network (broadcast domain). As the number of participants in multimedia services increases, more broadcast/multicast frames consume network bandwidth. 
Chapter 13, "Multicast and Broadcast Services," discusses ways of controlling multicast and broadcast traffic flows in a Catalyst-based network.FilteringWhat happens when Station 2 in Figure 3-2 responds to Station 1? All stations on the segment off port A.1, including Bridge A, receive the frame. Bridge A learns about the presence of Station 2 and adds its MAC address to the bridge table along with the port identifier (A.1). Bridge A also looks at the destination MAC address to determine where to send the frame. Bridge A knows Station 1 and Station 2 exist on the same port. It concludes that it does not need to send the frame anywhere. Therefore, Bridge A filters the frame. Filtering occurs when the source and destination reside on the same interface. Bridge A could send the frame out other interfaces, but because this wastes bandwidth on the other segments, the bridging algorithm specifies to discard the frame. Note that only Bridge A knows about the existence of Station 2 because no frame from this station ever crossed the bridge.ForwardingIf in Figure 3-2, Station 2 sends a frame to Station 6, the bridges flood the frame because no entry exists for Station 6. All the bridges learn Station 2's MAC address and relative location. When Station 6 responds to Station 2, Bridge D examines its bridging table and sees that to reach Station 2, it must forward the frame out interface D.1. A bridge forwards a frame when the destination address is a known unicast address (it has an entry in the bridging table) and the source and destination are on different interfaces. The frame reaches Bridge B, which forwards it out interface B.1. Bridge A receives the frame and forwards it out A.1. Only Bridges A, B, and D learn about Station 6. Table 3-2 shows the current bridge tables.AgingWhen a bridge learns a source address, it time stamps the entry. Every time the bridge sees a frame from that source, the bridge updates the timestamp. If the bridge does not hear from that source before an aging timer expires, the bridge removes the entry from the table. The network administrator can modify the aging timer from the default of five minutes.Why remove an entry? Bridges have a finite amount of memory, limiting the number of addresses it can remember in its bridging tables. For example, higher end bridges can remember upwards of 16,000 addresses, while some of the lower-end units may remember as few as 4,096. But what happens if all 16,000 spaces are full in a bridge, but there are 16,001 devices? The bridge floods all frames from station 16,001 until an opening in the bridge table allows the bridge to learn about the station. Entries become available whenever the aging timer expires for an address. The aging timer helps to limit flooding by remembering the most active stations in the network. If you have fewer devices than the bridge table size, you could increase the aging timer. This causes the bridge to remember the station longer and reduces flooding.Bridges also use aging to accommodate station moves. In Table 3-2, the bridges know the location of Stations 1, 2, and 6. If you move Station 6 to another location, devices may not be able to reach Station 6. For example, if Station 6 relocates to C.2 and Station 1 transmits to Station 6, the frame never reaches Station 6. Bridge A forwards the frame to Bridge B, but Bridge B still thinks Station 6 is located on port B.3. Aging allows the bridges to "forget" Station 6's entry. 
After Bridge B ages the Station 6 entry, Bridge B floods the frames destined to Station 6 until Bridge B learns the new location. On the other hand, if Station 6 initiates the transmission to Station 1, then the bridges immediately learn the new location of Station 6. If you set the aging timer to a high value, this may cause reachability issues in stations within the network before the timer expires.The Catalyst screen capture in Example 3-1 shows a bridge table example. This Catalyst knows about nine devices (see bolded line) on nine interfaces. Each Catalyst learns about each device on one and only one interface.Example 3-1 Catalyst 5000 Bridging TableConsole> (enable) show cam dynamic VLAN Dest MAC/Route Des Destination Ports or VCs ---- ------------------ ---------------------------------------------------- 2 00-90-ab-16-60-20 3/4 1 00-90-ab-16-b0-20 3/10 2 00-90-ab-16-4c-20 3/2 1 00-60-97-8f-4f-86 3/23 1 00-10-07-3b-5b-00 3/17 1 00-90-ab-16-50-20 3/91 3 00-90-ab-16-54-20 3/1 1 00-90-92-bf-74-00 3/18 1 00-90-ab-15-d0-10 3/3 Total Matching CAM Entries Displayed = 9 Console> (enable)The bridge tables discussed so far contain two columns: the MAC address and the relative port location. These are seen in columns two and three in Example 3-1, respectively. But this table has an additional column. The first column indicates the VLAN to which the MAC address belongs. A MAC address belongs to only one VLAN at a time. Chapter 5, "VLANs," describes VLANs and why this is so.Switching ModesChapter 2 discusses the differences between a bridge and a switch. Cisco identifies the Catalyst as a LAN switch; a switch is a more complex bridge. The switch can be configured to behave as multiple bridges by defining internal virtual bridges (i.e., VLANs). Each virtual bridge defines a new broadcast domain because no internal connection exists between them. Broadcasts for one virtual bridge are not seen by any other. Only routers (either external or internal) should connect broadcast domains together. Using a bridge to interconnect broadcast domains merges the domains and creates one giant domain. This defeats the reason for having individual broadcast domains in the first place.Switches make forwarding decisions the same as a transparent bridge. But vendors have different switching modes available to determine when to switch a frame. Three modes in particular dominate the industry: store-and-forward, cut-through, and fragment-free. Figure 3-3 illustrates the trigger point for the three methods.Figure 3-3 Switching Mode Trigger PointsEach has advantages and trade offs, discussed in the sections that follow. As a result of the different trigger points, the effective differences between the modes are in error handling and latency. Table 3-3 compares the approaches and shows which members of the Catalyst family use the available modes. The table summarizes how each mode handles frames containing errors, and the associated latency characteristics.Table 3-3. Switching Mode ComparisonSwitching Mode Errored Frame Handling Latency CAT Member* Store-and-forward Drop*Variable 5500, 5000,3920, 3900,One of the objectives of switching is to provide more bandwidth to the user. Each port on a switch defines a new collision domain that offers full media bandwidth. If only one station attaches to an interface, that station has full dedicated bandwidth and does not need to share it with any other device. 
All the switching modes defined in the sections that follow support the dedicated bandwidth aspect of switching.

Tip

To determine the best mode for your network, consider the latency requirements for your applications and your network reliability. Do your network components or cabling infrastructure generate errors? If so, fix your network problems and use store-and-forward. Can your applications tolerate the additional latency of store-and-forward switching? If not, use cut-through switching. Note that you must use store-and-forward with the Cat 5000 and 6000 families of switches. This is acceptable because latency is rarely an issue, especially with high-speed links and processors and modern windowing protocols. Finally, if the source and destination segments are different media types, you must use store-and-forward mode.

Store-and-Forward Switching

The store-and-forward switching mode receives the entire frame before beginning the switching process. When it receives the complete frame, the switch examines it for the source and destination addresses and any errors it may contain, and then possibly applies any special filters created by the network administrator to modify the default forwarding behavior. If the switch observes any errors in the frame, the switch discards it, preventing errored frames from consuming bandwidth on the destination segment. If your network experiences a high rate of frame alignment or FCS errors, the store-and-forward switching mode may be best. The absolute best solution, however, is to fix the cause of the errors; using store-and-forward in this case is simply a bandage, not the fix.

If your source and destination segments use different media, then you must use this mode. Different media often have issues when transferring data; the section "Source-Route Translation Bridging" discusses some of these issues. Store-and-forward mode is necessary to resolve this problem in a bridged environment.

Because the switch must receive the entire frame before it can start to forward, transfer latency varies with frame size. In a 10BaseT network, for example, the minimum frame of 64 octets takes 51.2 microseconds to receive. At the other extreme, a 1518-octet frame requires at least 1.2 milliseconds. Latency for 100BaseX (Fast Ethernet) networks is one-tenth the 10BaseT numbers.

Cut-Through Switching

Cut-through mode enables a switch to start the forwarding process as soon as it receives the destination address. This reduces latency to the time necessary to receive the six-octet destination address: 4.8 microseconds. But cut-through cannot check for errored frames before it forwards them. Errored frames pass through the switch and consequently waste bandwidth; the receiving device discards them.

As network and internal processor speeds increase, the latency issues become less relevant. In high-speed environments, the time to receive and process a frame shrinks significantly, minimizing the advantages of cut-through mode. Store-and-forward, therefore, is an attractive choice for most networks.

Some switches support both cut-through and store-and-forward modes. Such switches usually contain a third mode called adaptive cut-through. These multimodal switches use cut-through as the default switching mode and selectively activate store-and-forward. The switches monitor frames as they pass through, looking for errors. Although the switch cannot stop an errored frame, it counts how many it sees.
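The latency figures quoted above are just serialization delays: the time to clock in the octets each mode waits for before acting. A quick check, assuming 10 Mbps (one bit every 0.1 microsecond) and 100 Mbps links:

def wait_before_switching_us(octets, link_mbps):
    """Microseconds a switch waits before it can act, given the octets it must receive first."""
    return octets * 8 / link_mbps                 # bits divided by bits-per-microsecond

modes = [("cut-through (6-octet destination address)", 6),
         ("fragment-free (first 64 octets)", 64),
         ("store-and-forward, 64-octet frame", 64),
         ("store-and-forward, 1518-octet frame", 1518)]

for name, octets in modes:
    print(f"{name:42s} 10BaseT: {wait_before_switching_us(octets, 10):7.1f} us   "
          f"100BaseX: {wait_before_switching_us(octets, 100):6.2f} us")

Running this reproduces the 4.8 microsecond, 51.2 microsecond, and roughly 1.2 millisecond figures used in the text, and the one-tenth values for Fast Ethernet.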
If such an adaptive switch observes that too many frames contain errors, it automatically activates the store-and-forward mode. This is often known as adaptive cut-through. It has the advantage of providing low latency while the network operates well, while providing automatic protection for the outbound segment if the inbound segment experiences problems.

Fragment-Free Switching

Another alternative offers some of the advantages of both cut-through and store-and-forward switching. Fragment-free switching behaves like cut-through in that it does not wait for an entire frame before forwarding. Rather, fragment-free forwards a frame after it receives the first 64 octets of the frame (this is longer than the six octets needed for cut-through and therefore has higher latency). Fragment-free switching protects the destination segment from fragments, an artifact of half-duplex Ethernet collisions. In a correctly designed Ethernet system, devices detect a collision before the source finishes transmitting the 64-octet minimum frame (this is driven by the slotTime described in Chapter 1). When a collision occurs, a fragment (a frame less than 64 octets long) is created. This is a useless Ethernet frame, and in store-and-forward mode it is discarded by the switch. In contrast, a cut-through switch forwards the fragment if at least a destination address exists. Because collisions must occur during the first 64 octets, and because most frame errors show up in these octets, the fragment-free mode can detect most bad frames and discard them rather than forward them. Fragment-free has a higher latency than cut-through, however, because it must wait for an additional 58 octets before forwarding the frame. As described in the section on cut-through switching, the advantages of fragment-free switching are minimal given today's higher network speeds and faster switch processors.

Token Ring Bridging

When IBM introduced Token Ring, they described an alternative bridging technique called source-route bridging. Although transparent bridging, as discussed in the previous section, works in a Token Ring environment, IBM networks have unique situations in which transparent bridging creates some obstacles. An example is shown in a later section, "Source-Route Transparent Bridging." Source-route bridging, on the other hand, overcomes these limitations.

Many networks have a combination of transparent and source-route bridged devices. The industry developed source-route transparent bridging for hybrid networks, allowing them to coexist. However, the source-route devices cannot inherently communicate with the transparent devices. Source-route translation bridging, yet another bridging method, offers some hope of mixed-media communication. The sections that follow present all three Token Ring bridging methods (source-route, source-route transparent, and source-route translational). The switching aspects of Token Ring networks are described later in the section "Token Ring Switching."

Source-Route Bridging

In a Token Ring environment, rings interconnect with bridges. Each ring and bridge has a numeric identifier. The network administrator assigns the values and must follow several rules. Typically, each ring is uniquely identified within the bridged network with a value between 1 and 4095. (It is possible to have duplicate ring numbers, as long as the rings do not attach to the same bridge.) Valid bridge identifiers are 1 through 15 and must be unique to the local and target rings.
A ring cannot have two bridges with the same bridge number. Source devices use ring and bridge numbers to specify the path that a frame travels through the bridged network. Figure 3-4 illustrates a source-route bridging (SRB) network with several attached workstations.

Figure 3-4 A Source-Route Bridged Network

When Station A wants to communicate with Station B, Station A first sends a test frame to determine whether the destination is on the same ring as the source. If Station B responds to the test frame, the source knows that they are both on the same ring. The two stations communicate without involving any Token Ring bridges.

If, however, the source receives no response to the test frame, the source attempts to reach the destination on other rings. But the frame must now traverse a bridge. In order to pass through a bridge, the frame includes a routing information field (RIF). One bit in the frame header signals bridges that a RIF is present and needs to be examined by the bridge. This bit, called the routing information indicator (RII), is set to "zero" when the source and destination are on the same ring; otherwise, it is set to "one."

Most importantly, the RIF tells the bridge how to send the frame toward the destination. When the source first attempts to contact the destination, the RIF is empty because the source does not yet know a path to the destination. To complete the RIF, the source sends an all routes explorer (ARE) frame (it is also possible to use something called a Spanning Tree Explorer [STE]). The ARE passes through all bridges and all rings. As it passes through a bridge, the bridge inserts the local ring and bridge number into the RIF. If, in Figure 3-5, Station A sends an ARE to find the best path to reach Station D, Station D receives two AREs. The RIFs look like the following:

Ring100 - Bridge1 - Ring200 - Bridge2 - Ring300
Ring100 - Bridge1 - Ring400 - Bridge3 - Ring300

Each ring in the network, except for ring 100, sees two AREs. For example, the stations on ring 200 receive two AREs that look like the following:

Ring100-Bridge1-Ring200
Ring100-Bridge1-Ring400-Bridge3-Ring300-Bridge2-Ring200

The AREs on ring 200 are useless for this session and unnecessarily consume bandwidth. As the Token Ring network gets more complex, with many rings interconnected in a mesh design, the quantity of AREs in the network increases dramatically.

Note

A Catalyst feature, all routes explorer reduction, ensures that AREs don't overwhelm the network. It conserves bandwidth by reducing the number of explorer frames in the network.

Station D returns every ARE it receives to the source. The source uses the responses to determine the best path to the destination. What is the best path? The SRB standard does not specify which response to use, but it does provide some recommendations. The source could do any of the following:

∙Use the first response it receives
∙Use the path with the fewest hops
∙Use the path with the largest MTU
∙Use a combination of criteria

Most Token Ring implementations use the first option.

Now that Station A knows how to reach Station D, Station A transmits each frame as a specifically routed frame, in which the RIF specifies the ring/bridge hops to the destination. When a bridge receives the frame, the bridge examines the RIF to determine whether it has any responsibility to forward the frame. If more than one bridge attaches to ring 100, only one of them forwards the specifically routed frame. The other bridge(s) discard it.
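The explorer process can be pictured as a walk over a ring-and-bridge graph in which each bridge appends its hop to the RIF. The sketch below uses the ring and bridge numbers from the example RIFs above; it only illustrates how candidate RIFs accumulate and ignores real SRB frame formats and timing.

# Hypothetical topology taken from the example RIFs above: each bridge and the rings it joins.
bridges = {"Bridge1": {"Ring100", "Ring200", "Ring400"},
           "Bridge2": {"Ring200", "Ring300"},
           "Bridge3": {"Ring400", "Ring300"}}

def explore(ring, target, rif):
    """Flood an all-routes explorer, appending ring/bridge hops to the RIF as it spreads."""
    if ring == target:
        return [rif]
    found = []
    for bridge, attached in bridges.items():
        if ring in attached and bridge not in rif:            # never cross the same bridge twice
            for next_ring in attached - {ring}:
                if next_ring not in rif:                      # never revisit a ring
                    found += explore(next_ring, target, rif + [bridge, next_ring])
    return found

for path in explore("Ring100", "Ring300", ["Ring100"]):
    print(" - ".join(path))
# Prints, in either order:
#   Ring100 - Bridge1 - Ring200 - Bridge2 - Ring300
#   Ring100 - Bridge1 - Ring400 - Bridge3 - Ring300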
Station D uses the information in the RIF when it transmits back to Station A. Station D creates a frame with the RIF completed in reverse. The source and destination use the same path in both directions.

Note that transparent bridging differs from SRB in significant ways. First, in SRB, the source device determines what path the frame must follow to reach the destination; in transparent bridging, the bridge determines the path. Second, the information used to determine the path differs: SRB uses bridge/ring identifiers, and transparent bridging uses destination MAC addresses.

Source-Route Transparent Bridging

Although many Token Ring networks start out as homogeneous systems, transparently bridged Ethernet works its way into many of these networks. As a result, network administrators face hybrid systems and must support both source-route bridged devices and transparently bridged devices. Unfortunately, the source-route bridged devices cannot communicate with the transparently bridged Ethernet devices. Non-source-routed devices do not understand RIFs, SREs, or any other such frames. To further confuse the issue, some Token Ring protocols run in transparent mode, a process typically associated with Ethernet.

Note

Source-route translational bridging (SR/TLB), described in the next section, can overcome some of the limitations of source-route transparent bridging (SRT). The best solution, though, is to use a router to interconnect routed protocols residing on mixed media.

Source-route transparent bridging (SRT) supports both source-route and transparent bridging for Token Ring devices. The SRT bridge uses the RII bit to determine the correct bridging mode. If the bridge sees a frame with the RII set to "zero," the SRT bridge treats the frame using transparent bridging methods: it looks at the destination MAC address and determines whether to forward, flood, or filter the frame. If the frame contains a RIF (the RII bit set to "one"), the bridge initiates source-route bridging and uses the RIF to forward the frame. Table 3-4 compares how SRT and SRB bridges react to RII values.

This behavior causes problems in some IBM environments. Whenever an IBM Token Ring attached device wants to connect to another, it first issues a test frame to see whether the destination resides on the same ring as the source. If the source receives no response, it sends an SRB explorer frame.

The SRT deficiency occurs with the test frame. The source intends for the test frame to remain local to its ring and sets the RII to "zero." An RII set to "zero," however, signals the SRT bridge to transparently bridge the frame, so the bridge floods the frame to all rings. After the test frame reaches the destination, the source and destination workstations communicate using transparent bridging methods as if they both reside on the same ring. Although this is functional, transparent bridging does not take advantage of parallel paths the way source-route bridging can. Administrators often create parallel Token Ring backbones to distribute traffic and avoid overburdening any single link. But transparent bridging selects a single path and does not use another link unless the primary link fails. (This is an aspect of the Spanning-Tree Protocol described in Chapter 6, "Understanding Spanning Tree," and Chapter 7, "Advanced Spanning Tree.") Therefore, all the traffic passes through the same links, increasing the load on one while another remains unused. This defeats the intent of the parallel Token Rings.

Another IBM operational aspect makes SRT unsuitable.
To achieve high levels of service availability, some administrators install redundant devices, such as a 3745 controller, as illustrated in Figure 3-5.

Figure 3-5 Redundant Token Ring Controllers
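Returning to the SRT versus SRB comparison summarized in Table 3-4: the decision boils down to a single test on the RII bit. The sketch below states it compactly, assuming the usual behavior that a pure SRB bridge does not forward frames lacking a RIF; the frame fields are hypothetical.

def handle_frame(rii, bridge_type):
    """How an SRB-only bridge and an SRT bridge react to the RII bit (per Table 3-4)."""
    if rii == 1:
        return "source-route the frame using its RIF"
    if bridge_type == "SRB":
        return "ignore the frame (a pure SRB bridge acts only on RIF frames)"
    return "transparently bridge on the destination MAC address"

# An IBM test frame meant to stay on the local ring carries RII = 0,
# which is exactly the case an SRT bridge floods transparently.
print(handle_frame(0, "SRT"))
print(handle_frame(0, "SRB"))
print(handle_frame(1, "SRT"))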

Network Engineering Design and System Integration (Yang Wei), Chapter 3

3.1.1 EIA/TIA-568A Commercial Building Telecommunications Cabling Standard

3. Topology recommended by EIA/TIA-568A
The recommended topology is a hierarchical star built around the backbone, as shown in the figure below.

4. EIA/TIA-568A cabling media by subsystem

Horizontal subsystem: Category 5e/6 4-pair twisted-pair cable; 62.5 um or 50 um multimode fiber; <10 um single-mode fiber.

Backbone (riser) subsystem:
  Class C: Category 3 (high pair-count) cable
  Class D/E/F: Category 5e/6/7 (4-pair) cable
  Optical fiber: 62.5 um or 50 um multimode; <10 um single-mode

Campus (inter-building) subsystem:
  Class C: Category 3 (outdoor high pair-count) cable
  Optical fiber: 62.5 um or 50 um multimode; <10 um single-mode
3.1.2 The ISO/IEC IS 11801 Standard

(3) Telecommunications outlets. ISO 11801 allows 2-pair outlets to be used, but imposes no requirements on a specific outlet design or pair count.

Telecommunications outlet attenuation limits:

Frequency (MHz)   Attenuation (dB), 100-120 ohm   Attenuation (dB), 150 ohm
1                 0.1                              0.05
4                 0.1                              0.05
10                0.1                              0.10
16                0.2                              0.15
20                0.2                              0.15
Chapter 3: Structured Cabling Technology and Engineering Design

Key topics in this chapter:
- Structured cabling system standards
- Structured cabling system design and installation
- Test items and standards for twisted-pair and optical-fiber cabling, and causes of failed Category 5 UTP tests
- Power protection and the use of a UPS

LAN Technologies And Network Topology

Introduction
Although long-distance communication is necessary (which is why the preceding chapters covered it), many networks are local in scope, and small-network communication is designed for sharing resources. A LAN does not use a separate modem and cable for each pair of machines; it uses a shared medium, so stations must take turns using it. This part covers the related network topologies and examples; the next three chapters continue with further details.

Another Example Bus Network: LocalTalk
Designed by Apple for the Macintosh, which included LocalTalk attachment hardware; adapters were also available for computers from other vendors. Although LocalTalk is a bus, it uses a form of CSMA/CA.

Carrier Sense on Multi-Access Networks (CSMA): Ethernet's coordination mechanism for transmission
There is no centralized controller telling each computer when to take its turn on the shared cable; instead, every computer participates in a distributed coordination scheme called CSMA, using the electrical activity on the cable to determine the state of the network.
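As a rough illustration of this distributed coordination, the toy loop below models one CSMA/CD decision per attempt: defer while the carrier is sensed, transmit when idle, and back off a random number of slot times after a collision. It is a conceptual sketch with made-up probabilities, not a faithful Ethernet simulation.

import random

def csma_cd_attempt(station, carrier_sensed, attempt, collision_chance=0.2):
    """One carrier-sense / collision-detect decision with binary exponential backoff."""
    if carrier_sensed:
        return f"{station}: carrier sensed, defer and try again later"
    if random.random() < collision_chance:                       # another station transmitted at the same time
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)     # binary exponential backoff
        return f"{station}: collision detected, back off {slots} slot time(s)"
    return f"{station}: frame transmitted successfully"

for attempt in range(1, 4):
    print(csma_cd_attempt("Station 1", carrier_sensed=False, attempt=attempt))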
Token Ring
- A station that wants to transmit must first obtain permission (the token). Once it has the token, it temporarily controls the entire ring.
- The transmitted frame travels from the sending station, one station at a time, all the way around the ring until it returns to the original sender.
- Every station examines each frame that passes by; if the frame is addressed to it, the station copies the frame and still passes it on.
- The Token Ring hardware coordinates all attached computers to ensure that each station gets its turn.

Why CSMA/CD does not work for wireless (the hidden-station problem)
Reason: wireless transmission energy carries only a short distance, and metal obstacles block the signal, so stations that are too far apart cannot hear each other's transmissions. Example: station 1 is sending to station 2, but station 3 cannot detect station 1's signal, so carrier sense fails; station 3 also transmits to station 2, a collision results, and neither station 1 nor station 3 can detect it.

Panduit Integrated Network Zone System Application Note


Deploying Zone Network Topology

A highly effective way to deploy EtherNet/IP*** solutions in a plant floor environment is to follow the recommendations from Rockwell Automation* and Cisco**** in their Converged Plantwide Ethernet (CPwE) design and implementation guide. In this guide the use of zoned network topology is recommended as a best practice.

Using Panduit's Network Zone System integrated with Stratix** switches from Rockwell Automation* provides a platform for implementing small VLANs for Cell/Area Zones, as recommended in the CPwE, to improve manageability and limit Layer 2 broadcast domains‡.

The zone enclosures allow all cables within a Cell/Area Zone to be managed and patched within a single enclosure. By using a zone cabling architecture approach, network cabling becomes easier to locate, manage, and maintain because each additional device is routed within the same pathways and enclosures. Managed cabling helps reduce the number of home runs throughout a facility and helps reduce abandoned cable in plenum spaces, helping make the workplace run more efficiently and safely.

Following a zone topology allows a highly scalable and flexible deployment using building blocks of the CPwE architecture.

‡ For additional reference see the Panduit Popular Configuration Drawing "Overview of Industrial Switch Deployment" PCD0001.

Leveraging the Rockwell Automation* Integrated Architecture** technology simplifies switch configuration and integration while providing machine-based diagnostics and troubleshooting tools. Stratix** Industrial Ethernet Switches also leverage the Cisco**** operating system, bringing the world's leading switching technology to the plant floor environment. Advanced switch feature sets include: STP/RSTP/MST, Resilient Ethernet Protocol (REP), Flexlinks, EtherChannel (link aggregation), QoS, IGMP snooping with querier, VLANs with trunking, IEEE 1588 PTP v2, IPv6 support, storm control with alarming, and egress traffic shaping; compatible with Cisco**** Network Assistant (CNA) and CiscoWorks.

*Rockwell Automation and Allen-Bradley are registered by Rockwell Automation in the Patent and Trademark Office of the United States.
**Stratix, Stratix 8000, Stratix 5700 and Integrated Architecture are trademarks of Rockwell Automation.
***EtherNet/IP is a trademark of ODVA.
****Cisco is a registered trademark of Cisco Technology Inc.

Integrated Network Zone System: configuration options

NEMA 4X Enclosure: for 316 Stainless Steel, substitute "S" with "6". For example: Z23N-6BAAD5.
Shielded Connectivity: for deployments requiring shielded copper cabling, add an "S" to the end of the part number. For example: Z23N-SBAAD5S. These systems include shielded Category 6 jacks and shielded patch cables.
Singlemode Uplinks: for singlemode fiber optic uplinks substitute "D" for "E". For example: Z23N-SBAAE5. These systems include two SM SFP transceivers (1783-SFP1GLX) and two SM fiber optic patch cables per switch.
UPS: for systems without a UPS, change the "5" in the part number to a "4". For example: Z23N-6BAAD5 without a UPS is Z23N-6BAAD4.

Dimensions are in inches. [Dimensions in brackets are metric.]

CCNP BSCI Knowledge Summary


BSCI知识总结一般说来在规划网络的时候要考虑到一点,即可扩展性网络设计一般是把它设计为层次化的,一般分为以下三层:Access Layer(访问层):用户接入到网络中的接入点Distribution Layer(分发层):该层是访问层用户访问网络资源的一个汇聚点Core Layer(核心层):负责在各个分割的网络之间进行高速有效的数据传输如下图,就是三层模型的一个图例:网络层次划分:1.按功能,企业的不同部门来划分2.按地理位置来划分但是记得不管你按何种方式来设计你的网络,三层元素要考虑进去:1.核心层:需要大带宽,冗余连接2.访问层:用户接入到网络的接入点.地址的分发(比如DNCP),VLAN的设计,防火墙,ACL等都在这层考虑进去3.分发层:访问层设备的汇聚点看看核心层的两种设计,如下图:这样的一种设计是全互连(fully meshed)的方式,优点是提供冗余连接,但是随着网络的扩展,成本会越来越高为了节约成本,就可以采用如上图的一种星形设计方式(hub-and-spoke),让其他设备都连接到相对独立的中心设备上去.这样做节省了一些开销,但是冗余度上比全互连的方式要差Benefits of Good Network Design先来看看RFC1918定义的私有IP地址范围:A类:10.0.0.0到10.255.255.255B类:172.16.0.0到172.31.255.255C类:192.168.0.0到192.168.255.255在注意设计一个好的网络的IP地址规划的时候要注意以下几点:1.可扩展性(scalability)2.可预测性(predictability)3.灵活性(flexibility)Benefits of an Optimized IP-Addressing Plan层次化地址规划的好处:1.减少了路由表(routing table)的条目(entry)2.可以有效的对地址进行分配可以使用路由汇总(route summarization)来减少路由表的条目,路由汇总使得多个IP地址是集合看上去就像是一个IP地址从而减少在路由器中的条目.这样就可以减轻路由器的CPU和内存的负载,提供更为有效的路由服务,加快了网络收敛(convergence)的速度,简化了排错(troubleshooting)的过程来看下图:这个就是典型的规划不太合理的网络.假设有50个分部,每个分部有200个前缀为/24的子网,由于子网不连续,那么总共路由表的条目将达到10000条以上相比再看看下图:子网连续,可以做路由汇总,所以49个分部对外的路由只有1条,加上子网内部的路由条目200条,总共路由表就可以减少到249条Hierarchical Addressing Using Variable-Length Subnet Masks变长子网掩码(Variable-Length Subnet Masks,VLSM)的出现是打破传统的以类(class)为标准的地址划分方法,是为了缓解IP地址紧缺而产生的先来看看前缀(prefix)的概念:假设给你个地址范围192.168.1.64到192.168.1.79.前3个8位位组(octet)是一样的,我们来看看组后个8位位组,写成2进制,如下:64:0100 000065:0100 000166:0100 0010……………..77:0100 110178:0100 111079:0100 1111注意它们的共通点:前4位是一样的,加上前3个8位位组的24位就等于28位,所以我们就可以认定这个网段的前缀是/28(即255.255.255.240).所以在这个例子里,主机位就只有4位来看下VLSM的应用,如下图:假如这个公司使用的是172.16.0.0/16的地址空间.给公司的分部A分配172.16.12.0/22到172.16.14.255/22的地址块.D需要2个VLAN,然后每个VLAN容纳200个用户.A,B和C 连接3个以太网,分别用1个24口的交换机相连(不考虑级联等因素)先从需要最大用户的入手:要让1个子网容纳200个用户,即主机位应该保留为8位(254台主机,假如主机位是9位的话可以容纳510台主机,但是造成了地址空间的浪费).所以前缀为/24,从172.16.12.0/24开始分配起走.如下图:然后分配A,B和C的地址:A连接一个24口的交换机,即需要24个主机地址,那么主机位应该保留为5位(可以容纳30台主机),即前缀为/27(255.255.255.224),因为172.16.12.0/24和172.16.13.0/24已经被D全部占用,所以从172.16.14.0/27开始分配.如下图最后来给A和D,B和D,C和D之间分配IP地址.由于它们之间是点对点连接,所以主机位只保留2位,即前缀是/30就可以了.除掉掉2个VLAN和3个LAN占用了的地址,还剩下下面几个地址块:172.16.14.96/27172.16.14.128/27172.16.14.160/27172.16.14.192/27172.16.14.224/27我们从172.16.14.224/27来进行划分(一般选择最后1个地址块进行划分).如下图:Route Summarization and Classless Interdomain Rouring我们先来看下什么是路由汇总,如下图:4个前缀为/24的连续子网经过D汇总后成为172.16.12.0/22(转换成2进制可以看出前22位是一样的).所以在E的路由表中只有一条条目.再来看下无类域间路由(Classless Interdomain Rouring,CIDR),它是为了缓解IP地址短缺,减小路由表的体积,打破传统的以类划分网络的一种技术,如下图:4个C类地址的子网经过D的路由汇总成一条192.16.12.0/22,注意这个192.16.12.0/22既不是A类也不是B类更不是C类,可以理解成超网(supernetting),即无类的概念Understanding IP Version 6IPv6是IPv4的增强版本,和IPv4相比,有很多优点包括:1.支持更大的地址空间2. 
IPv6的头部格式比IPv4的头部更为简化,IPv6的头部可以根据需要进行扩展(简单并不代表短,注意不要混淆)3.更高的安全性和能很好的和移动IP相互兼容4.能够平滑过渡IPv4到IPv6移动IP是IETF标准,使得移动设备在不中断现有连接的情况下进行迁移.这一特征是内建在IPv6中的,但是IPv4就不是.IP Sec是IETF开发的标准,用于增强IP网络安全性,这一特性对IPv6是必须的IPv6 Addressing如下图对于两种版本的IP进行比较:可以看出IPv4是32位长,4字节;IPv6是128位长,16字节;IPv4支持的地址最多达到42亿,IPv6支持的地址多达3.4乘以10的38次方.增加IP地址的位长即增加了IP包头部信息的大小IPv6的表示方法:1.X:X:X:X:X:X:X:X(每个X代表16位的16进制数字).不区分大小写2.排头的0可省略,比如09C0就可以写成9C0,0000可以写成03.连续为0的字段可以以::来代替,但是整个地址中::只能出现一次.比如FF01:0:0:0:0:0:0:1就可以简写成FF01::1来看几个简写的例子:0:0:0:0:0:0:0:0可以写成::0:0:0:0:0:0:0:0可以写成::1Multicast UseIPv4中的广播(broadcast)可以导致网络性能的下降甚至广播风暴(broadcast storm).在IPv6中,就不存在广播这一概念了,取而代之的是组播(multicast)和任意播(anycast)组播的接受对象是一组成员,是个群体.任意播是多个设备共享一个地址.分配IPv6单播(unicast)地址给拥有相同功用的一些设备.发送方发送一个以任意播为目标地址的包,当路由器接受到这个包以后,就转发给具有这个地址的离它最近的设备.单播地址用来分配任意播地址.对于那些没有配备任意播的的地址就是单播地址;但是当一个单播地址分配给不止一个接口的时候,单播地址就成了任意播地址Autoconfiguration来看下IPv6的自动配置:当本地链路的路由器发送网络类型信息给所有节点的时候.支持IPv6的主机就把它自己64位的链路层地址附着在64位的前缀自动配置成128位长的地址,保证地址的唯一性.自动配置启用即插即用(Plug and Play)IPv6 RenumberingIPv6的重编号:路由器发送组播数据包,其中数据包中包含2个前缀,一个是拥有比较短的生存期的前缀,还有一个是新的拥有正常时间的前缀.通知网络上的节点用完旧的前缀后换成新的前缀,这样就能进行平滑的前缀过渡IPv4 to IPv6 Transitioning两种转换的方式:1.双栈(dual stack)2.IPv6到IPv4(6to4)的隧道(tunnel)还有一种是利用NAT来翻译46地址来看看双栈配置的例子,如下:Router#sh run(略)!interface Ethernet0ip address 192.168.99.1 255.255.255.0ipv6 address 3ffe:b00:c18:1::3/127!(略)再来看看隧道技术,如下图所示:可以看出隧道技术是在双栈路由器上,将IPv6包封装在IPv4包中,然后经过IPv4网络传递到另外一端的双栈路由器上去,然后再由它解封装要注意的是对采用了隧道技术的网络进行排错的话比较复杂,要记住的是这只是一个过渡方案,不是最终的体系结构对隧道进行配置,需要满足以下2个条件:1.在连接网络的两端采用双栈路由器2.在双栈路由器的接口同时配置IPv4和IPv6的地址如下图所示:Module2 Network Address TranslationConfiguring IP NAT with Access Lists看看NAT的术语,如下:1.IP NAT inside:属于内网部分内的需要翻译成外部地址的接口2.IP NAT outside:属于外网和路由器相连的接口或者是在它的路由表里没有运载的有IP NAT inside地址空间的信息当包在下面的接口之间进行路由的话就需要NAT:1.IP NAT inside接口到IP NAT outside接口2.IP NAT outside接口到IP NAT inside接口当有包要从inside路由到outside的时候,你就需要在NAT表里建立条NAT条目.NAT条目把源IP地址从内部地址翻译为外部地址.当IP NAT outside接口响应这个包的时候,包的目标IP 地址将和IP NAT表里的条目做比较.如果匹配的条目找到了,目标IP地址就被翻译成正确的内部地址,然后放到路由表里保证能够被路由到IP NAT inside接口去;如果没有找到匹配的条目,包将被丢弃地址超载(address overloading)是种在Internet连接上很常见的现象.超载是使用NAT将多个内部地址映射到一个或一些唯一的地址上去.NAT使用TCP和UDP端口来跟踪不同的会话当路由器决定了从IP NAT inside接口到IP NAT outside接口的路径的时候,要建立包括源IP地址和源TCP和UDP端口号的条目.每个设备被分配的有一个唯一的TCP或UDP的端口号来进行区分NAT表包含以下信息:1.协议:IP,TCP或UDP2.inside local IP address:port:等待NAT翻译的内部地址以及端口号3.inside global IP address:port:经过翻译了的地址.这些地址通常是全局的唯一地址4.outside global IP address:port:ISP分配给外网的IP地址5.outside local IP address:port:看上去像是处于内网的外网主机的地址IP NAT和访问列表ACL的结合使用的命令,如下:1.ip nat {inside|outside}:接口命令,标记IP设备接口为内部或者外部,只有标记了内部或外部的接口才需要翻译2.ip nat source list pool :当包被发送到标记为IP NAT inside的接口的时候,这个命令告诉路由器比较源地址和ACL,然后ACL告诉路由器查找地址池(address pool)看是否要进行翻译.ACL许可的地址才能被NAT翻译3.ip nat pool {prefix-length |netmask }:这个命令创建翻译池,参数为起始地址到完结地址和前缀或者是掩码如下图:如图,当一个包的源地址为10.1.2.x,路由器就根据名为sales_pool的地址池进行翻译;如果包的源地址是10.1.3.x的话,路由器就根据名为acct_pool的地址池进行翻译来看一个NAT和扩展ACL结合使用的例子,如下图:ACL102和ACL103用来控制NAT的决策,和仅基于源地址相比,扩展ACL的决策基于源地址和目标地址.如果包的源地址不是10.1.1.0/24的话,包将不会通过NAT进行翻译.如果从10.1.1.0/24而来的包的目标地址为172.16.1.0/24或者是192.168.200.0/24的话.包的地址将被翻译成地址池trust_pool里的地址,即192.168.2.0/24;如果目标地址既不匹配172.16.1.0/24也不匹配192.168.200.0/24的话,源地址就将被翻译成地址池untrust_pool中的地址,即192.168.3.0/24Defining the Route Map Tool for NATroute map是Cisco IOS提供的一项功能,它的好处如下图:当你只使用ACL的时候,如图可以看出,不支持端口号,而且inside与outside的关系是一一对应.而且带来的缺点是很难排错你可以使用NAT的超载技术或者使用一种叫做route map的Cisco IOS工具,如果你结合route map和ACL一起使用,它将生成扩展的翻译条目,如上图.这些条目包含了端口信息,可以使得应用程序能够跟踪这些会话.ACL和route map不同的地方是可以使用set命令对route map进行修改,而ACL就不行在配置模式下输入route-map map-tag [permit | deny] [sequence-number],定义路由策略;接下来输入match {conditions},定义条件;最后使用set {actions}定义对符合条件的语句采取的措施map-tag是route 
map号,路由器从上到下的处理这些陈述,直到找到第一个匹配的的语句然后应用它.一条单独的match语句可以包含多个条件.每条条件语句中至少要有一条符合条件的语句.sequence-number定义了检查的顺序,比如1个名为hello的router map,其中一个的sequence-number为10;另外一个为20.那么将先检查sequence-number为10的那个.和ACL一样,在末尾有条隐含的deny any语句来看看一个router map配置的例子,如下图:如图黄色部分是增加的route map.在这个例子里,名为what_is_sales_doing的route map通过使用ip nat inside source route-map what_is_sales_doing pool sales_pool命令和sales_pool相互链接.源地址为10.1.2.100的包到达E0口,即IP NAT inside接口.ip nat inside source route-map what_is_sales_doing pool sales_pool命令告诉路由器发送包给名为what_is_sales_doing的route map.这个route map的sequence 10匹配包的源地址10.1.2.100,依靠ACL 2.接下来路由器查询名为sales_pool的NAT池,得到10.1.2.100要翻译的成的新地址当使用route map的时候,路由器在IP NAT表里创建完全的扩展翻译条目,包含源地址和目标地址以及TCP或UDP端口号;当你只使用ACL的时候,路由器创建的只是一条简单的翻译条目,一个应用程序对应一个条目,不包含端口信息Verifying NAT检查NAT表的内容,使用show ip nat translation命令,注意下图,分别是NAT使用了ACL 和NAT使用了route map的输出:Module3 Routing PrinciplesPrinciples of Static Routing我们来复习下配置静态路由的语法,在全局配置模式下使用:ip route prefix mask {next-hop address|interface} [distance] [permanent]看下各个参数的含义:prefix mask:要加进路由表中去的远程网络及其子网掩码next-hop address:下一跳地址interface:到达目标网络的本地路由器的出口distance:管理距离(AD),可选permanent:路由条目永久保存在路由表中,即使路由器的接口down掉了注意,一般只在点对点的连接中使用interface选项,否则应该使用next-hop address选项使用静态路由的好很多,比如可以对网络进行完全的掌控,不会占用额外的路由器CPU和内存资源以及网络带宽.适用于小型网络中如下就是一个静态路由的例子:如图,对于A,只需要在它上面配置ip route 10.2.0.0 255.255.0.0 s0就可以了,这里采用的就不是next-hop address而是采用本地的出口接口(因为是点到点是连接);当然也可以这样配置成ip route 10.2.0.0 255.255.0.0 10.1.1.1,这里采用的就是next-hop address 同样对于B的配置就可以使用ip route 172.16.1.0 255.255.255.0 s0或者ip route 172.16.1.0 255.255.255.0 10.1.1.2默认路由(default route):一般使用在stub网络中,stub网络是只有1条出口路径的网络.使用默认路由来发送那些目标网络没有包含在路由表中的数据包或者所有的数据包.语法是把静态路由中的prefix mask写成0.0.0.0和0.0.0.0Principles of Dynamic Routing动态路由允许路由器自动交换路由信息从而了解整个网络的信息.动态路由的好处是:使用中型和大型网络,能够根据网络拓扑的变化自动更改路由表的信息,避免了人工手动更改;但是带来的缺点就是占用路由器额外的CPU和内存资源以及网络带宽来看一个以RIP为例的动态路由的配置,如下:首先在A上使用router rip命令启动RIP协议,接下来只需要使用network命令加上需要交换信息的网络即可,在这里就是:A(config)#router ripA(config-router)#network 172.16.0.0A(config-router)#network 10.0.0.0A(config-router)#^ZA#对B的配置,如下:B(config)#router ripB(config-router)#network 10.0.0.0B(config-router)#^ZB#config tB(config)#ip route 0.0.0.0 0.0.0.0 s1B(config)#^ZB#注意B的配置中的ip route 0.0.0.0 0.0.0.0 s1,告诉通往Internet采用默认路由Principles of On-Demand RoutingODR是Cisco私有的,主要用在如下的一种星型环境中:有的时候你会觉得假如你使用静态路由,当网络拓扑发生变化以后,就得手动修改路由表;但是你又不想使用动态路由,因为那样占用了你额外的一些硬件资源和网络带宽.在如上图的环境中就可以使用ODR.ODR使用Cisco发现协议(CDP)携带网络信息.ODR的好处是最大可能的减少了网络硬件和资源的负载,同时又减少了手动修改的工作量.但是ODR只使用于hub-and-spoke这样的拓扑结构中.如上图,AC,D和E就是spoke router,或者叫做stub router,B作为中心,叫做hub router.当配置了ODR以后,spoke router使用CDP发送IP前缀信息到hub router,spoke router发送IP前缀信息给所有和它直接相连的网络.ODR报告子网掩那子信息,所以ODR支持VLSM.然后hub router依次发送指向到它自己的默认路由给周边的spoke router严格说来ODR不算是真正的路由协议,因为它交换的信息仅仅局限在IP前缀信息和默认路由上,ODR没有包含度(metric)的信息.ODR使用跳数(hop count)作为度配置ODR只需要在hub router上的全局配置模式下起用router odr命令,不需要在stub router上配置IP路由信息,但是必须在接口上开启CDP功能对ODR的验证,如下:B#show ip route(略)o 172.16.1.0/24 [160/1] via 10.1.1.2, 00:00:23, Serial0o 172.16.2.0/24 [160/1] via 10.2.2.2, 00:00:03, Serial1o 172.16.3.0/24 [160/1] via 10.3.3.2, 00:00:16, Serial2o 172.16.4.0/24 [160/1] via 10.4.4.2, 00:00:45, Serial3(略)注意o代表ODR,管理距离为160Classful Routing Protocol Concept基于类的路由协议最大的特点是在路由更新(routing update)中不包含子网掩码的信息.由于不知道子网掩码的信息,当一个基于类的路由器接受或发送,路由器会假设认为网络所使用的子网掩码是包含进路由更新中的,而且这些假设是基于IP的类.在接收到路由更新以后,运行了基于类的路由协议的路由器就会根据以下其中一条来决定网络路径:1.如果路由更新信息包含相同的主网络号和接收更新的接口配置相同的话,路由器就应用接收更新的接口的那个子网掩码2.如果路由更新信息包含相同的主网络号和接收更新的接口配置不相同的话,路由器将应用默认的子网掩码:A类:255.0.0.0B类:255.255.0.0C类:255.255.255.0当使用基于类的路由协议的时候,所有的网络主网络号必须相同,而且子网必须连续.否则路由器将对子网信息做出错误的判断.运行了基于类的路由协议会在网络的边界(boundary)做自动的路由汇总常见的基于类的路由协议有:IGRP和RIPv1Network Summarization in 
Classful Routing来看看网络汇总在边界的发生,如下图:B作为网络的分界,从C到A,B将两条条目(172.16.1.0和172.16.2.0)的信息汇总成一条(172.16.0.0);从A到C,B将两条条目(10.1.0.0和10.2.0.0)的信息汇总成一条(10.0.0.0).在基于类的路由协议里,这样的汇总是自动进行的,不需要手动配置.前提是子网掩码等长,子网连续假如说子网不连续,如下图:如图中的表所示,D给C传送一条汇总路由10.3.0.0;B传送条汇总路由10.2.0.0给C.对于C而言它就会做出错误的判断,它区分不了10.2.0.0和10.3.0.0分别在哪边.所以说做基于类的路由协议的路由汇总,子网必须连续.但是这样一来,会造成地址空间的浪费(和VLSM相比)Examining a Classful Routing Table假设我们使用show ip route命令,产生如下输出:J# show ip route(略)Gateway of last resort is 0.0.0.0 to network 0.0.0.010.0.0.0/24 is subnetted, 3 subnets,R 10.1.1.0/24 [120/1] via 10.1.2.2, 00:00:05, Ethernet0C 10.1.2.0/24 is directly connected, Ethernet0R 10.1.3.0/24 [120/2] via 10.1.2.2, 00:00:05, Ethernet0R 192.168.24.0/24 [120/2] via 10.1.2.2, 00:00:16, Ethernet0R 172.16.0.0/16 [120/3] via 10.1.2.2, 00:00:16, Ethernet0R* 0.0.0.0/0 [120/3] via 10.1.2.2, 00:00:05, Ethernet0(略)如上,可以看出10.1.2.0/24是直接相连,其他的都是通过RIP学习到的.现在我们假设有以下几个目的地的包,它们对于上面的输出会如何进行匹配:192.168.24.3172.16.5.110.1.2.7200.100.50.010.2.2.2根据show ip route的输出可以看出,到达192.168.24.3的包会跟第四条(92.168.24.0/24)相匹配,(虽然最后一条也可以,但是匹配原则是匹配掩码最长的那条);接下来,172.16.5.1和第五条(172.16.0.0/16)匹配;10.1.2.7和第二条(10.1.2.0)相互匹配;200.100.50.0和前五条都不匹配,和第六条默认路由(0.0.0.0/0)相互匹配;10.2.2.2虽然和前三条的第一个8位位组匹配,但是后面3个8位位组不匹配,所以它将被丢弃而不会采用默认路由如果你在全局模式下使用了ip classless命令的话,目的地是10.2.2.2的包就不会被丢弃,就会采用默认路由.ip classless命令在Cisco IOS版本12.0和12.0以后默认打开的,无须手动打开Classless Routing Protocol Concepts基于无类概念的路由协议可以说是第二代路由协议,相比基于类的路由协议,它可以解决地址空间过于浪费的问题.这类协议的例子有RIPv2,OSPF,EIGRP,IS-IS,BGPv4.使用无类的路由协议,拥有相同主网络号的不同子网就可以使用不同的子网掩码(VLSM).假如在路由表中到达目的网络的匹配条目不止一条,将会选择子网掩码长的那条进行匹配.比如假如有两条条目172.16.0.0/16和172.16.5.0/24,如果目的地是172.16.5.99的包将会和172.16.5.0/24进行匹配而不是和172.16.0.0/16进行匹配还有一点是无类的路由协议的自动汇总可以手动关闭,这样的自动汇总会影响不连续的子网的使用而造成错误的汇总路由信息(这点和基于类的路由协议在不连续子网的情况下的自动汇总所带来的问题是一样的)Automatic Network-Boundary Summarization Using RIPv2 and EIGRP基于无类的路由协议一般不会对所有的子网进行宣告(advertise).默认的,比如像EIGRP和RIPv2会像基于类的路由协议那样,在网络的边界进行自动汇总,这样就使得它们能和它们的之前的RIPv1和IGRP很好的兼容但是和之前的RIPv1和IGRP不同的是,你可以手动关闭自动汇总.在配置相关路由的时候只需要输入no auto-summary就可以了.这个命令在配置OSPF和IS-IS的时候是不必输入的,因为默认OSPF和IS-IS不会进行自动汇总如下图:如果在不连续子网启用自动汇总会产生问题,比如上图的A和B都将宣告汇总路由172.16.0.0/16给C,因此C不能明确区分它相连的子网谁是谁.所以,要解决这个问题就必须关闭自动汇总.当然有的时候自动汇总能够带来好处看看RIPv2和OSPF网络对于自动汇总的处理,如下图:注意RIPv2会将172.16.2.0/24和172.16.1.0/24汇总成172.16.0.0/16;而OSPF就不会进行自动汇总,仍然是两条条目:172.16.1.0/24和172.16.2.0/24.但是假如你对RIPv2网络中使用了no auto-summary关闭自动汇总的话,OSPF网络中的C和RIPv2网络中C的路由表中的条目就是一样的了.如下图:在RIPv2中关闭自动汇总,如下:Router(config)#router ripRouter(config-router)#version 2Router(config-router)#no auto-summaryversion 2命令是启用RIPv2版本Characteristics of RIP Version 1RIPv1的特点包括:1.使用跳数(hop count)作为度来决定最佳路径2.允许最大跳数是15跳3.默认是每30秒广播路由更新(实际环境中并不是设定的固定为30秒,而是25秒到30秒之间的随机时间,防止两个路由器发送同时更新产生冲突)4.最多支持6条等价链路的负载均衡,默认是4条5.是基于类的路由协议,不支持VLSM6.不支持验证(authentication)Characteristics and Configuration of RIPv2RIPv2比RIPv1增强的特点包括:1.基于无类概念的路由协议2.支持VLSM3.可以人工设定是否进行路由汇总4.使用多播来代替RIPv1中的广播5.支持明文或MD5加密验证RIPv2使用多播地址224.0.0.9来更新路由信息RIPv2的配置命令步骤如下:1.启动RIP路由协议:Router(config)#router rip2.启动版本2 :Router(config-router)#version 23.设置进行宣告的网络号Router(config-router)#network network-number一默认Cisco IOS会接收版本1和2的更新包, ;但是只发送版本1的包.要设置成只发送和接收一种版本的包,使用version {1|2}命令即可一些其他的命令,如下:Router(config-if)#ip rip send | receive version {1|2} or 1 2在接口模式下设置摸个接口接受和发送不同版本的包或同时接收发送版本1和2的包Router(config-if)#ip summary-address rip network mask如果之前在全局配置模式下使用no auto-summary关闭了自动汇总的话,要在某个接口做人工汇总的话就用上面这个命令来看一个例子,如下图:如图所显示的是一个RIPv1和RIPv2协同工作的网络.A运行RIPv2,C运行RIPv1,B同时运行RIPv1和RIPv2对C的配置,如下:C(config)#router ripC(config-router)#version 2C(config-router)#network 10.0.0.0C(config-router)# network 192.168.1.0.对B配置,如下:B(config)#router ripB(config-router)#version 
2B(config-router)#network 10.0.0.0光配置这些是不够的,还要对B做进一步配置,如下:B(config)#int s3B(config-if)#ip rip send version 1B(config-if)#ip rip receive version 1如上,我们在s3口配置了接收和发送RIPv1的更新,因为RIPv2是作为主要运做的协议,C运行的是RIPv1,所以要让B和C进行正确的信息交换的话,就要在B的s3口配置上述命令对A进行配置,如下:A(config)#router ripA(config-router)#version 2A(config-router)#network 10.0.0.0A(config-router)#network 172.16.0.0A(config-router)#no auto-summary如上我们关闭了自动汇总,当然这样是不够的,还应该做如下配置进行手动汇总:A(config)#int s2A(config-if)#ip summary-address rip 172.16.1.0 255.255.255.0这样做的目的是允许把172.16.1.0/24的信息发送给B,因为默认发送给B的信息是172.16.0.0/16Administrative Distance看看Cisco制定的各个路由协议的管理距离(AD),如下:Floating Status Route因为静态路由的AD比一些动态路由协议的AD高,假如你又想优先采用动态路由,而让静态路由作为备份路由的话,就可以在配置静态路由的时候指定一个AD值,这个AD值要比你采用的动态路由协议要高才行.所以一般当动态路由正常的时候,你在路由表里是看不到这条静态路由的;当动态路由出问题的时候,静态路由开始生效,于是出现在路由表里,这样的静态路由就叫做浮动静态路由(floating static route)来看一个例子,如下:对A的配置如下:A(config)#ip route 10.0.0.0 255.0.0.0 172.16.1.2 100A(config)#router eigrp 100A(config-router)#network 192.168.1.0A(config-router)#network 172.17.0.0如上,我们对静态路由的AD指定为100,EIGRP的AD为90,所以优先采用EIGRP对B的配置和A类似,如下:B(config)#ip route 172.17.0.0 255.255.0.0 172.16.1.1 100B(config)#router eigrp 100B(config-router)#network 10.0.0.0B(config-router)#network 192.168.1.0注意,不能在ISDN上配置EIGRP,因为ISDN一般作为备份连接,如果在ISDN上配置了EIGRP的话,那ISDN连接将永久保持,就失去了作为备份连接的意义了Criteria for Inserting Routes in the IP Routing Table路由器决定最佳路径要参考以下几个标准,如下:1.有效的下一跳IP地址2.最佳的度3.管理距离4.选择前缀匹配最长的路由Protocols, Ports, and Reliability各个路由协议的协议号,端口号和可靠性的比较,如下图:。
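The route-selection criteria just listed (a valid next hop, the best metric, the lowest administrative distance, and finally the longest prefix match against the destination) can be illustrated with Python's ipaddress module. The toy table below reuses prefixes from the classful routing-table example earlier in this excerpt; it is only a sketch of the selection logic, not of how IOS stores or searches its routing table.

import ipaddress

# (prefix, administrative distance, metric, how the route is reached)
routes = [
    ("10.1.2.0/24",    0, 0, "directly connected, Ethernet0"),
    ("10.1.1.0/24",  120, 1, "via 10.1.2.2 (RIP)"),
    ("172.16.0.0/16", 120, 3, "via 10.1.2.2 (RIP)"),
    ("0.0.0.0/0",    120, 3, "via 10.1.2.2 (RIP default)"),
]

def lookup(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), ad, metric, info)
               for prefix, ad, metric, info in routes
               if dest in ipaddress.ip_network(prefix)]
    if not matches:
        return f"{destination} -> drop (no matching route)"
    # Longest prefix wins; administrative distance and metric break ties between equal prefixes.
    best = max(matches, key=lambda m: (m[0].prefixlen, -m[1], -m[2]))
    return f"{destination} -> {best[3]}"

for dst in ("10.1.2.7", "172.16.5.1", "200.100.50.1"):
    print(lookup(dst))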

Local Area Network and Metropolitan Area Network (LAN/MAN), English Version 6th Edition: Course Design (2)

Network Protocol and Standards
•OSI model and its layers
•TCP/IP protocol suite
•Ethernet, Wi-Fi, and other wired/wireless protocols
•Network security protocols and standards
Course Content
The course will consist of lectures, laboratory sessions and group projects. The following are the specific topics that will be covered:
Network Topologies
•Bus, ring, and star topologies
•Mesh and hybrid topologies
•Factors affecting the topology selection
•Advantages and disadvantages of each topology
Local Area Network and Metropolitan Area Network (LAN/MAN) English Version 6th Edition Course Design
Introduction
The curriculum design for the Local Area Network and Metropolitan Area Network (LAN/MAN) English Version 6th Edition is aimed at providing students with a comprehensive understanding of network topology, protocols, and architectures. The design also introduces advanced network techniques, including network security, wireless networking, and network management.

Minimizing Downtime

Full Mesh Network: expensive; poor scalability; generates large numbers of routing updates and broadcasts. An alternative is a Partial Mesh Design with Redundancy.

Hierarchical network design

Media redundancy
Redundant links in the LAN: the Spanning Tree Protocol (IEEE 802.1D) and load balancing across links (EtherChannel). Media redundancy on WAN interfaces: backup links, often using a different WAN technology.
Computer Networks, Routing and Switching Technology
Chapter 3: Network Topology and LAN Design

Network design steps
- Gather user information
- Assess the current network: bottlenecks, problems, applications, protocols, traffic, addressing model
- Define design goals: user requirements, business, applications, technology
- LAN design
- WAN design
- Network protocols
- Documentation and testing
Local Area Networks

What is a LAN (Local Area Network)? A network within a campus, building, or office, in contrast to a WAN. LAN characteristics: small geographic area and high transmission speed. LAN media types: optical fiber, coaxial cable (BNC), and twisted pair (UTP). LAN transmission rates: 10 Mbps (Ethernet), 100 Mbps (Fast Ethernet), 1000 Mbps (Gigabit Ethernet), and 10 Gbps. LAN technologies: Ethernet, ATM (622/155/25 Mbps), and Token Ring.

(Figure labels: fiber distribution box, central switch.)
LAN Media

Type        Line                     Length (m)
10Base2     Thin coax (Thinnet)      185
10Base5     Thick coax (Thicknet)    500
10BaseT     Twisted pair (UTP)       100
100BaseT    Twisted pair (UTP)       100
1000BaseT   Twisted pair (UTP)       100
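The distance limits in the table above can be folded into a small validation helper. The numbers are copied from the table; the helper itself is just an illustrative convenience, not a standards reference.

MAX_SEGMENT_M = {
    "10Base2 (thinnet)":   185,
    "10Base5 (thicknet)":  500,
    "10BaseT (UTP)":       100,
    "100BaseT (UTP)":      100,
    "1000BaseT (UTP)":     100,
}

def check_segment(media, length_m):
    limit = MAX_SEGMENT_M[media]
    verdict = "within the limit" if length_m <= limit else "EXCEEDS the limit"
    return f"{media}: {length_m} m is {verdict} of {limit} m"

print(check_segment("10BaseT (UTP)", 90))
print(check_segment("10Base2 (thinnet)", 200))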


LAN types
- Large Building LAN: includes large data centers and per-floor wiring closets; commonly used at corporate headquarters.
- Campus LAN: provides communication between buildings on a campus.
- Small/Remote LAN: small offices and branch sites with relatively few nodes.
Large Building LAN


Hierarchical model

Each layer provides a necessary set of functions. A layer can be implemented in a dedicated device (switch/router) or combined with other layers in the same device. A layer does not require a distinct physical entity, and a particular layer can be omitted, although this is not recommended.
Core Layer

The core layer is a high-speed switched backbone. Characteristics:
- Offer high reliability
- Provide redundancy
- Provide fault tolerance
- Adapt to changes quickly
- Offer low latency and good manageability
- Avoid slow packet manipulation caused by filters or other processes
- Have a limited and consistent diameter
Topology: Star

Structural characteristics: stations communicate through a central device; all communication is controlled by that central device, so the structure is relatively complex; end devices carry only a small communication burden.
Advantages: good flexibility, nodes can be added freely; good scalability, easy to upgrade.
Disadvantages: the central node carries a heavy load; relatively high cost.
Typical applications: Ethernet over twisted pair or fiber.
Topology: Bus

Structural characteristics: all devices are peers; all nodes share the same line for communication, and only one station can transmit at any given time.
Firewall rules

Only the isolation LAN may be accessed from the outside, and the isolation LAN uses a network number different from the internal network's. The internal network may access the isolation LAN freely. The internal network may access the outside (Internet) as needed, but the outside cannot access the internal network; only packets that are replies to internally initiated traffic may enter from outside.
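These rules amount to a direction-based policy between the inside, the isolation LAN (DMZ), and the outside. The sketch below captures that policy with hypothetical zone names; a real firewall would track connection state rather than take a boolean flag.

def session_allowed(src_zone, dst_zone, is_reply_to_inside=False):
    """Direction-based policy for inside / dmz / outside zones, per the rules above."""
    if src_zone == "inside":
        return True                        # inside may reach the isolation LAN and the Internet freely
    if src_zone == "outside" and dst_zone == "dmz":
        return True                        # only the isolation LAN is reachable from outside
    if src_zone == "outside" and dst_zone == "inside":
        return is_reply_to_inside          # only replies to internally initiated traffic may come in
    return False                           # anything else (including dmz -> inside) is denied here

print(session_allowed("outside", "inside"))                            # False
print(session_allowed("outside", "inside", is_reply_to_inside=True))   # True
print(session_allowed("inside", "dmz"))                                # True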
Firewall rules (continued)

Protect the exterior router and the isolation LAN well enough that they cannot be used as a pivot point by attackers.

Keep the bastion host and the routers simple; do not run complex programs on them.

Hierarchical model

The hierarchical model simplifies internetwork design and assigns each layer a specific task. Benefits:
- Cost savings: distributed routing/switching platforms, sensible resource allocation, lower training and management overhead
- Ease of understanding
- Easy network growth: modularity, so different modules do not affect one another
- Improved fault isolation: takes full advantage of routing-protocol features such as EIGRP and route summarization

Layered model structure:
- Core Layer: provides optimal transport between sites
- Distribution Layer: provides policy-based connectivity
- Access Layer: provides the user access interface
Server redundancy

Disk mirroring; disk duplexing; mirrored servers; server clustering.
Router redundancy

Two goals: load balancing and minimizing downtime.
(Figure: Network 1 and Network 2 connected to the internal network over DDN and the Internet, with DDR dial backup.)
Load balancing

Most routing protocols support load balancing across routes (Cisco routers support up to six redundant paths). Take care that the links have the same bandwidth; otherwise "pinhole congestion" can easily occur, because hop-based routing protocols treat links of different bandwidths as having the same metric. The way load balancing is performed also depends on the switching mode in use.
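The "pinhole congestion" problem arises because a hop-count protocol sees a slow link and a fast link as equal-cost and splits traffic evenly between them. The toy calculation below, with hypothetical link speeds (a T1 and a 64 kbps serial line), shows how the slow path saturates long before the fast one:

links_kbps = {"Serial0 (T1, 1544 kbps)": 1544, "Serial1 (64 kbps)": 64}
offered_kbps = 600                                   # total traffic split evenly by a hop-based protocol

per_link = offered_kbps / len(links_kbps)
for name, capacity in links_kbps.items():
    utilization = per_link / capacity
    flag = "  <-- pinhole congestion" if utilization > 1 else ""
    print(f"{name}: {per_link:.0f} kbps offered, {utilization:.0%} of capacity{flag}")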


Router hardening measures: disable Telnet sessions; use only static routes; do not use TFTP; encrypt passwords; disable the Finger service.
Dedicated firewalls

A dedicated firewall appliance can be used in place of a packet-filtering router. Common firewalls: Cisco PIX, NetScreen, Checkpoint. They are simpler and more effective than a packet-filtering router, require no change to the network structure, and need little ongoing management. Features include packet filtering (on IP address, MAC address, port, protocol, and so on), NAT (Network Address Translation), and more.
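Packet filtering of the kind listed above is rule matching on header fields such as addresses, protocol, and port. A minimal sketch with made-up rules follows; real firewalls add state tracking, logging, and an implicit deny at the end of every rule set.

RULES = [
    {"action": "permit", "proto": "tcp", "dst": "192.168.10.5", "dst_port": 80},   # hypothetical DMZ web server
    {"action": "permit", "proto": "tcp", "dst": "192.168.10.6", "dst_port": 25},   # hypothetical mail relay
    {"action": "deny",   "proto": "any", "dst": "any",          "dst_port": None}, # the implicit deny, spelled out
]

def filter_packet(packet):
    for rule in RULES:                                   # first matching rule wins
        if (rule["proto"] in ("any", packet["proto"])
                and rule["dst"] in ("any", packet["dst"])
                and rule["dst_port"] in (None, packet["dst_port"])):
            return rule["action"]
    return "deny"

print(filter_packet({"proto": "tcp", "dst": "192.168.10.5", "dst_port": 80}))   # permit
print(filter_packet({"proto": "tcp", "dst": "192.168.10.5", "dst_port": 22}))   # deny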
HSRP (Hot Standby Router Protocol)

HSRP creates a phantom (virtual) router with its own IP address and MAC address, and workstations use the phantom router as their default gateway. Two routers are used: one as the Active Router and one as the Standby Router. The active router periodically sends hello messages while the other listens; if hellos stop arriving, the active router is assumed to have failed and the other router becomes the Active Router. The switchover between routers is transparent.
How HSRP works

1. The Anderson workstation is configured to use the phantom (shadow) router as its default gateway.
2. At startup, the Broadway router becomes the Active router and the other router becomes the Standby router.
3. When the workstation sends an ARP request to find its default gateway, the Broadway router responds with the phantom router's MAC address.
4. If the Broadway router fails, the CentralPark router takes over and continues processing packets.
5. The changeover between routers is transparent to users.
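The hello/standby behavior in these steps can be modeled as a small election: the highest-priority router whose hellos are still fresh answers for the phantom router. The sketch below is conceptual only, with hypothetical priorities and a simplified hold time; it does not reproduce the real HSRP packet exchange.

def active_router(routers, hold_time_s=10):
    """Return the router currently answering for the phantom (virtual) gateway address."""
    alive = {name: info for name, info in routers.items()
             if info["seconds_since_hello"] <= hold_time_s}      # hellos still fresh
    if not alive:
        return None
    return max(alive, key=lambda name: alive[name]["priority"])  # highest priority wins

routers = {"Broadway":    {"priority": 110, "seconds_since_hello": 2},
           "CentralPark": {"priority": 100, "seconds_since_hello": 1}}
print(active_router(routers))            # Broadway answers for the phantom router

routers["Broadway"]["seconds_since_hello"] = 30                  # Broadway's hellos stop arriving
print(active_router(routers))            # CentralPark takes over; hosts keep the same gateway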
Distribution Layer

The distribution layer is the demarcation point between the core and access layers. Characteristics: policy; security; address or area aggregation or summarization; departmental or workgroup access; broadcast/multicast domain definition; routing between virtual LANs; media translation; redistribution between routing domains; filtering; WAN access; QoS.
Network design

Hierarchical models: in contrast to a flat network, they simplify network design by dividing the network into functional layers, keep critical systems available without interruption, and protect the integrity, reliability, and confidentiality of network data.

Redundant models

Secure models
(Figure: campus network topology of the Institute of Automation and the Shenyang Branch, showing the research, laboratory, apartment, and departmental buildings and affiliated companies interconnected through Catalyst 3508/3548 and Intel 550T switches, with a billing host.)
Small/Remote Site LAN
Typical application: sites far from corporate headquarters, usually connected using WAN technology; a small router typically connects the site back to headquarters; the local network uses small switching or shared devices (switches, hubs); a local DHCP server or backup DNS server may be present.

Topology: Mesh
Structural characteristics: an extension of the star topology; a redundant structure; full-mesh and partial-mesh variants.
Disadvantages: expensive.
Typical applications: can be used in both LANs and WANs.

Topology: Hierarchical
Structural characteristics: a form of partial mesh; the network is divided into layers, with different functions at each layer; a redundant structure.
Advantages: moderate cost, good scalability, high reliability.
LAN cabling structure (structured cabling subsystems): administration subsystem, horizontal subsystem, work area subsystem, vertical (backbone) subsystem, and equipment room subsystem, with fiber distribution boxes and branch (access) switches.
Security model

Firewall: protects a network from access by another, untrusted network. Two kinds of firewall policy: blocking (default-deny) mode and permitting (default-allow) mode. Common firewall systems: packet-filtering routers and bastion hosts.
Typical firewall structure

Isolation LAN: sits between the internal and external networks; also called the DMZ (Demilitarized Zone).
Inside filter: sits between the isolation LAN and the internal network; commonly used for DNS, e-mail, WWW, and similar services.
Outside filter: sits between the external network and the isolation LAN.