A Parallel Routing Algorithm for Torus NoCs(7)
received deauthentication frame in run state
Recently, a strange problem appeared on my Wi-Fi connection: I kept getting the warning "received deauthentication frame in run state".
I opened some Google pages to gather information about this issue and found that it is a common problem.
This article explains the causes of the problem and how to resolve it.
First, let's look at what a deauthentication frame is.
In a Wi-Fi network, it is a management frame used to tell a device to disconnect.
Typically, when your device joins a Wi-Fi network, it sends an authentication request.
If the request is accepted, the device is considered legitimate and may connect to the network.
When you leave the network, the device sends a deauthentication frame so the network knows it has left.
Now let's look at why receiving a deauthentication frame in the run state is a problem.
It usually happens in one of two situations.
In the first, your device shares the Wi-Fi network with many other devices, network resources are limited, and service has to be switched among some of the devices.
To do this, the network sends your device a deauthentication frame so that it can switch to other devices.
In this case you do not need to worry about the warning.
It is just the signal of a normal network mechanism.
In the other situation, someone is attacking your Wi-Fi network by sending deauthentication frames to break your connection.
This is a very serious problem.
If you receive this warning frequently, it may mean your network is under attack.
You should take steps to harden your network security immediately.
So how do you fix the problem? First, make sure your Wi-Fi settings are correct.
You should enable WPA2 encryption and protect the network with a strong password.
You can also use network security software to detect and block attacks.
If you keep receiving deauthentication warnings, consider changing your Wi-Fi settings, for example switching channels or enabling MAC address filtering, to keep attackers out of your network.
In short, receiving a deauthentication frame in the run state is a very common issue.
In most cases it is just the signal of a normal network mechanism and nothing to worry about.
But if you see the warning frequently, you need to strengthen your network security so that the network does not come under attack.
Layer-2 roaming with the AC attached in bypass mode and direct forwarding, and fixing traffic failure after roaming
Layer-2 roaming
Traffic profile and security profile:
    [AC6005-wlan-view] traffic-profile name roam
    [AC6005-wlan-view] security-profile name roam
    [AC6005-wlan-sec-prof-roam] security-policy wpa2
    [AC6005-wlan-sec-prof-roam] wpa2 authentication-method psk pass-phrase cipher 12345678 encryption-method ccmp
The traffic profile can define things such as rate limiting, while the security profile defines the security policy. The default is open; the requirement here is WPA2-PSK with CCMP, so it is changed.
Service set profile:
    [AC6005-wlan-view] service-set name roam
    [AC6005-wlan-service-set-roam] wlan-ess 101
    [AC6005-wlan-service-set-roam] ssid vlan101
    [AC6005-wlan-service-set-roam] traffic-profile name roam
    [AC6005-wlan-service-set-roam] security-profile name roam
    [AC6005-wlan-service-set-roam] forward-mode direct-forward
    [AC6005-wlan-service-set-roam] service-vlan 101
Define the SSID, the service VLAN, and the WLAN-ESS interface. The default forwarding mode is direct forwarding; the remaining commands bind the traffic profile and the security profile.
Low-power design based on a new network topology
...are relatively large. The Torus structure [8] retains the Mesh structure's simple hardware implementation and network scalability...
With advances in semiconductor manufacturing processes, large numbers of transistors and computing resources can be integrated on a single chip. From the system-on-chip (SoC) onward, [buses can no longer] provide sufficient bandwidth. The network-on-chip (NoC) is regarded as a new paradigm for solving the scalability and power efficiency of next-generation communication architectures, and has become a current [research focus].
...the relationships among [interconnect] structures, and points out the overall performance of different interconnect structures and their respective fields of use. Although researchers have proposed new techniques and architectures to reduce NoC power...
This paper proposes HMesh to reduce on-chip network power and analyzes, in theory, its performance differences from classical topologies. It then proposes a routing [algorithm] suited to the HMesh structure.
[The experimental results] show that, compared with Mesh and Torus, the average power of HMesh drops by 12.9% and 11.24% respectively when the network is not congested, so it is more suitable for NoC.
Low Power Research Based on a New NoC Topology Architecture
CAO Hong-xin, LI Guang-shun, WU Jun-hua (School of Computer Science, Qufu Normal University, Rizhao 276826, China)
...is connected to its 4 adjacent router nodes. The network topology is regular, and the corresponding routing algorithm is relatively simple, but its network diameter and the distance between nodes are relatively large.
Arduino Programming Reference Manual, Chinese edition (with table of contents, suitable for printing)
! (logical NOT)
Pointer operators
* (dereference operator)
& (address-of operator)
Bitwise operators
& (bitwise AND)
| (bitwise OR)
^ (bitwise XOR)
~ (bitwise NOT)
<< (left shift)
>> (right shift)
Compound operators
++ (increment)
-- (decrement)
+= (compound addition)
-= (compound subtraction)
*= (compound multiplication)
/= (compound division)
&= (compound bitwise AND)
|= (compound bitwise OR)
Scope
HIGH | LOW
int checkSensor() {
    if (analogRead(0) > 400) {
        return 1;
    } else {
        return 0;
    }
}
The return keyword is handy for testing a section of code without having to "comment out" large sections of possibly buggy code.
void loop() {
    // it is a good idea to test code here
    return;
    // the dysfunctional code goes here; it is never executed
}
for
The for statement
Description
The for statement is used to repeat a block of statements enclosed in curly braces. An increment counter is usually used to increment and terminate the loop. The for statement is useful for any repetitive operation, and is often used together with arrays to operate on collections of data or pins. The for loop header has three parts:
for (initialization; condition; increment) {
    // statement block
    ...
}
The initialization is executed first and exactly once. Each time through the loop, the condition is tested; if it is true, the statement block and the increment are executed, and then the condition is tested again. When the condition becomes false, the loop ends.
Example:
for (x = 0; x < 255; x++)
received deauthentication frame in run state
Recently, some users may have run into the error message "received deauthentication frame in run state" when using a wireless network.
The message means the device received an illegitimate deauthentication frame, which broke the wireless connection.
So how does this error arise? First, we need to understand deauthentication frames.
A deauthentication frame is a frame that tells a client to drop its connection to the AP.
In some cases, network administrators use deauthentication frames to keep unauthorized devices off the network.
For example, when a device is stolen, the administrator can send deauthentication frames so that the device cannot connect to the network.
In other cases, however, deauthentication frames can also be abused.
An attacker can send deauthentication frames to force a device to disconnect from the AP, thereby attacking the wireless network.
When a device receives an illegitimate deauthentication frame, it immediately drops the connection to the AP and reports "received deauthentication frame in run state".
The error means the device received an untrusted frame and therefore disconnected.
To prevent this error, we can take some security measures.
First, network administrators should restrict who may access the network and bar unauthorized devices from connecting.
Administrators can also use a stronger encryption protocol, such as WPA2, to protect the network from attack.
Users should pay attention to security as well.
For example, do not join untrusted wireless networks, do not use weak passwords, and change passwords regularly.
In short, if you see the error "received deauthentication frame in run state", do not panic.
It means your device received an illegitimate deauthentication frame and dropped the connection.
07. Data Communication Technology: Lab Manual
Lab Unit 2: Layer-3 Switch VLAN Configuration
2.1 Lab Overview
Lab Unit 3: Layer-3 Switch Link Aggregation Configuration
3.1 Lab Overview
3.1.1 Lab Objectives
3.1.2 Lab Duration
3.1.3 Lab Preparation
3.2 Lab Plan
3.2.1 Network Topology and Data Plan
3.3 Lab Tasks and Steps
3.3.1 Task 1: Static Aggregation
3.3.2 Task 2: Dynamic Aggregation
3.3.3 Task 3: Verification
3.4 Summary and Reflection
3.4.1 Lab Summary
PTN Intermediate Exam Questions
Multiple choice:
1. In PTN tunnel linear protection switching, the lowest-priority trigger is (A): A. Manual switch; B. Protection signal fail; C. Working signal fail; D. Working signal degrade.
2. When expanding or shrinking a ring on the ZXCTN 6300 with 1:1 linear protection, if the protection group is switched manually at one end only, the other end (A): A. also switches automatically; B. does not switch automatically and must be switched manually; C. enters an unknown state; D. switches after a 1-minute delay.
3. In the pseudowire dual-homing protection switching mechanism, the highest-priority trigger is (B): A. Manual switch; B. Protection lockout; C. Forced switch; D. Signal fail.
4. In PTN tunnel linear protection, the lowest-priority trigger is (D): A. Protection lockout; B. Protection signal fail; C. Forced switch; D. Working signal fail.
5. Which statement about LAG protection classification is wrong? (A): A. Depending on whether LACP is enabled on member ports, link aggregation is divided into static and dynamic modes; B. Dynamic LAG protection is classified into active/standby and load-sharing modes; C. The dynamic LAG negotiation mode (LACP_Activity) has two settings, active and passive; D. Dynamic LAG timeout modes are long timeout and short timeout.
6. For the 6300, considering labels and performance, the recommended total number of tunnels should not exceed (B): A. 1K; B. 2K; C. 4K; D. 8K.
7. To keep the device running normally, the main control board temperature must stay below (B): A. 50°C; B. 60°C; C. 70°C; D. 80°C.
8. When a ZXCTN 610C device is configured as the root node of an EPTREE service, it supports at most (D) leaf nodes: A. 8; B. 16; C. 32; D. 64.
9. When CV packet detection is enabled in Y.1731 OAM, the configurable CV transmission periods do not include (C): A. 3.3 ms; B. 100 ms; C. 3.3 s; D. 10 ms.
10. In the Y.1731 OAM standard, continuity check can be used for fault management, performance monitoring, or protection switching. When a MEP receives a CV/CC frame whose MEL is lower than the destination MEP's MEL and whose MEG ID also differs, the alarm raised is (B): A. Mismerge defect; B. Wrong MEL; C. Wrong MEP; D. Loss of continuity.
11. In the Y.1731 standard, a packet is marked as a TMP tunnel-layer OAM packet by (B): A. a tunnel-layer label of 13; B. a label of 13 one layer inside the tunnel label; C. a label of 20 one layer outside the tunnel label; D. a tunnel-layer label of 20.
12. Which statement about handling OAM faults is correct? (C): A. If NE A reports TMC-LOC and NE Z has no alarm, A's delivery may be abnormal, with the PW not bound to the tunnel; B. If NE A reports TMP/TMC-RDI and NE Z has no alarm, A may be a false alarm, which can be confirmed by loopback testing; C. If an LB test from A to Z succeeds, there is no problem at all with the basic data between A and Z; D. When unexpected UNPhb, UNPeriod, UNMep and similar conditions appear, the cause is always inconsistent MEP configuration at the two ends.
13. On the 6000 1.1 and 9000 32 platform versions, to avoid NMS link loss and TCP resource exhaustion caused by large numbers of idle telnet connections, the telnet maximum idle timeout must be configured on the device as (C): A. 10 minutes; B. 15 minutes; C. 30 minutes; D. 60 minutes.
14. A newly configured tunnel reports a LOC alarm at one end; the cause cannot be (A): A. inconsistent OAM channel type parameters on the two PEs; B. incorrect basic data configuration; C. abnormal optical power on some link; D. inconsistent OAM parameters at the two ends.
15. To check the consistency of the ROS platform and the agent database, you should (B): A. check with the DBCheck tool; B. upload and compare the data; C. use the device inspection tool; D. use the agent information collection tool.
16. By area hierarchy, an IS device cannot work at (D): A. level-1 (intra-area devices); B. level-2 (inter-area devices); C. level-1-2 (intra- and inter-area devices); D. level-0 (intra-area devices).
17. Which statement about OSPF is wrong? (D): A. Fast convergence: link-state updates are flooded quickly to keep databases synchronized, and routing tables are computed in step; B. Loop-free routing: the shortest-path-first (SPF) algorithm ensures no loops arise; C. Route aggregation reduces routing-table pressure; D. Updates are sent by broadcast.
18. Which statement about the network types OSPF supports is wrong? (E): A. Broadcast networks are supported; B. Non-broadcast multi-access networks are supported; C. Point-to-point networks are supported; D. Point-to-multipoint networks are supported; E. Virtual links are not supported.
19. Which description of the R bit in the CES pseudowire control word is wrong? (B): A. When the local PSN is in packet-loss state, the R bit is set; B. When the PSN leaves the packet-loss state, the R bit setting must be cleared; C. A local TDM AC fault requires setting the R bit; D. After the remote end enters packet-loss state, the R bit is set in the control word it sends to the local end.
20. Which statement about the similarities between PTN and CE is wrong? (B): A. In most of our current scenarios, the management-plane routing protocol of both kinds of network is OSPF or DCN; B. The control-plane protocol of both is OSPF or IS-IS; C. Both support link aggregation (LAG); D. Both support L2/L3 VPN bridging.
21. Which explanation of the E1-AIS alarm is correct? (B): A. Frame-loss alarm: three consecutive incorrect FAS signals received; B. Alarm indication signal: in 512 consecutive bits (2 frames) of the E1 signal, no more than 2 zeros are detected; C. In 2 or 5 consecutive double-frames of the E1 signal, the A bit of the frame header is detected as 1; D. CRC multiframe-loss alarm.
22. In the CES packet encapsulation format, the optional field is (D): A. L2 header; B. MPLS label; C. control word (CW); D. RTP header.
23. Which statement about the differences between PTN and CE is wrong? (C): A. Current MPLS-TP generally needs no separately deployed control plane; B. In current mainstream MPLS-TP, tunnel labels are assigned by the NMS; C. A PTN tunnel consists of two unidirectional tunnels, while a CE tunnel is a single bidirectional one; D. PTN uses OAM as its detection mechanism, while CE generally uses BFD.
24. From ZXCTN 6100 V1.1p01B12 / ZXCTN 6110 V1.1p01b9 onward, the boot upgrade directory changes to (D): A. /system/; B. /FLASH/dataset; C. /FLASH/cfg; D. /FLASH/img.
25. Which statement about the standalone PTN NE batch upgrade tool is wrong? (D): A. The sqlserver database must be installed; B. Only installation scale one is available; C. Compared with the untrimmed U31 batch upgrade tool, it is simpler, uses fewer resources, and can run on a laptop; D. No license is needed after installation.
26. The PTN NE batch upgrade tool currently does not support which device type? (E): A. ZXCTN 6100; B. ZXCTN 6130; C. ZXCTN 6200; D. ZXCTN 6300; E. ZXCTN 9000 32 version.
27. On current mainstream PTN device versions, the Y.1731 OAM CHANNEL TYPE is (C): A. 32760; B. 32761; C. 32762; D. 32763.
28. The command that checks ZAR file integrity is (B): A. chkbin; B. chkzar; C. chkdata; D. chknow.
29. When a ZXCTN 6200/6300 device is upgraded to V1.10.00P01B27 from an engineering version released earlier than V1.10.00P01B27, you must use (A): A. ROS restart; B. upgrade restart; C. agent restart.
30. Which statement about PTN device LSP protection is wrong? (D): A. The 1+1 T-MPLS path-protection switching type is unidirectional: only the affected direction switches to the protection path, and the selectors at the two ends are independent; B. In 1+1 T-MPLS unidirectional protection switching, the switch is performed by the selector at the sink of the protection domain based entirely on local (protection-sink) information; C. In 1+1 T-MPLS, the working (protected) traffic is permanently bridged onto both the working and protection connections at the source of the protection domain; D. The device's continuity check (CC detection) runs only on the currently working LSP.
31. The correct priority ordering of common linear-protection switching triggers is: A. LP (protection lockout) > SF-P (protection-channel signal fail) > ... > SD (working-channel signal degrade) > MS (manual switch); B. LP (protection lockout) > SF-P (protection-channel signal fail) > ... > SD (working-channel signal degrade) > MS (manual switch); C. LP (protection lockout) > SF-P (protection-channel signal fail) > ... > MS (manual switch) > SD (working-channel signal degrade); D. SF-P (protection-channel signal fail) > LP (protection lockout) > ...
32. To avoid affecting the normal operation of the device...
33. ...B. To test the SSF alarm function, the AIS function must be enabled; C. When on-demand LM is enabled, local or remote LM can be set, along with packet forwarding priority, transmission interval, reporting interval, test duration, and other parameters; D. The CV packet forwarding priority can be set arbitrarily.
34. Which statement about the OSPF algorithm is wrong? (D) ...
Keywords: Torus, routing, placement, bisection, interconnection network, edge separator, congestion.
1 Introduction
Meshes and torus based interconnection networks have been utilized extensively in the design of parallel computers in recent years [5]. This is mainly due to the fact that these families of networks have topologies which reflect the communication pattern of a wide
variety of natural problems, and at the same time they are scalable, and highly suitable for hardware implementation. An important factor determining the efficiency of a parallel algorithm on a network is the efficiency of communication itself among processors. The network should be able to handle a "large" number of messages without exhibiting degradation in performance. Throughput, the maximum amount of traffic which can be handled by the network, is an important measure of network performance [3]. The throughput of an interconnection network is in turn bounded by its bisection width, the minimum number of edges that must be removed in order to split the network into two parts each with about equal number of processors [8]. Here, following Blaum, Bruck, Pifarre, and Sanz [3, 4], we consider the behavior of torus networks with bidirectional links under heavy communication load. We assume that the communication latency is kept minimum by routing the messages through only shortest (minimal length) paths. In particular, we are interested in the scenario where every processor in the network is sending a message to every other processor (also known as complete exchange or all-to-all personalized communication). This type of communication pattern is central to numerous parallel algorithms such as matrix transposition, fast Fourier transform, distributed table-lookup, etc. [6], and central to efficient implementation of high-level computing models such as the PRAM and Bulk-Synchronous Parallel (BSP). In Valiant's BSP-model for parallel computation [14] for example, routing of h-relations, in which every processor in the network is the source and destination of at most h packets, forms the main communication primitive. The complete-exchange scenario that we investigate in this paper has been studied and shown to be useful for efficient routing of both random and arbitrary h-relations [7, 12, 13].
The network of d-dimensional k-torus is modeled as a directed graph where each node represents either a router or a processor-router pair, depending on whether or not a processor is attached at this node, and each edge represents a communication link between two adjacent nodes. Hence, every node in the network is capable of message routing, i.e. directly receiving from and sending to its neighboring nodes. A fully-populated d-dimensional k-torus, where each node has a processor attached, contains k^d processors. Its bisection width is 4k^(d-1) (k even), which gives k^d/2 processors on each component of the bisection. Under the complete-exchange scenario, the number of messages passing through the bisection in both directions is 2(k^d/2)(k^d/2). Dividing by the bisection bandwidth, we find that there must exist an edge in the bisection with a load of k^(d+1)/8. This means that unlike multistage networks, the maximum load on a link is not linear in the number of processors injecting messages into the network. To alleviate this problem, Blaum et al. [3, 4] have proposed partially-populated tori. In this model, the underlying network is toroidal, but the nodes do not all inject messages into the network. We think of the processors as attached to a (relatively small) subset of nodes (called a placement), while the other nodes are left as routing nodes. This is similar to the case of a
Parallel Computing, Chapter 2: Parallel Machine Interconnection and Basic Communication Operations
Embedding (2)
[Figure: 16 nodes labeled with 4-bit binary codes 0000 through 1111, arranged in a grid to illustrate the embedding]
National High Performance Computing Center (Hefei)
800 Mbps. FCSI vendors are also bringing out Fibre Channel with higher future speeds (1, 2, or 4 Gbps). The value of Fibre Channel has been proven by some of today's gigabit LANs, which are based on Fibre Channel technology.
The flexibility of its networking topologies is Fibre Channel's main asset: it supports point-to-point, arbitrated loop, and switched fabric connections.
FDDI:
Fiber Distributed Data Interface (FDDI)
Buses: PCI, VME, Multibus, Sbus, MicroChannel
The main issues in multiprocessor bus systems include bus arbitration, interrupt handling, protocol conversion, fast synchronization, cache-coherence protocols, split transactions, bus bridges, and hierarchical bus extension.
[Figure: a CPU board with local memory (LM), CPU, and an IOC for local peripherals (SCSI bus), connected by a local bus; a memory board with memory units on the memory bus]
Interstage Connection:
perfect shuffle, butterfly, multiway shuffle, crossbar, cube connection. An n-input Omega network needs log2(n) stages of 2x2 switches; the Cedar [2] multiprocessor at the University of Illinois adopted an Omega network. The Cray Y-MP multistage network supports 8 vector processors and 256...
Routing algorithm
Abstract: Routing algorithms can be distinguished by many features, depending on the designer's specific objectives. There are many kinds of routing algorithms, with different effects on network and router resources. The purpose of a routing algorithm is to define a set of rules for transferring units of data, known as packets, from one node to another.

Keywords: hop, path length, least cost, update time, time delay, Dijkstra's algorithm, Bellman-Ford algorithm.

Introduction: A routing algorithm aims to improve the function of a routing protocol with the least overhead. Our book discusses only routing in switched networks, including circuit-switching and packet-switching networks. In circuit-switching networks, to cope with the growing demands on public telecommunication networks, virtually all providers have moved away from the static hierarchical approach to a dynamic approach. In a packet-switching network, the selection of a route is generally based on performance criteria such as:
1. choosing the minimum-hop route (least-cost routing);
2. decision time and place;
3. network information source and update time.
Hence, a large number of routing strategies have evolved for dealing with the routing requirements of packet-switching networks, including fixed routing, flooding, random routing, and adaptive routing. The original routing algorithm, designed in 1969, was a distributed adaptive algorithm, a version of the Bellman-Ford algorithm. After some years of experience, it was replaced by a quite different one using delay as the performance criterion. The third generation damps routing oscillations and reduces routing overhead. A routing algorithm should be flexible. The key technological factors are as follows:
1. the shortest route (least hops or shortest path length) or the best route;
2. whether the communication subnet adopts virtual circuits or datagrams;
3. a centralized or a distributed routing algorithm;
4. consideration of the network topology, traffic, and time delay;
5. static or dynamic routing.
The most common routing algorithms are least-cost algorithms, which are variations of Dijkstra's algorithm and the Bellman-Ford algorithm.

System Model: Examples of adaptive-routing algorithms are the Routing Information Protocol (RIP) and the Open Shortest Path First (OSPF) protocol. Adaptive routing dominates the Internet. However, configuring routing protocols often requires a skilled touch; networking technology has not developed to the point of completely automating routing. In a P2P logical network, a simple ROP route corresponds to a complex RON route in the communication network. The key point of a routing algorithm is how to pass data to the destination quickly and reliably.
A Dual-Port NoC Model on the Torus Topology and Its Performance Analysis
...NoC, at the level of the communication architecture,
Received Date: 2016-07
* Funded by the National Natural Science Foundation of China (61106020, 61204024, 61179036).
Journal of Electronic Measurement and Instrumentation, Vol. 31
...a key factor in performance. In ..., the communication capability of the on-chip network has exceeded that of the computing units and become the limit on overall [performance] [5]. Usually, NoC-based multi-core system architectures only...
1 Introduction
At the communication-architecture level, [NoC] effectively solves the poor scalability and low parallelism of bus-based SoCs, and, by adopting a globally asynchronous, locally synchronous (GALS) mechanism, it overcomes the extra power and area overhead caused by global clock synchronization in the SoC. NoC design covers the topology, the routing algorithm, and the switching mechanism, and the topology has a major influence on the latter two. The influence of the on-chip network topology on the overall performance and power of chip multiprocessors is growing, especially in the many instances of multi-core systems aimed at high-density computing...
Experimental results show that, in single-transaction experiments, the diagonal dual-port NoC based on the torus structure improves average throughput and average packet latency by up to 8.33% and 5.59% over the single-port network, and in dual-transaction experiments by up to 9.11% and 5.43% over the dual-port network with the same dimension-order routing.
Vol. 31, No. 3, March 2017
Journal of Electronic Measurement and Instrumentation
JOURNAL OF ELECTRONIC MEASUREMENT AND INSTRUMENTATION
Vol. 31 No. 3, p. 361
DOI: 10.13382/j.jemi.2017.03.005
A Dual-Port NoC Model on the Torus Topology and Its Performance Analysis*
宋宇鲲 钱庆松 张多利
(Institute of Microelectronic Design, Hefei University of Technology, Hefei 230009, China) Abstract:
Exercises and Answers for "Docker Container Technology: Configuration, Deployment, and Application"
Project 1: Installing Docker. Multiple choice: 1. Which of the following statements about Docker is correct? ( )
A. Docker cannot publish applications to the cloud for deployment.
B. Docker packages an application and its dependencies into a portable image.
C. When operating a container, Docker must care about what software is inside it.
D. Containers depend on the host operating system's kernel version, so Docker is limited to one OS platform.
2. Which of the following is NOT an advantage of Docker? ( )
A. Fast, consistent delivery of applications.
B. Responsive deployment and scaling of applications.
C. Docker manages the entire container lifecycle but cannot guarantee a consistent user interface.
D. Running more workloads on the same hardware.
3. In a containerized development workflow, what is distributed to all developers at the start of a project? ( )
A. Dockerfile  B. Docker image  C. Source code  D. Base image
4. Which of the following statements about basic docker command usage is incorrect? ( )
A. Short, single-character options can be combined.
B. When a Boolean option is used without a value, Docker treats the option value as false.
C. Multi-value options can be defined multiple times on a single command line.
D. Long single-line commands can be wrapped with a line-continuation character.
Short answer: 1. What is Docker? 2. How do containers differ from virtual machines? 3. What components does the Docker Engine include? 4. Briefly describe the Docker architecture.
5. What underlying technologies does Docker use? 6. What types of Docker command-line interfaces are there? Project 2: Docker Quick Start. Multiple choice: 1. Which of the following image names is the complete form? ( )
A. myregistryhost/fedora/httpd:version1.0
B. myregistryhost:5000/httpd:version1.0
C. myregistryhost:5000/fedora/httpd
D. myregistryhost:5000/fedora/httpd:version1.0
2. Which statement about Docker image operations is incorrect? ( )
A. You can list untagged images with the dangling Boolean.
PTN Exam Topics: Answers
1. In the IEEE 1588v2 clock architecture, __B__ has multiple physical ports communicating with the network: A. OC; B. BC (correct); C. TC.
2. When configuring interfaces on the T2000, the port mode of the PTN Ethernet service interfaces used to carry tunnels is __B__: A. basic attributes; B. Layer-2 attributes; C. Layer-3 attributes.
3. The board that does not support LMSP linear multiplex-section protection is __D__: A. STM-1 optical interface board; B. POD41; C. AD1; D. ASD1.
4. The required MPLS tunnel 1:1 switching time for PTN equipment is __D__: A. 10 ms; B. 15 ms; C. 30 ms; D. 50 ms (correct).
5. On the OptiX PTN 3900, the MQ1 working with an interface board can implement __C__ TPS protection: A. 1:2; B. 1:3; C. 1:4; D. 1:8.
6. The wrong statement about NE ID and IP planning principles is __C__: A. every NE must have its own NE ID; B. NEs in the same DCN management network cannot share an ID; C. 127.0.0.2 can be used as an NE IP address; D. every NE must have a unique IP address.
7. The wrong statement about in-band DCN planning is __D__: A. when managing NEs with the T2000, no more than 60 non-gateway NEs may be attached through one gateway NE; B. in mixed networking with third-party equipment, that equipment must support tagging DCN packets with a specific VLAN (default 4094, configurable from the NMS); C. for ETH ports (EX2/EFG2/EG16), the recommended DCN bandwidth is 1 Mbit/s for non-gateway NEs and 2 Mbit/s for gateway NEs; in other scenarios the default DCN bandwidth (512 kbit/s) applies; D. for E1 ports (CD1/MQ1/MD1), a DCN bandwidth of 1 Mbit/s is recommended when PTN chassis equipment (3900/2900/1900) is attached.
8. The wrong statement about LAG protection is __A__: A. the LAG load-sharing mode cannot guarantee QoS well, so in PTN products it can only be used on the user side, not the network side; B. LAG protects Ethernet ports (GE/FE); C. non-load-sharing: normally traffic runs only on the working port, with none on the protection port; D. LAG protects low-speed service processing boards (MD1/MQ1).
9. The correct statement among these common terms is __B__: A. Lockout: receive and send data on the primary channel regardless of the state of either channel; B. Forced switch (Force): when the standby channel is normal, switch regardless of whether the primary channel is normal; C. Manual switch (Manual): when both channels are normal, switch to the standby channel to receive data; D. Exercise: trigger the sending and receiving of protocol packets to test the protection-switching protocol without actually switching.
10. The correct PTN product configuration flow is __A__: A. create the network, configure the interfaces used according to the service plan, configure tunnels, configure the services, then other configuration such as protection and clocking; B. create the network, configure the interfaces, other configuration, configure tunnels, configure services; C. configure tunnels first, then create the network and the rest; D. configure services first, then the rest.
11. The wrong statement about the POD41 board's functions and features is __D__: A. it provides 2 optical ports on the client side; B. it supports extracting the line-side clock; C. its ports do not support automatic loopback release; D. it provides 4 GE active/standby data ports on the system side.
12. The Ethernet service processing board is __A__: A. EG16; B. ETFC; C. EFG2; D. POD41.
13. If a board is present, it is shown in __B__: A. light blue; B. green; C. yellow; D. red.
14. The default NE user and password used when creating an NE are __D__: A. root/root; B. root/admin; C. admin/admin; D. root/password.
15. In normal operation, the MD1 board's ACT LED is __A__: A. steady on; B. 1 s on, 1 s off; C. off; D. blinking at 100 ms intervals.
16. The default password of the T2000 server admin user is __C__: A. admin; B. win2000; C. T2000; D. root.
17. As the processing motherboard, the MP1 does not support which processing subcard? (D) ...
Can the Production Network Be the Testbed
Rob Sherwood*, Glen Gibb†, Kok-Kiong Yap†, Guido Appenzeller, Martin Casado, Nick McKeown†, Guru Parulkar†
*Deutsche Telekom Inc. R&D Lab, Los Altos, CA USA; †Stanford University, Palo Alto, CA USA; Nicira Networks, Palo Alto, CA USA
Abstract
A persistent problem in computer network research is validation. When deciding how to evaluate a new feature or bug fix, a researcher or operator must trade off realism (in terms of scale, actual user traffic, real equipment) and cost (larger scale costs more money, real user traffic likely requires downtime, and real equipment requires vendor adoption, which can take years). Building a realistic testbed is hard because "real" networking takes place on closed, commercial switches and routers with special-purpose hardware. But if we build our testbed from software switches, they run several orders of magnitude slower. Even if we build a realistic network testbed, it is hard to scale, because it is special purpose and is in addition to the regular network. It needs its own location, support and dedicated links. For a testbed to have global reach takes investment beyond the reach of most researchers. In this paper, we describe a way to build a testbed that is embedded in, and thus grows with, the network. The technique, embodied in our first prototype, FlowVisor, slices the network hardware by placing a slicing layer between the control plane and the data plane.
We demonstrate that FlowVisor slices our own production network, with legacy protocols running in their own protected slice, alongside experiments created by researchers. The basic idea is that if unmodified hardware supports some basic primitives (in our prototype, OpenFlow, but others are possible), then a worldwide testbed can ride on the coat-tails of deployments, at no extra expense. Further, we evaluate the performance impact and describe how FlowVisor is deployed at seven other campuses as part of a wider evaluation platform.
1 Introduction
For many years the networking research community has grappled with how best to evaluate new research ideas. [Figure 1: Today's evaluation process is a continuum from controlled but synthetic to uncontrolled but realistic testing (whiteboard plan, C/C++/Java, NS2, OPNet, custom, VINI, Emulab, VMs, FlowVisor), with no clear path to vendor adoption.] Simulation [17, 19] and emulation [25] provide tightly controlled environments for repeatable experiments, but lack scale and realism; neither extends all the way to the end user nor carries real user traffic. Special isolated testbeds [10, 22, ...] allow testing at scale, and can carry real user traffic, but they are usually dedicated to a particular type of experiment and are beyond the budget of most researchers. Without the means to realistically test an idea, there has been relatively little technology transfer from the research lab to real-world networks. Vendors are understandably reluctant to add features before they have been thoroughly tested under realistic conditions with real user traffic. This slows the pace of innovation, and many good ideas never see the light of day. Peeking over the wall to the distributed systems community, things are much better. PlanetLab has proved invaluable as a way to test new distributed applications at scale (over 1,000 nodes worldwide), realistically (it runs real services, and real users opt in), and offers a straightforward path to real deployment (services developed in a PlanetLab slice are easily ported to dedicated servers).
In the past few years, the networking research community has sought an equivalent platform, funded by programs such as GENI [8], FIRE [6], etc. The goal is to allow new network algorithms, features, protocols or services to be deployed at scale, with real user traffic, on a real topology, at line rate, with real users, and in a manner such that the prototype service can easily be transferred to run in a production network. Examples of experimental new services might include a new routing protocol, a network load-balancer, novel methods for data center routing, access control, novel hand-off schemes for mobile users or mobile virtual machines, network energy managers, and so on. The network testbeds that come closest to achieving this today are VINI [1] and Emulab [25]: both provide a shared physical infrastructure allowing multiple simultaneous experiments to evaluate new services on a physical network. Users may develop code to modify both the data plane and the control plane within their own isolated topology. Experiments may run real routing software and expose their experiments to real network events. Emulab is concentrated in one location, whereas VINI is spread out across a wide area network. VINI and Emulab trade off realism for flexibility in three main ways. Speed: In both testbeds, packet processing and forwarding are done in software by a conventional CPU. This makes it easy to program a new service, but means it runs much slower than in a real network. Real networks in enterprises, data centers, college campuses and backbones are built from switches and routers based on ASICs. ASICs consistently outperform CPU-based devices in terms of data rate, cost and power; for example, a single switching chip today can process over 600 Gb/s [2]. Scale: Because VINI and Emulab don't run new networking protocols on real hardware, they must always exist as a parallel testbed, which limits their scale. It would, for example, be prohibitively expensive to build a VINI or Emulab testbed to evaluate data-center-scale
experiments requiring thousands or tens of thousands of switches, each with a capacity of hundreds of gigabits per second. VINI's geographic scope is limited by the locations willing to host special servers (42 today). Without enormous investment, it is unlikely to grow to global scale. Emulab can grow larger, as it is housed under one roof, but is still unlikely to grow to a size representative of a large network. Technology transfer: An experiment running on a network of CPUs takes considerable effort to transfer to specialized hardware; the development styles are quite different, and the development cycle of hardware takes many years and requires many millions of dollars. But perhaps the biggest limitation of a dedicated testbed is that it requires special infrastructure: equipment has to be developed, deployed, maintained and supported; and when the equipment is obsolete it needs to be replaced. Working testbeds rarely last more than one generation of technology, and so the immense engineering effort is quickly lost. Our goal is to solve this problem. We set out to answer the following question: can we build a testbed that is embedded into every switch and router of the production network (in college campuses, data centers, WANs, enterprises, WiFi networks, and so on), so that the testbed would automatically scale with the global network, riding on its coat-tails with no additional hardware? If this were possible, then our college campus networks, for example, interconnected as they are by worldwide backbones, could be used simultaneously for production traffic and new WAN routing experiments; similarly, an existing data center with thousands of switches could be used to try out new routing schemes. Many of the goals of programs like GENI and FIRE could be met without needing dedicated network infrastructure. In this paper, we introduce FlowVisor, which aims to turn the production network itself into a testbed (Figure 1). That is, FlowVisor allows experimenters to evaluate ideas directly in the
production network (not running in a dedicated testbed alongside it) by "slicing" the hardware already installed. Experimenters try out their ideas in an isolated slice, without the need for dedicated servers or specialized hardware.
1.1 Contributions. We believe our work makes five main contributions:
Runs on deployed hardware and at real line rates. FlowVisor introduces a software slicing layer between the forwarding and control planes on network devices. While FlowVisor could slice any control-plane message format, in practice we implement the slicing layer with OpenFlow [16]. To our knowledge, no previously proposed slicing mechanism allows a user-defined control plane to control the forwarding in deployed production hardware. Note that this would not be possible with VLANs: while they crudely separate classes of traffic, they provide no means to control the forwarding plane. We describe the slicing layer in §2 and FlowVisor's architecture in §3.
Allows real users to opt in on a per-flow basis. FlowVisor has a policy language that maps flows to slices. By modifying this mapping, users can easily try new services, and experimenters can entice users to bring real traffic. We describe the rules for mapping flows to slices in §3.2.
Ports easily to non-sliced networks. FlowVisor (and its slicing) is transparent to both data and control planes, and therefore the control logic is unaware of the slicing layer. This property provides a direct path for vendor adoption. In our OpenFlow-based implementation, neither the OpenFlow switches nor the controllers need be modified to interoperate with FlowVisor (§3.3).
Enforces strong isolation between slices. FlowVisor blocks and rewrites control messages as they cross the slicing layer. Actions of one slice are prevented from affecting another, allowing experiments to safely coexist with real production traffic. We describe the details of the isolation mechanisms in §4 and evaluate their effectiveness in §5.
Operates on deployed networks. FlowVisor has been deployed in our production campus network for the last 7 months. Our deployment consists of 20+ users, 40+ network devices, a production traffic slice, and four standing experimental slices. In §6, we describe our current deployment and future plans to expand into seven other campus networks and two research backbones in the coming year.
2 Slicing Control & Data Planes
On today's commercial switches and routers, the control plane and data plane are usually logically distinct but physically co-located. The control plane creates and populates the data plane with forwarding rules, which the data plane enforces. In a nutshell, FlowVisor assumes that the control plane can be separated from the data plane, and it then slices the communication between them. This slicing approach can work several ways: for example, there might already be a clean interface between the control and data planes inside the switch. More likely, they are separated by a common protocol (e.g., OpenFlow [16] or ForCES [7]). In either case, FlowVisor sits between the control and data planes, and from this vantage point enables a single data plane to be controlled by multiple control planes, each belonging to a separate experiment. With FlowVisor, each experiment runs in its own slice of the network. A researcher, Bob, begins by requesting a network slice from Alice, his network administrator. The request specifies his requirements, including topology, bandwidth, and the set of traffic (defined by a set of flows, or flowspace) that the slice controls. Within his slice, Bob has his own control plane where he puts the control logic that defines how packets are
forwarded and rewritten in his experiment. For example, imagine that Bob wants to create a new HTTP load-balancer to spread port 80 traffic over multiple web servers. He requests a slice: its topology should encompass the web servers, and its flowspace should include all flows with port 80. He is allocated a control plane where he adds his load-balancing logic to control how flows are routed in the data plane.
[Figure 2: Classical network device architectures have distinct forwarding and control logic elements (left). By adding a transparent slicing layer between the forwarding and control elements, FlowVisor allows multiple control logics to manage the same forwarding element (middle). In implementation, FlowVisor uses OpenFlow and sits between an OpenFlow switch, the forwarding element, and multiple OpenFlow controllers, the control logic (right).]
He may advertise his new service so as to attract users. Interested users "opt in" by contacting their network administrator to add their flows to the flowspace of Bob's slice. In this example, FlowVisor allocates a control plane for Bob and allows him to control the opted-in flows in the data plane, ... allowing him to control switches within his slice.
FlowVisor slices the network along multiple dimensions, including topology, bandwidth, and forwarding table entries. Slices are isolated from each other, so that actions in one slice, be they faulty, malicious, or otherwise, do not impact other slices.
2.1 Slicing OpenFlow
While architecturally FlowVisor can slice any data plane/control plane communication channel, we built our prototype on top of OpenFlow. OpenFlow [16, 18] is an open standard that allows researchers to directly control the way packets are routed in the network. As described above, in a classical network architecture, the control logic and the data path are co-located on the same device and communicate via an internal proprietary protocol and bus. In OpenFlow, the control logic is moved to an external controller (typically a commodity PC); the controller talks to the datapath (over the network itself) using the OpenFlow protocol (Figure 2, right).
[Figure 3: FlowVisor allows users (Doug) to delegate control of subsets of their traffic (VoIP, HTTP, a game) to distinct researchers (Alice, Bob, Cathy). Each research experiment runs in its own, isolated network slice.]
The OpenFlow protocol abstracts forwarding/routing directives as "flow entries". A flow entry consists of a bit pattern, a list of actions, and a set of counters. Each flow entry states "perform this list of actions on all packets in this flow", where a typical action is "forward the packet out port X" and the flow is defined as the set of packets that match the given bit pattern. The collection of flow entries on a network device is called the "flow table". When a packet arrives at a switch or router, the device looks up the packet in the flow table and performs the corresponding set of actions. If the packet doesn't match any entry, it is queued and a new-flow event is sent across the network to the OpenFlow controller. The controller responds by adding a new rule to the flow table to
FlowVisor slices the network along multiple dimensions, including topology, bandwidth, and forwarding table entries. Slices are isolated from each other, so that actions in one slice, be they faulty, malicious, or otherwise, do not impact other slices.

2.1 Slicing OpenFlow

While architecturally FlowVisor can slice any data plane/control plane communication channel, we built our prototype on top of OpenFlow. OpenFlow [16, 18] is an open standard that allows researchers to directly control the way packets are routed in the network. As described above, in a classical network architecture, the control logic and the data path are co-located on the same device and communicate via an internal proprietary protocol and bus. In OpenFlow, the control logic is moved to an external controller (typically a commodity PC); the controller talks to the datapath (over the network itself) using the OpenFlow protocol (Figure 2, right).

Figure 3: FlowVisor allows users (Doug) to delegate control of subsets of their traffic to distinct researchers (Alice, Bob, Cathy). Each research experiment runs in its own, isolated network slice.

The OpenFlow protocol abstracts forwarding/routing directives as "flow entries". A flow entry consists of a bit pattern, a list of actions, and a set of counters. Each flow entry states "perform this list of actions on all packets in this flow", where a typical action is "forward the packet out port X" and the flow is defined as the set of packets that match the given bit pattern. The collection of flow entries on a network device is called the "flow table". When a packet arrives at a switch or router, the device looks up the packet in the flow table and performs the corresponding set of actions. If the packet doesn't match any entry, the packet is queued and a new flow event is sent across the network to the OpenFlow controller. The controller responds by adding a new rule to the flow table to
handle the queued packet. Subsequent packets in the same flow will be handled without contacting the controller. Thus, the external controller need only be contacted for the first packet in a flow; subsequent packets are forwarded at the switch's full line rate.

Architecturally, OpenFlow exploits the fact that modern switches and routers already logically implement flow entries and flow tables, typically in hardware as TCAMs. As such, a network device can be made OpenFlow-compliant via firmware upgrade. Note that while OpenFlow allows researchers to experiment with new network protocols on deployed hardware, only a single researcher can use/control an OpenFlow-enabled network at a time. As a result, without FlowVisor, OpenFlow-based research is limited to isolated testbeds, limiting its scope and realism. Thus, FlowVisor's ability to slice a production network is an orthogonal and independent contribution to OpenFlow-like software-defined networks.

3 FlowVisor Design

In line with our main goal, FlowVisor aims to use the production network as a testbed. In operation:

• The FlowVisor slices the network by slicing each of the network's packet forwarding devices (e.g., switches and routers) and links (Figure 3).
• In the FlowVisor, resources are sliced in terms of their bandwidth, topology, forwarding table entries, and device CPU. Each slice has control over a set of flows, called its flowspace. Users can arbitrarily add (opt-in) and remove (opt-out) their own flows from a slice's flowspace at any time (§3.2).
• Each slice has its own distinct, programmable control plane that manages how packets are forwarded and rewritten for traffic in the slice's flowspace. In practice, each slice owner implements their slice-specific control logic as an OpenFlow controller. The FlowVisor interposes between data and control planes by proxying connections between OpenFlow switches and each slice controller (§3.3).
• Slices are defined using a slice definition policy language. The language specifies the slice's resource limits, flowspace, and controller's location in terms of IP and TCP port-pair (§3.4).

3.1 Slicing Network Resources

Slicing a network means correctly slicing all
of the corresponding network resources. There are four primary slicing dimensions:

Topology. Each slice has its own view of network nodes (e.g., switches and routers) and the connectivity between them. In this way, slices can experience simulated network events such as link failure and forwarding loops.

Bandwidth. Each slice has its own fraction of bandwidth on each link. Failure to isolate bandwidth would allow one slice to affect, or even starve, another slice's throughput.

Device CPU. Each slice is limited in what fraction of each device's CPU it can consume. Switches and routers typically have very limited general-purpose computational resources. Without proper CPU slicing, switches will stop forwarding slow-path packets (§5.3.2), drop statistics requests, and, most importantly, will stop processing updates to the forwarding table.

Forwarding Tables. Each slice has a finite quota of forwarding rules. Network devices typically support a finite number of forwarding rules (e.g., TCAM entries). Failure to isolate forwarding entries between slices might allow one slice to prevent another from forwarding packets.

Figure 4: The FlowVisor intercepts OpenFlow messages from guest controllers (1) and, using the user's slicing policy (2), transparently rewrites (3) the message to control only a slice of the network. Messages from switches (4) are forwarded only to guests if they match their slice policy.

3.2 Flowspace and Opt-In

The flows in a slice's flowspace form a well-defined (but not necessarily contiguous) subspace of the entire space of possible packet headers. Abstractly, if packet headers have n bits, then the set of all possible packet headers forms an n-dimensional space.
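A region of this header space can be represented concretely as an ordered list of match rules, first match wins, which is the form FlowVisor's slice policies take (§3.4). A minimal sketch in C, with a deliberately simplified two-field header and hypothetical names (real OpenFlow matching covers many more fields):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified header: real matching also covers MAC addresses,
   VLANs, IP protocol, and so on. All names here are illustrative. */
typedef struct { uint16_t tcp_port; uint32_t src_ip; } Header;
typedef enum { ACT_ALLOW, ACT_READ_ONLY, ACT_DENY } Action;
typedef struct {
    int wild_port;      /* 1 = match any tcp_port */
    uint16_t tcp_port;
    int wild_ip;        /* 1 = match any src_ip */
    uint32_t src_ip;
    Action action;
} Rule;

/* First-match classification over an ordered rule list,
   like a firewall rule chain. */
Action classify(const Rule *rules, int n, Header h) {
    for (int i = 0; i < n; i++) {
        if ((rules[i].wild_port || rules[i].tcp_port == h.tcp_port) &&
            (rules[i].wild_ip   || rules[i].src_ip   == h.src_ip))
            return rules[i].action;
    }
    return ACT_DENY;  /* no rule matched */
}
```

With one Allow rule per opted-in user plus a catch-all, this mirrors how Bob's slice would see only opted-in HTTP flows while everything else falls through to a default.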
An arriving packet is a single point in that space representing all packets with the same header. Similar to the geometric representation used to describe access control lists for packet classification [14], we use this abstraction to partition the space into regions (flowspace) and map those regions to slices.

The flowspace abstraction helps us manage users who opt-in. To opt-in to a new experiment or service, users signal to the network administrator that they would like to add a subset of their flows to a slice's flowspace. Users can precisely decide their level of involvement in an experiment. For example, one user might opt-in all of their traffic to a single experiment, while another user might just opt-in traffic for one application (e.g., port 80 for HTTP), or even just a specific flow (by exactly specifying all of the fields of a header). In our prototype the opt-in process is manual; but in an ideal system, the user would be authenticated and their request checked automatically against a policy.

For the purposes of a testbed, we concluded that flow-level opt-in is adequate; in fact, it seems quite powerful. Another approach might be to opt-in individual packets, which would be more onerous.

3.3 Control Message Slicing

The FlowVisor is a slicing layer interposed between the data and control planes of each device in the network. The FlowVisor acts as a transparent proxy between OpenFlow-enabled network devices (the data planes) and multiple OpenFlow slice controllers (acting as programmable control logic). All OpenFlow messages between the switch and a controller are sent through FlowVisor. FlowVisor uses the OpenFlow protocol to communicate upwards to the slice controllers and downwards to the OpenFlow switches. Because FlowVisor is transparent, the slice controllers require no modification and believe they are communicating directly with the switches.

We illustrate the FlowVisor's operation by extending the example from §2 (Figure 4). Recall that the researcher Bob has created a slice to load-balance HTTP traffic over a set of web servers. While his controller would work on any HTTP traffic, the FlowVisor policy slices the network so that Bob only sees traffic from users that have opted-in to his slice. His slice controller doesn't know the network is sliced, so doesn't realize it only sees a subset of the HTTP traffic. The slice
controller thinks it controls, i.e., inserts flow entries for, all HTTP traffic from every user. When Bob's controller sends a flow entry to the switches (e.g., to redirect HTTP traffic to a particular server), FlowVisor intercepts it (Figure 4-1), examines Bob's slice policy (Figure 4-2), and rewrites the entry to include only traffic from the allowed sources (Figure 4-3). Hence the controller is controlling only the flows it is allowed to, without knowing that the FlowVisor is slicing the network underneath. Similarly, messages that are sourced from the switch (e.g., a new flow event, Figure 4-4) are only forwarded to guest controllers whose flowspace matches the message. That is, a message will only be forwarded to Bob if the new flow is HTTP traffic from a user that has opted-in to his slice.

Thus, FlowVisor enforces transparency and isolation between slices by inspecting, rewriting, and policing OpenFlow messages as they pass. Depending on the resource allocation policy, message type, destination, and content, the FlowVisor will forward a given message unchanged, translate it to a suitable message and forward it, or "bounce" the message back to its sender in the form of an OpenFlow error message. For a message sent from a slice controller to a switch, FlowVisor ensures that the message acts only on traffic within the resources assigned to the slice. For a message in the opposite direction (switch to controller), the FlowVisor examines the message content to infer the corresponding slice(s) to which the message should be forwarded. Slice controllers only receive messages that are relevant to their slices.

Figure 5: FlowVisor can trivially slice an already-sliced network recursively, creating hierarchies of slices.

Thus, from a slice controller's perspective, FlowVisor appears as a switch (or a network of switches); from a switch's perspective, FlowVisor appears as a controller. FlowVisor does not require a 1-to-1 mapping between FlowVisors and switches.

3.4 Slice Definition Policy

The slice policy defines the network resources, flowspace, and
OpenFlow slice controller allocated to each slice. Each policy is described by a text configuration file, one file per slice. In terms of resources, the policy defines the fraction of total link bandwidth available to this slice (§4.3) and the budget for switch CPU and forwarding table entries. The network topology is specified as a list of network nodes and ports.

The flowspace for each slice is defined by an ordered list of tuples similar to firewall rules. Each rule description has an associated action, e.g., allow, read-only, or deny, and is parsed in the specified order, acting on the first matching rule. The rules define the flowspace a slice controls. Read-only rules allow slices to receive OpenFlow control messages and query switch statistics, but not to write entries into the forwarding table. Rules are allowed to overlap, as described in the example below.

Let's take a look at an example set of rules. Alice, the network administrator, wants to allow Bob to conduct an HTTP load-balancing experiment. Bob has convinced some of his colleagues to opt-in to his experiment. Alice wants to maintain control of all traffic that is not part of Bob's experiment. She also wants to passively monitor all network performance, to keep an eye on Bob and the production network. Here is a set of rules Alice could install in the FlowVisor:

Bob's Experimental Network includes all HTTP traffic to/from users who opted into his experiment. Thus, his network is described by one rule per user: Allow: tcp_port:80 and ip=user_ip. OpenFlow messages from the switch matching any of these rules are forwarded to Bob's controller. Any flow entries that Bob tries to insert are modified to meet these constraints.

The Production Network is the complement of Bob's network. For each user in Bob's experiment, the production traffic network has a negative rule of the form: Deny: tcp_port:80 and ip=user_ip. The production network would have a final rule that matches all flows: Allow: all. Thus, only OpenFlow messages that do not go to Bob's network are sent to the production network controller. The production controller is allowed to insert
forwarding entries so long as they do not match Bob's traffic.

Alice's Monitoring Network is allowed to see all traffic in all slices. It has one rule: Read-only: all.

This rule-based policy, though simple, suffices for the experiments and deployment described in this paper. We expect that future FlowVisor deployments will have more sophisticated policy needs, and that researchers will create custom resource allocation policies.

4 FlowVisor Implementation

We implemented FlowVisor in approximately 8,000 lines of C and the code is publicly available for download. The notable parts of the implementation are the transparency and isolation mechanisms. Critical to its design, FlowVisor acts as a transparent slicing layer and enforces isolation between slices. In this section, we describe how FlowVisor rewrites control messages, both down to the forwarding plane and up to the control plane, to ensure both transparency and strong isolation. Because isolation mechanisms vary by resource, we describe each resource in turn: bandwidth, switch CPU, and forwarding table entries. In our deployment, we found that the switch CPU was the most constrained resource, so we devote particular care to describing its slicing mechanisms.

4.1 Messages to the Control Plane

FlowVisor carefully rewrites messages from the OpenFlow switch to the slice controller to ensure transparency. First, FlowVisor only sends control plane messages to a slice controller if the source switch is actually in the slice's topology. Second, FlowVisor rewrites OpenFlow feature negotiation messages so that the slice controller only sees the physical switch ports that appear in the slice. Third, OpenFlow port-up/port-down messages are similarly pruned and only forwarded to the affected slices. Using these message rewriting techniques, FlowVisor can easily simulate network events, such as link and node failures.

4.2 Messages to the Forwarding Plane

In the opposite direction, FlowVisor also rewrites messages from the slice controller to the OpenFlow switch.
The most important messages to the forwarding plane are insertions and deletions to the forwarding table. Recall (§2.1) that in OpenFlow, forwarding rules consist of a flow rule definition, i.e., a bit pattern, and a set of actions. To ensure both transparency and isolation, the FlowVisor rewrites both the flow definition and the set of actions so that they do not violate the slice's definition.

Given a forwarding rule modification, the FlowVisor rewrites the flow definition to intersect with the slice's flowspace. For example, Bob's flowspace gives him control over HTTP traffic for the set of users, e.g., users Doug and Eric, that have opted into his experiment. If Bob's slice controller tried to create a rule that affected all of Doug's traffic (HTTP and non-HTTP), then the FlowVisor would rewrite the rule to only affect the intersection, i.e., only Doug's HTTP traffic. If the intersection between the desired rule and the slice definition is null, e.g., Bob tried to affect traffic outside of his slice, such as Doug's non-HTTP traffic, then the FlowVisor would drop the control message and return an error to Bob's controller. Because flowspaces are not necessarily contiguous, the intersection between the desired rule and the slice's flowspace may result in a single rule being expanded into multiple rules. For example, if Bob tried to affect all traffic in the system with a single rule, the FlowVisor would transparently expand the single rule into two rules: one for each of Doug's and Eric's HTTP traffic.

FlowVisor also rewrites the lists of actions in a forwarding rule. For example, if Bob creates a rule to send out all ports, the rule is rewritten to send to just the subset of ports in Bob's slice. If Bob tries to send out a port that is not in his slice, the FlowVisor returns an "action is invalid" error (recall from above that Bob's controller only discovers the ports that do exist in his slice, so only in error would he use a port outside his slice).

4.3 Bandwidth Isolation

Typically, even relatively
modest commodity network hardware has some capability for basic bandwidth isolation [13]. The most recent versions of OpenFlow expose native bandwidth slicing capabilities in the form of per-port queues. The FlowVisor creates a per-slice queue on each port on the switch. The queue is configured for a fraction of link bandwidth, as defined in the slice definition. To enforce bandwidth isolation, the FlowVisor rewrites all slice forwarding table additions from "send out port X" to "send out queue Y on port X", where Y is a slice-specific queue ID. Thus, all traffic from a given slice is mapped to the traffic class specified by the resource allocation policy. While any queuing discipline can be used (weighted fair queuing, deficit round robin, strict partition, etc.), in implementation, FlowVisor uses minimum bandwidth queues. That is, a slice configured for X% of the bandwidth will receive at least X%, and possibly more if the link is under-utilized. We chose minimum bandwidth queues to avoid issues of bandwidth fragmentation. We evaluate the effectiveness of bandwidth isolation in §5.

4.4 Device CPU Isolation

CPUs on commodity network hardware are typically low-power embedded processors and are easily overloaded. The problem is that in most hardware, a highly loaded switch CPU will significantly disrupt the network. For example, when a CPU becomes overloaded, hardware forwarding will continue, but the switch will stop responding to OpenFlow requests, which causes the forwarding tables to enter an inconsistent state where routing loops become possible, and the network can quickly become unusable.

Many of the CPU-isolation mechanisms presented here are not inherent to FlowVisor's design, but rather a work-around to deal with the existing hardware abstraction exposed by OpenFlow. A better long-term solution would be to expose the switch's existing process scheduling and rate-limiting features via the hardware abstraction.
Some architectures, e.g., the HP ProCurve 5400, already use rate-limiters to enforce CPU isolation between OpenFlow and non-OpenFlow VLANs. Adding these features to OpenFlow is ongoing work.

There are four main sources of load on a switch CPU: (1) generating new flow messages, (2) handling requests from the controller, (3) forwarding "slow path" packets, and (4) internal state keeping. Each of these sources of load requires a different isolation mechanism.

New Flow Messages. In OpenFlow, when a packet arrives at a switch and does not match an entry in the flow table, a new flow message is sent to the controller. This process consumes processing resources on the switch, and if message generation occurs too frequently, the CPU resources can be exhausted. To prevent starvation, the FlowVisor rate-limits the new flow message arrival rate. In implementation, the FlowVisor tracks the new flow message arrival rate for each slice, and if it exceeds some threshold, the FlowVisor inserts a forwarding rule to drop the offending packets for a short period. For this, the FlowVisor keeps a token-bucket style counter for each flowspace rule.
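The token-bucket counter mentioned above can be sketched as follows. The field names and the abstract time unit are our assumptions for illustration; FlowVisor's actual bookkeeping may differ:

```c
#include <assert.h>

/* One bucket per flowspace rule: `rate` tokens are added per time
   unit, capped at `burst`; each new-flow message costs one token. */
typedef struct {
    double tokens;   /* currently available tokens */
    double rate;     /* refill rate (messages per time unit) */
    double burst;    /* bucket capacity */
    double last;     /* time of last update */
} TokenBucket;

/* Returns 1 if a new-flow message at time `now` is within budget,
   0 if the slice exceeded its rate (the point at which FlowVisor
   would install a short-lived drop rule for the offending flows). */
int tb_allow(TokenBucket *tb, double now) {
    tb->tokens += (now - tb->last) * tb->rate;
    if (tb->tokens > tb->burst) tb->tokens = tb->burst;
    tb->last = now;
    if (tb->tokens >= 1.0) { tb->tokens -= 1.0; return 1; }
    return 0;
}
```

A burst is absorbed up to the bucket capacity; a sustained overload drains the bucket and is rejected until the refill catches up.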
Notes on an OpenWrt reboot asynchronous-signal-handling deadlock
Preface: if you find this page's layout plain, you can try reading it on the original site.
Background

On OpenWrt I ran into an intermittent problem where reboot had no effect. After running reboot, the system did not restart, while the console still worked.

Initial investigation

First, reproduce the problem. After reproduction the console still works normally, but running reboot again has no effect, while reboot -f does trigger a restart. Here reboot is a symlink to busybox. From the help text, "-f Force (don't go through init)", the difference between reboot and reboot -f is that reboot first notifies the init process to carry out a series of shutdown steps, while reboot -f calls straight into the kernel. Looking at the busybox source: if -f is given, it calls the C library's reboot() function directly; without -f, it only sends a signal to process 1 via kill.
    if (!(flags & 4)) { /* no -f */
        //TODO: I tend to think that signalling linuxrc is wrong
        // pity original author didn't comment on it...
        if (ENABLE_LINUXRC) {
            /* talk to linuxrc */
            /* bbox init/linuxrc assumed */
            pid_t *pidlist = find_pid_by_name("linuxrc");
            if (pidlist[0] > 0)
                rc = kill(pidlist[0], signals[which]);
            if (ENABLE_FEATURE_CLEAN_UP)
                free(pidlist);
        }
        if (rc) {
            /* talk to init */
            if (!ENABLE_FEATURE_CALL_TELINIT) {
                /* bbox init assumed */
                rc = kill(1, signals[which]);
                if (init_was_not_there())
                    rc = kill(1, signals[which]);
            } else {
                /* SysV style init assumed */
                /* runlevels:
                 * 0 == shutdown
                 * 6 == reboot
                 */
                execlp(CONFIG_TELINIT_PATH, CONFIG_TELINIT_PATH,
                       which == 2 ? "6" : "0", (char *)NULL);
                bb_perror_msg_and_die("can't execute '%s'", CONFIG_TELINIT_PATH);
            }
        }
    } else {
        rc = reboot(magic[which]);
    }

Since reboot -f works, the problem must lie in the user-space steps that happen before reboot() is called.
Huawei HCIP-Routing Certification Question Bank (with Answers)
Huawei HCIP-Routing Certification Exam Question Bank

Part I: Single-Choice Questions

1. When selecting equipment, Xiao Wang needs to choose, from Huawei's range of campus switches, a Layer 3 switch with 100 Mbit/s (Fast Ethernet) interfaces. Which series should he choose?
A. S2700
B. S3700
C. S5700
D. S6700
Answer: B

2. When checking the equipment environment, which of the following is correct?
A. The equipment should be placed in a ventilated, dry environment, and positioned on a firm, level surface.
Only a small amount of clutter may be left around the equipment.
B. To save energy, it is only required that the current machine-room temperature be within the range specified for the equipment; the air conditioning is not required to run continuously and stably.
C. It is recommended to route power cables together with service cables, neatly and in order.
D. When checking whether the grounding method and grounding resistance meet requirements, the machine room's working ground, protective ground, and building lightning-protection ground are generally required to be set up separately.
Answer: D

3. Which of the following is not one of the three major steps of the high-risk operation process?
A. Plan preparation
B. Obtaining authorization
C. Technical review
D. Operation execution and result feedback
Answer: C

4. Regarding Huawei Network Optimization Service (NOS), which of the following is incorrect?
A. Lifecycle assessment means regularly checking and analyzing the software and hardware lifecycle of live-network equipment, and taking countermeasures for products approaching end of sale, end of software updates, or end of technical support, to prevent operational risks caused by lifecycle issues.
B. Software evaluation and recommendation means evaluating and analyzing all software platforms used by the customer during the service period, recommending software versions based on the evaluation, and avoiding incidents caused by known bugs.
C. Configuration evaluation and optimization means developing and periodically maintaining device configuration templates according to customer requirements and, combined with the results of periodic software evaluation, mapping the software features in use to the command lines of the recommended target version, achieving fine-grained configuration management.
D. Network health check means Huawei's technical experts use industry-leading best practices to review the rationality, security, and scalability of the customer's network-wide architecture, and propose improvements based on the assessment results.
Answer: D

5. OSPF is configured in the network and R1 and R3 have the same router-id. What problem will this cause? (topology figure omitted)
A. None of the above
B. Abnormal inter-area route learning
C. Abnormal neighbor relationships
D. Abnormal external route learning
Answer: A

6. When troubleshooting a TELNET login failure, which step should be done first?
A. Check whether the client can ping the server
B. Check whether the number of users logged in to the device has reached the upper limit
C. Check whether an ACL is configured under the VTY user-interface view
D. Check whether the protocols permitted under the VTY user-interface view are configured correctly
Answer: A

7. Regarding the feasibility assessment of a device software upgrade, which of the following is correct?
A. A software upgrade usually poses no threat to network stability, so no risk assessment or risk mitigation is needed before the upgrade.
B. An upgrade usually causes only a very brief service interruption, so there is no need to confirm with users, which improves efficiency.
C. For major or complex upgrade operations, a trial should first be run in a simulated environment, and testing of the upgrade plan and contingency plan should be completed.
D. To ensure stable operation of network equipment, it is recommended to upgrade device software promptly.
Answer: C

8. The biggest difference between fat APs and fit APs is that fat APs can accommodate more users, offer higher performance, and are better suited to large-enterprise networks.
A. True
B. False
Answer: B

9. A large network cutover project can be broken into multiple smaller cutovers that are relatively independent but sequentially related.
A. True
B. False
Answer: A

10. True or false: in the Eudemon firewall hot-standby feature, the session table is not backed up in real time.
The FRR routing event mechanism
FRR (Fast Reroute) is a network failure recovery mechanism designed to provide fast routing failover and thereby keep the network connected and reliable. FRR achieves fast recovery by pre-computing backup paths in the network and switching to a backup path quickly when the primary path fails.

In FRR, the event mechanism refers to the conditions and process that trigger a fast route switchover. When the primary path fails, FRR must detect the failure promptly and trigger use of the backup path to keep data transmission continuous. The event mechanism typically involves the following aspects:

1. Failure detection: FRR must detect primary-path failures promptly, which usually involves monitoring the state of network links and nodes. Common detection mechanisms include protocol-based heartbeat detection and link-state monitoring.

2. Route computation: once a primary-path failure is detected, FRR must quickly compute the backup path and update the routing table, so that data can be forwarded onto the backup path as soon as possible.

3. Route switchover: once the backup path is ready, FRR must trigger the switchover, moving traffic from the primary path to the backup path; this usually involves updating the routers' forwarding tables.

4. Event notification: when a switchover occurs, FRR usually needs to send notification messages to the relevant network devices so that they can also update their own routing and forwarding tables in time.

In summary, FRR's event mechanism covers failure detection, backup-path computation, route switchover, and event notification, ensuring that when a failure occurs the network can switch to the backup path quickly and reliably, preserving the continuity and reliability of data transmission.
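The detection-and-switchover steps above can be reduced to a small state machine driven by heartbeat timeouts. The names, the miss-threshold model, and the single pre-computed backup are illustrative assumptions, not code from any FRR implementation:

```c
#include <assert.h>

/* Tracks consecutive missed heartbeats on the primary path and flips
   to the pre-computed backup once `threshold` misses accumulate. */
typedef struct {
    int misses;        /* consecutive missed heartbeats */
    int threshold;     /* misses tolerated before switchover */
    int using_backup;  /* 0 = primary path, 1 = backup path */
} PathMonitor;

/* Call once per heartbeat interval. `heard_primary` is 1 when the
   primary answered. Returns 1 exactly when a switchover is triggered,
   the point where a real implementation would update forwarding
   tables and notify neighboring devices. */
int on_heartbeat_tick(PathMonitor *m, int heard_primary) {
    if (heard_primary) {
        m->misses = 0;
        m->using_backup = 0;   /* revert to primary once it recovers */
        return 0;
    }
    m->misses++;
    if (m->misses >= m->threshold && !m->using_backup) {
        m->using_backup = 1;   /* fast switch to pre-computed backup */
        return 1;
    }
    return 0;
}
```

A real implementation would hang the forwarding-table update and event notification off the return value of 1; the key point is that the backup path already exists, so the switchover itself is cheap.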
The hoplimit field
The hoplimit field is a feature used in computer networks to control packet transmission.
It is widely used in IPv6, part of the Internet Protocol (IP) family, to limit the maximum number of hops (routers) a packet may traverse in the network.

In the modern Internet, a packet often has to pass through multiple network nodes before it reaches its destination address. These nodes, usually called "hops" or "routers", are responsible for routing the packet from the source host to the target host. In some cases, however, a packet's delivery may fail or be delayed because a router forwards it to the wrong next hop.

To address this, the IPv6 protocol introduced the hoplimit field. The field is an 8-bit value expressing the maximum number of hops the packet is allowed to traverse. Each time the packet passes through a router, the router decrements hoplimit by 1. When hoplimit reaches 0, the router discards the packet and sends an ICMPv6 (Internet Control Message Protocol version 6) notification to the source host.
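The decrement-and-discard rule just described can be simulated directly; this helper is an illustrative sketch of the field's semantics, not kernel code:

```c
#include <assert.h>

/* Simulates a packet with the given hop limit crossing `routers`
   forwarding hops. Returns 1 if it reaches the destination, 0 if some
   router decrements the field to 0 and discards the packet (in real
   IPv6 that router also sends ICMPv6 Time Exceeded to the source). */
int reaches_destination(int hop_limit, int routers) {
    for (int i = 0; i < routers; i++) {
        if (--hop_limit <= 0)   /* each router decrements the field */
            return 0;           /* discarded */
    }
    return 1;
}
```

With a typical default of 64, ordinary Internet paths (far fewer than 64 routers) always deliver; only loops or very long paths exhaust the field.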
The hoplimit field brings several benefits. First, it prevents packets from circulating in the network forever. In some network topologies, faults or configuration errors can trap packets in an endless loop; by bounding the number of hops a packet may take, the hoplimit field prevents this from happening.

Second, the hoplimit field improves network security. By limiting the maximum number of hops, it limits how far a packet can propagate through the network. This helps mitigate DDoS (distributed denial of service) and other network attacks: attackers typically flood a target host with large volumes of packets to exhaust network resources, and restricting how far those packets can travel reduces their impact.

Finally, the hoplimit field can be used for network performance tuning. Setting an appropriate hoplimit value can optimize a packet's transmission path. For example, in some situations a smaller hoplimit makes packets prefer shorter, nearby routes, reducing transmission latency and congestion, while for long-distance transfers a larger hoplimit ensures the packets can still reach the destination host.
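On POSIX systems, an application can choose the hop limit for its own outgoing IPv6 unicast traffic with the standard IPV6_UNICAST_HOPS socket option; a minimal sketch (the wrapper name is ours):

```c
#include <assert.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sets the hop limit used for unicast packets sent on an IPv6
   socket. Returns 0 on success, -1 on error. */
int set_unicast_hop_limit(int sockfd, int hops) {
    return setsockopt(sockfd, IPPROTO_IPV6, IPV6_UNICAST_HOPS,
                      &hops, sizeof hops);
}
```

Lowering the value scopes how far the socket's packets can travel; raising it (the field caps at 255) helps very long paths.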
However, the hoplimit field also has some limitations and challenges.
2012 International Conference on Computer Networks and Communication Systems (CNCS 2012), IPCSIT vol. 35 (2012) © (2012) IACSIT Press, Singapore

A Parallel Routing Algorithm for Torus NoCs

Khaled Day, Nasser Alzeidi, Bassel Arafeh, Abderezak Touzene
Department of Computer Science, Sultan Qaboos University, Muscat, Oman

Abstract. This paper proposes a parallel routing algorithm for routing multiple data streams over disjoint paths in the torus Network-on-Chip (NoC) architecture. We show how to construct a maximal set of disjoint paths between any two nodes of a torus network topology and then make use of the constructed paths for the simultaneous routing of multiple data streams between these nodes. Analytical performance evaluation results are obtained showing the effectiveness of the proposed parallel routing algorithm in reducing communication delays and increasing throughput when transferring large amounts of data in an NoC-based multi-core system.

Keywords: network-on-chip (NoC), 2D mesh, torus, multipath routing, disjoint paths

1. Introduction

With advances in technology, chips with hundreds of cores are expected to become a reality in the near future. Traditionally, communication between processing elements was based on buses. When the number of processing elements is large, the bus becomes a bottleneck from performance, scalability and power dissipation points of view [1]. A network-on-chip (NoC) is instead used to interconnect the processing elements. The topology of the network-on-chip has a major impact on the overall multi-core system performance [2]. Several topologies have been proposed and studied for NoCs, including mesh-based and tree-based topologies [2]. Mesh-based topologies (especially the 2D mesh and the torus) have been the most popular of these topologies.
Their popularity is due to their modularity (they can easily be expanded by adding new nodes and links without modifying the existing structure), their ability to be partitioned into smaller meshes (a desirable feature for parallel applications), their simple XY routing strategy, and their facilitated implementation. They also have a regular structure and short inter-switch wires. They have been used in many systems such as the RAW processor [3], the TRIPS processor [4], the 80-node Intel Teraflops research chip [5], and the 64-node chip multiprocessor from Tilera [6].

In this paper we contribute to the study of the torus topology by showing how to construct a maximal set of disjoint paths between any two nodes of a torus and how to use these paths for parallel routing. The proposed parallel routing algorithm (PRA) allows the transfer of multiple data streams between any two nodes in the torus over disjoint paths, resulting in faster transfer of large amounts of data and higher throughput. PRA can also be used for fault-tolerance purposes by sending multiple copies of critical data on disjoint paths. The critical data can still be delivered even if only one of the disjoint paths is fault-free and the others are faulty. Sources of communication faults in NoCs include crosstalk, faulty links and congested network areas.

2. Notations and Preliminaries

A 2D mesh NoC consists of k×k switches interconnecting IP nodes. One disadvantage of the 2D mesh topology is its long diameter, which has a negative effect on the communication latency. A torus NoC, illustrated in Figure 1a, is basically the same as a 2D mesh NoC with the exception that the switches on the edges are connected with wrap-around links. Every switch in a torus has five active ports: one connected to the local IP node, and the other four connected to the four neighboring switches (left, right, up and down). The torus topology reduces the latency of the 2D mesh while keeping its simplicity.
In order to reduce the length of the wrap-around links, the torus can be folded as shown in Figure 1b.

Fig. 1: (a) A Torus NoC Topology (k = 4). (b) A Folded Torus NoC Topology (k = 4).

We refer to a node in the torus topology by its pair of X-Y coordinates as illustrated in Figure 1a. We show in the next section how to construct node-disjoint paths from a source node S to a destination node D. A path from S to D is a sequence of nodes starting at S and ending at D such that any two consecutive nodes in the sequence are neighbor nodes. We say two paths are node-disjoint if they do not have any common nodes other than the source node and the destination node. A path from S to D can be specified by the sequence of node-to-node moves that lead from S to D. There are four possible moves from a node to a neighbor node (right, left, up, and down). We denote these moves as +X, -X, +Y and -Y respectively. When a node is on the rightmost border of the torus, a move to the right uses the wrap-around link that leads to a node on the leftmost border of the torus. Similarly for nodes on the leftmost, top and bottom borders. In other words, all + and - operations on the X-Y coordinates are modulo k. With respect to a given source node S and a given destination node D, all +X moves are called forward X moves (denoted F_X) if and only if an initial +X move from S decreases the distance to D along the X dimension. Otherwise all +X moves are called backward X moves and are denoted B_X. These F_X and B_X notations are only defined when S and D differ in the X dimension. Figure 2 provides more precise definitions for F_X and B_X in the form of two functions F_X(S, D) and B_X(S, D) which return the moves that correspond to F_X and B_X for a given source S = (x_S, y_S) and a given destination D = (x_D, y_D). The forward Y moves and backward Y moves (along the Y dimension) and their corresponding F_Y and B_Y notations are similarly defined.
We can obtain the F_Y(S, D) and B_Y(S, D) functions by exchanging the roles of X and Y in the definitions of F_X(S, D) and B_X(S, D).

3. Node-Disjoint Paths in a Torus

Let S = (x_S, y_S) and D = (x_D, y_D) be any source and destination nodes in the torus. There are at most four node-disjoint paths from S to D, corresponding to the four possible starting moves +X, -X, +Y and -Y from S. We now show how to construct a maximal set of four disjoint paths from S to D. Each of the constructed paths is defined by a sequence of moves that lead from S to D. In a path description we use a superscript notation to indicate the number of consecutive times a move is repeated. For example, +X^2 denotes a sequence of two consecutive +X moves. Let δx be the distance from S to D along the X dimension (i.e. δx = min(|x_D - x_S|, k - |x_D - x_S|)) and let δy be the distance from S to D along the Y dimension (i.e. δy = min(|y_D - y_S|, k - |y_D - y_S|)). We distinguish the following three cases in the construction of disjoint paths from S to D:

Case 1: If x_S ≠ x_D and y_S ≠ y_D (S and D on different rows and different columns): Table 1 shows the sequences of routing moves of four node-disjoint paths from S to D for Case 1, and Figure 3 illustrates these four paths for a 5×5 torus. The wrap-around links are not shown for clarity of the figure.

Fig. 3: Disjoint Paths for Case 1 (k = 5)

Case 2: If x_S = x_D and y_S ≠ y_D (S and D on the same column but different rows): Table 2 shows the sequences of routing moves of four node-disjoint paths from S to D for Case 2, and Figure 4 illustrates these four paths for a 5×5 torus. The wrap-around links are not shown for clarity of the figure.

Fig. 4: Disjoint Paths for Case 2 (k = 5)

Case 3: If x_S ≠ x_D and y_S = y_D (S and D on the same row but different columns): symmetric to Case 2.
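Summing the moves in each sequence gives closed-form lengths for the four paths in each case (for example, Case 2's π24 costs 1 + 2 + (δy + 2) + 2 + 1 = δy + 8 hops). A small helper, with illustrative names, that computes these lengths:

```c
#include <assert.h>
#include <stdlib.h>

/* Distance between coordinates a and b along one dimension of a
   k-ary torus (wrap-around links allowed). */
int torus_dist(int a, int b, int k) {
    int d = abs(a - b);
    return d < k - d ? d : k - d;
}

/* Lengths of the four constructed node-disjoint paths from S=(xs,ys)
   to D=(xd,yd), obtained by summing the moves in each case's
   sequences. Illustrative helper, not the routing algorithm itself. */
void disjoint_path_lengths(int xs, int ys, int xd, int yd, int k, int len[4]) {
    int dx = torus_dist(xs, xd, k), dy = torus_dist(ys, yd, k);
    if (xs != xd && ys != yd) {          /* Case 1 */
        len[0] = dx + dy;     len[1] = dx + dy;
        len[2] = dx + dy + 4; len[3] = dx + dy + 4;
    } else if (xs == xd) {               /* Case 2: same column */
        len[0] = dy; len[1] = dy + 2; len[2] = dy + 2; len[3] = dy + 8;
    } else {                             /* Case 3: same row */
        len[0] = dx; len[1] = dx + 2; len[2] = dx + 2; len[3] = dx + 8;
    }
}
```

These per-path hop counts are what the performance model later uses when a message's flits are split across the four paths.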
4. The Parallel Routing Algorithm (PRA)

We now propose a parallel routing algorithm (PRA) that allows any source node S in the torus to send to any destination node D a set of m packets in parallel over disjoint paths. Figure 5 outlines the operation of the parallel routing algorithm at a source node S that wants to send m packets p1, p2, ..., pm in parallel to a destination node D. The source node scatters the m packets over the disjoint paths in a round-robin fashion.

Table 1: Four Disjoint Paths from S to D for Case 1
Path | Sequence of Routing Moves
π11  | F_X^δx, F_Y^δy
π12  | F_Y^δy, F_X^δx
π13  | B_X, F_Y^{δy+1}, F_X^{δx+1}, B_Y
π14  | B_Y, F_X^{δx+1}, F_Y^{δy+1}, B_X

Table 2: Four Disjoint Paths from S to D for Case 2
Path | Sequence of Routing Moves
π21  | F_Y^δy
π22  | +X, F_Y^δy, -X
π23  | -X, F_Y^δy, +X
π24  | B_Y, +X^2, F_Y^{δy+2}, -X^2, B_Y

5. Performance Evaluation

In this section we derive performance characteristics of PRA. We first obtain the lengths of the constructed parallel paths. These lengths are readily obtained from Table 1 and Table 2. The lengths d_ij of the four constructed paths π_ij, 1 ≤ i ≤ 3, 1 ≤ j ≤ 4, are shown in Table 3.

Table 3: Lengths of the Constructed Parallel Paths
Path | Case 1      | Case 2 | Case 3
1    | δx + δy     | δy     | δx
2    | δx + δy     | δy + 2 | δx + 2
3    | δx + δy + 4 | δy + 2 | δx + 2
4    | δx + δy + 4 | δy + 8 | δx + 8

The proposed routing algorithm splits a message of size M flits over four disjoint paths, resulting in approximately M/4 flits sent on each path. The message latency for a message can then be calculated as the maximum latency of transferring M/4 flits on the four disjoint paths. There are in total k²(k²−1) source-destination pairs (where the source and the destination are different), of which k²(k−1)² pairs correspond to Case 1, k²(k−1) pairs correspond to Case 2, and another k²(k−1) pairs correspond to Case 3.
Therefore, the probability p_i of generating a message that belongs to Case i is given by:

p_i = (k - 1)/(k + 1)   if i = 1
p_i = 1/(k + 1)         if i = 2
p_i = 1/(k + 1)         if i = 3          (1)

Averaging over the three possible cases, the average message latency can be calculated as:

mean message latency = Σ_{i=1}^{3} p_i · max(T_i1, T_i2, T_i3, T_i4)          (2)

In what follows we calculate the path latencies T_ij. Under a uniform traffic pattern, the channel arrival rate can be found by dividing the total arrival rate over the number of channels in the network. If each IP generates an average of λ_g messages per network cycle, then a total of λ_g·N messages will be generated in the network, where N is the number of nodes. Since each message on path π_ij traverses d_ij hops and there are 4 output channels in each node, the rate of messages received by each channel on the path can be calculated as:

λ_ij = (λ_g · N · d_ij)/(4N) = λ_g · d_ij / 4          (3)

Since the traffic is uniform and the network is symmetric, all channels in the network have similar statistical characteristics. The message latency along path π_ij is therefore composed of the time to transmit the flits, M/4, the routing time, d_ij, and the blocking delay encountered at each hop along the path. We assume that each flit takes one network cycle to be transmitted from one node to the next and that the routing decision also takes one cycle. Hence the message latency along path π_ij can be written as:

T_ij = [d_ij + M/4 + d_ij · W_ij · P_B,ij] · V̄_ij          (4)

where V̄_ij is the channel multiplexing factor and d_ij · W_ij · P_B,ij is the blocking delay, calculated by multiplying the mean waiting time to acquire a channel by the blocking probability at each hop. The mean waiting time to acquire a channel can be approximated by the mean waiting time of an M/G/1 queue [7]:

W_ij = λ_ij · T_ij^2 · [1 + (T_ij - M/4)^2 / T_ij^2] / (2(1 - λ_ij · T_ij))          (5)

To calculate the blocking probability for path π_ij, we assume that V virtual channels are available per physical channel, where any free virtual channel can be selected to route the message to the next hop. A message is therefore blocked only when all V virtual channels are busy.
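Taken together with the virtual-channel expressions (6) and (7) that follow, the latency model can be evaluated by fixed-point iteration, as the paper does with an error bound of 0.0001. Below is a sketch for a single path, with illustrative parameter values (M = 32 flits, V = 4 virtual channels) and no damping, so it simply reports saturation when the load is too high:

```python
def path_latency(lam_g, d, M=32, V=4, tol=1e-4, max_iter=1000):
    """Iteratively solve equations (3)-(7) for one path of d hops.
    lam_g: message generation rate per IP per cycle; M: message size in
    flits; V: virtual channels per physical channel. The function name
    and default parameters are illustrative, not from the paper."""
    lam = lam_g * d / 4.0              # eq (3): per-channel arrival rate
    T = d + M / 4.0                    # zero-load latency as starting point
    for _ in range(max_iter):
        rho = lam * T                  # channel utilization
        if rho >= 1.0:
            raise ValueError("traffic load saturates the channel")
        # eq (5): M/G/1 mean waiting time to acquire a channel
        W = lam * T * T * (1 + (T - M / 4.0) ** 2 / (T * T)) / (2 * (1 - rho))
        # eq (6): probability that v of the V virtual channels are busy
        P = [rho ** v * (1 - rho) for v in range(1, V)] + [rho ** V]
        # eq (7): average degree of virtual-channel multiplexing
        Vbar = (sum(v * v * P[v - 1] for v in range(1, V + 1))
                / sum(v * P[v - 1] for v in range(1, V + 1)))
        # eq (4): (routing + transmission + blocking) * multiplexing
        T_new = (d + M / 4.0 + d * W * P[V - 1]) * Vbar
        if abs(T_new - T) < tol:       # error bound used in the paper
            return T_new
        T = T_new
    return T
```

At light load the result stays close to the zero-load latency d + M/4, and it grows as the generation rate increases, which is the qualitative behavior reported in Figure 7.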
The probability P_v,ij that v virtual channels are busy at a physical channel can be determined using a Markovian model [8] as follows:

P_v,ij = (λ_ij · T_ij)^v · (1 - λ_ij · T_ij)   for 1 ≤ v < V
P_v,ij = (λ_ij · T_ij)^V                       for v = V          (6)

When multiple virtual channels are used per physical channel, they share the bandwidth in a time-multiplexed manner. The message latency therefore has to be scaled by the average degree of multiplexing, V̄_ij (see equation (4) above), which takes place at a physical channel. This can be calculated as follows [8]:

V̄_ij = Σ_{v=1}^{V} v^2 · P_v,ij / Σ_{v=1}^{V} v · P_v,ij          (7)

An iterative technique with an error bound of 0.0001 has been used to evaluate the variables of the above model. Figure 7 plots the obtained message latency (in network cycles) against the traffic load (message generation rate) for different scenarios. It can be seen from this figure that message latency is significantly reduced by routing the message flits in parallel over the constructed disjoint paths. It should be mentioned, however, that extra overhead is needed to assemble the flits of a message at the destination node; this also requires that flits carry sequence numbers so that the correct order of the message flits can be maintained. The plots of Figure 7 also reveal that using only path 1 or path 2 gives lower latency. This is expected behavior, as these two paths correspond to the optimal routing paths (i.e., the shortest paths between any source and destination nodes). It is also clear from the plots that when PRA is used for routing in the torus, the network can sustain higher traffic loads.

6. Conclusion

We have proposed a parallel routing algorithm (PRA) for transferring multiple data streams over disjoint paths in a torus NoC architecture. The algorithm is based on a construction of disjoint paths between network nodes. Analytical performance evaluation results show the effectiveness of the proposed parallel routing algorithm in reducing communication delays and increasing throughput.
The algorithm can also be adapted to support fault-tolerant routing by sending multiple copies of critical data over the multiple disjoint paths of a torus NoC.

7. References

[1] L. Benini and G. De Micheli, "Networks on Chips: A New SoC Paradigm," Computer, vol. 35, no. 1, Jan. 2002, pp. 70-78.
[2] L. Benini and G. De Micheli, Networks on Chips: Technology and Tools, Morgan Kaufmann, 2006.
[3] M. B. Taylor, W. Lee, S. Amarasinghe, and A. Agarwal, "Scalar Operand Networks: On-Chip Interconnect for ILP in Partitioned Architectures," International Symposium on High-Performance Computer Architecture (HPCA), pp. 341-353, Anaheim, California, 2003.
[4] P. Gratz, C. Kim, R. McDonald, S. Keckler, and D. Burger, "Implementation and Evaluation of On-Chip Network Architectures," International Conference on Computer Design (ICCD), 2006.
[5] S. Vangal et al., "An 80-Tile 1.28TFLOPS Network-on-Chip in 65nm CMOS," IEEE International Solid-State Circuits Conference, Digest of Technical Papers (ISSCC), 2007.
[6] A. Agarwal, L. Bao, J. Brown, B. Edwards, M. Mattina, C.-C. Miao, C. Ramey, and D. Wentzlaff, "Tile Processor: Embedded Multicore for Networking and Multimedia," Hot Chips 19, Stanford, CA, Aug. 2007.
[7] L. Kleinrock, Queueing Systems, Volume 1: Theory, New York: John Wiley, 1975.
[8] W. J. Dally, "Virtual-Channel Flow Control," IEEE Transactions on Parallel and Distributed Systems, vol. 3, no. 2, 1992.