Cross Layer QoS Guarantees in Multiuser WLAN Systems


Technical Requirements for Multi-Service Bearing over Optical Transport Networks (OTN)

Table of contents (excerpt):
2. Normative references
3. Abbreviations
6. Packet-switching function requirements for OTN multi-service bearing
6.1 Overview
6.2 Ethernet switching
6.3 MPLS-TP switching

5G Network QoS Characteristics -- 一纸禅

QoS (Quality of Service) in the 5G network defines the rules and requirements for packet forwarding of a data flow (QoS Flow) between the terminal (UE) and the UPF, covering six characteristics in total:
• Resource type (GBR, non-GBR, or Delay-critical GBR)
• Priority level
• Packet Delay Budget (PDB)
• Packet Error Rate (PER)
• Averaging window (GBR and Delay-critical GBR only)
• Maximum Data Burst Volume (Delay-critical GBR only)
Based on this classification, each link layer in the 3GPP radio network configures and processes resources per QoS flow. Standardized or pre-configured 5G QoS characteristics are indicated by the 5QI value and are not signalled separately on any interface (unless some 5G QoS characteristics are modified).

1. Resource Type
The resource type determines whether dedicated resources tied to the QoS-flow-level Guaranteed Flow Bit Rate (GFBR) value are permanently allocated (e.g. by the admission-control function of the radio base station). Three types exist:
• GBR
• non-GBR
• Delay-critical GBR
A GBR QoS flow is typically authorized "on demand" through dynamic policy and charging control. The PDB and PER differ between resource types, and the MDBV (Maximum Data Burst Volume) parameter applies only to the Delay-critical GBR resource type.

A non-GBR QoS flow may be pre-authorized through static policy and charging control.

2. Priority Level
The priority level indicates the resource-scheduling priority among QoS flows, with the following properties:
• The lowest priority value corresponds to the highest priority.
• The priority level distinguishes QoS flows of the same UE and of different UEs.
• Under congestion, when not all requirements of one or more QoS flows can be met, the priority level is used to select which flows to serve, so that a QoS flow with priority value N is served before QoS flows with higher priority values (N+1, N+2, and so on).
• Without congestion, the priority level is used to define resource allocation among QoS flows.

In addition, the scheduler may prioritize QoS flows based on other parameters (e.g. resource type, radio conditions) to optimize application performance and network capacity. Furthermore:
• Every standardized 5QI is associated with a default priority value specified in its QoS characteristics.
• A priority level may also be sent to the RAN together with a standardized 5QI; when received, it is used instead of the default value.
• A priority level may also be sent to the RAN together with a pre-configured 5QI; when received, it is used instead of the pre-configured value.
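The selection rule above (lower priority value is served first) can be sketched in a few lines; the flow records and their 5QI priority values here are purely illustrative:

```python
# Sketch of priority-based selection among 5G QoS flows under congestion:
# the flow with the lowest priority value is served first. The flows and
# their priority values below are illustrative, not standardized 5QI data.

flows = [
    {"qfi": 1, "priority": 20, "name": "web"},
    {"qfi": 2, "priority": 10, "name": "voice"},
    {"qfi": 3, "priority": 70, "name": "background"},
]

# A flow with priority value N is served before flows with values N+1, N+2, ...
serving_order = sorted(flows, key=lambda f: f["priority"])
print([f["name"] for f in serving_order])  # ['voice', 'web', 'background']
```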

QoS Transport Mechanism Rules
QoS transport mechanism rules are an important part of network transmission: they ensure that the various data flows in a network receive appropriate bandwidth and priority, guaranteeing service quality.

The main components of QoS transport mechanism rules are:

1. Classification and marking
QoS mechanisms identify different types of data flows through classification and marking, and provide different service quality according to their importance.

Common classification criteria include IP address, port number, and protocol type.

Typically, high-priority flows are marked as "priority" or "strict priority", while low-priority flows are marked as "normal" or "bulk".
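The classification step can be sketched as a first-match lookup over header fields; the rules, field names, and class labels below are illustrative assumptions, not a real device's configuration:

```python
# Minimal sketch of flow classification and marking. A packet, described as
# a dict of header fields, is matched against ordered rules; the first match
# assigns a service class, and unmatched packets fall through to "bulk".

RULES = [
    # (field, value, assigned class) -- hypothetical example rules
    ("dst_port", 5060, "strict-priority"),  # e.g. VoIP signalling traffic
    ("protocol", "udp", "priority"),
    ("dst_port", 80, "normal"),             # ordinary web browsing
]

def classify(packet: dict) -> str:
    """Return the service class for a packet (first matching rule wins)."""
    for field, value, cls in RULES:
        if packet.get(field) == value:
            return cls
    return "bulk"

pkt = {"protocol": "udp", "dst_port": 5060}
print(classify(pkt))  # 'strict-priority'
```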

2. Traffic control
QoS mechanisms can also guarantee service quality by controlling network traffic.

For example, during congestion the mechanism can limit traffic so that high-priority flows still receive sufficient bandwidth.

Traffic control can use different algorithms, such as the token bucket or rate limiting.
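A minimal token-bucket limiter can be sketched as follows; the fill rate and bucket depth are illustrative parameters, not values from any real device:

```python
# Token-bucket rate limiter sketch. Tokens accumulate at `rate` bytes/second
# up to `capacity`; a packet of `size` bytes is forwarded only if enough
# tokens are available, otherwise it is dropped or queued.

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # token fill rate, bytes per second
        self.capacity = capacity  # bucket depth, i.e. permitted burst size
        self.tokens = capacity
        self.last = 0.0           # timestamp of the last update, seconds

    def allow(self, size: int, now: float) -> bool:
        # Refill tokens for the elapsed time, clamped to the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=1000, capacity=1500)
print(bucket.allow(1200, now=0.0))  # True: the initial burst fits
print(bucket.allow(1200, now=0.1))  # False: only ~100 tokens refilled so far
```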

3. Priority queues
To ensure that high-priority flows are transmitted faster, QoS mechanisms usually use priority queues.

In a priority queue, high-priority flows are served first, while low-priority flows are placed at the back of the queue.

This guarantees that high-priority flows are handled promptly.
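Strict-priority dequeueing can be sketched with a heap; the packet names are illustrative, and a sequence counter keeps FIFO order within each priority:

```python
import heapq

# Sketch of strict-priority scheduling: a lower priority value is served
# first, and a monotonically increasing sequence number preserves FIFO
# order among packets that share the same priority.

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0

    def enqueue(self, priority: int, packet: str):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue(2, "bulk-1")
sched.enqueue(0, "voice-1")
sched.enqueue(2, "bulk-2")
sched.enqueue(0, "voice-2")
print([sched.dequeue() for _ in range(4)])
# ['voice-1', 'voice-2', 'bulk-1', 'bulk-2']
```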

4. Congestion control
QoS mechanisms can also prevent network congestion through congestion control, implemented by regulating the rate of data flows.

For example, during congestion the mechanism can lower the rate of all flows to reduce the risk of further congestion.

In summary, QoS transport mechanism rules are an important part of network transmission: they ensure service quality by giving different types of data flows different service priorities.

Packet-Loss Thresholds of Typical Transport Protocols

In computer networks, transport protocols are a key component that ensures data can traverse the network.

Different transport protocols have different characteristics and loss-handling strategies.

Packet loss means that packets disappear during transmission, usually caused by network congestion, network faults, or transmission errors.

When facing loss, each typical transport protocol acts according to its design goals, and one important parameter is its loss threshold.

1. TCP loss threshold: TCP is a connection-oriented transport protocol that guarantees reliable delivery.

TCP uses sequence numbers, acknowledgement numbers, and checksums to ensure data integrity and ordering.

When loss occurs, TCP retransmits the data to preserve reliability.

TCP's loss threshold is effectively set by its retransmission timeout: if no corresponding acknowledgement arrives within a certain time, the packet is considered lost and is retransmitted.

This timeout is typically on the order of a few hundred milliseconds to a few seconds, depending on network delay and congestion.
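The timeout that realizes this threshold is usually computed from smoothed round-trip-time estimates; a minimal sketch following the RFC 6298 formulas (RTO = SRTT + 4·RTTVAR, with a 1-second floor):

```python
# Sketch of the RFC 6298 retransmission-timeout computation that underlies
# TCP's effective loss threshold. Each RTT sample updates the smoothed RTT
# (SRTT) and its variance estimate (RTTVAR); RTO = SRTT + K * RTTVAR.

class RtoEstimator:
    ALPHA, BETA, K = 1 / 8, 1 / 4, 4   # RFC 6298 recommended constants

    def __init__(self):
        self.srtt = None    # smoothed round-trip time, seconds
        self.rttvar = None  # round-trip-time variance estimate, seconds

    def sample(self, rtt: float) -> float:
        if self.srtt is None:                  # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - rtt))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        # RFC 6298 recommends flooring the RTO at 1 second.
        return max(1.0, self.srtt + self.K * self.rttvar)

est = RtoEstimator()
print(est.sample(0.1))  # first sample: raw RTO 0.3 s, floored to 1.0 s
```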

2. UDP loss threshold: UDP is a connectionless transport protocol that provides neither reliable delivery nor packet ordering.

UDP never retransmits and does not react to packet loss at all.

Its effective loss threshold is therefore left entirely to the application.

If an application needs reliable delivery over UDP, it must implement retransmission at the application layer.

UDP is typically used for latency-sensitive applications such as voice calls and video streaming, which tolerate some loss.

3. IP loss threshold: IP is a network-layer protocol responsible for routing and forwarding packets.

IP does not care about delivery reliability and never retransmits.

When the network is congested or fails, the IP layer simply drops packets.

The effective loss threshold at the IP layer is determined by network devices and routers, depending on congestion state and routing policy.

Under congestion, a router decides whether to drop packets based on queue length and load, thereby controlling traffic.
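The queue-length-based drop decision can be sketched with a Random Early Detection (RED)-style probability curve; the thresholds and maximum probability below are illustrative, not defaults of any particular router:

```python
# Sketch of RED-style queue-length-based dropping: below a minimum average
# queue length nothing is dropped, above a maximum everything is, and in
# between the drop probability rises linearly up to max_p.

def red_drop_probability(avg_queue: float, min_th: float = 20.0,
                         max_th: float = 60.0, max_p: float = 0.1) -> float:
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(10))  # below min threshold: never drop
print(red_drop_probability(40))  # halfway between thresholds: 0.05
print(red_drop_probability(80))  # above max threshold: always drop
```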

The loss threshold is thus an important parameter that determines how a transport protocol reacts when loss occurs.

Since different protocols handle loss differently, a suitable protocol can be chosen according to the application scenario and its requirements.

In practice, the appropriate loss threshold and handling strategy should be chosen by weighing network delay, congestion state, and the real-time requirements of the data.

QoS Guarantee Mechanisms in Ring Networks

A ring network is a common network topology in which a series of nodes are connected in a ring.

In a ring network, every node is directly connected to its neighbours, and data travels along the ring path.

Because of this special structure, guaranteeing QoS (Quality of Service) becomes an important problem.

A QoS guarantee mechanism ensures that data transmission meets certain service-quality requirements, such as low latency and high bandwidth.

In a ring network, QoS can be guaranteed through the following steps.

Step 1, define QoS targets: determine the required service-quality goals, such as maximum delay and minimum bandwidth. These targets guide the design of the rest of the mechanism.

Step 2, traffic management: traffic management is the key to QoS guarantees in ring networks. Proper traffic management controls the transmission rate and priority of flows so that the QoS targets can be met.

Step 3, congestion control: congestion is a common problem in ring networks; it occurs when traffic exceeds the network's processing capacity, increasing delay and packet loss, so congestion-control strategies are needed to preserve QoS.

Step 4, priority scheduling: different flows in a ring network may have different QoS requirements. Assigning different priorities to different flows ensures that high-priority flows are transmitted first, meeting their QoS requirements.

Step 5, error handling: various errors can occur in a ring network, such as data loss or incorrect route selection. To preserve QoS, appropriate error-handling strategies are needed, such as retransmitting data or re-selecting routes, so that data is delivered correctly.

Step 6, performance monitoring and tuning: implementing a QoS guarantee mechanism is not a one-off task; network performance must be continuously monitored and tuned. Real-time monitoring and analysis allow problems to be found promptly and corrective measures to be taken to improve QoS.

In summary, QoS guarantee mechanisms are very important in ring networks.

Through proper traffic management, congestion control, priority scheduling, error handling, and performance monitoring and tuning, network service quality can be guaranteed effectively.

Only with guaranteed QoS can user requirements be met and network reliability and stability improved.

Why Does a Pod Get OOM-Killed?

When applications run on a Kubernetes platform, you sometimes find that a pod restarts automatically and affects the service. `kubectl describe pod` shows the reason for the restart; if it is "OOM killed", the application used more memory than its limit and was killed.

In fact, being OOM-killed covers two cases: 1. the pod itself is OOM-killed; 2. the host runs out of memory and processes running on the host are OOM-killed. This article discusses only the first case.

When defining a pod, you can declare its resource needs through resource requests and limits.

Based on the requests and limits values, a pod falls into one of three QoS classes:
Guaranteed: every resource of every container in the pod has a limit equal to its request and not zero;
Best-Effort: no container in the pod sets any request or limit;
Burstable: everything else, which is also the most common configuration.
What treatment does each class get? CPU is not a hard limit: a container hitting its CPU cap is throttled, not killed, so here we focus on memory. Roughly: Guaranteed has the highest priority, and as long as it stays below its configured limit it will not be killed.

Of course, if the host itself runs out of memory, it still kills processes selectively by OOM score. Best-Effort has the lowest priority: when system memory is exhausted, processes in this class of pod are killed first. Burstable containers are killed when the system hits a memory bottleneck, once they exceed their request value and no Best-Effort containers remain; they are of course also killed on reaching their limit. Whatever the QoS class, a pod whose memory usage keeps growing can always end up OOM-killed.
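The three-way classification above can be sketched as a small function; this is a simplification for illustration (the real kubelet logic also handles init containers and per-resource defaulting), and the resource values are made up:

```python
# Sketch of how a pod's QoS class is derived from its containers' resource
# requests and limits (simplified). Each container is a dict such as
# {"requests": {"memory": "256Mi"}, "limits": {"memory": "256Mi"}}.

def pod_qos_class(containers: list) -> str:
    # Best-Effort: no container sets any request or limit.
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    # Guaranteed: every container's limits equal its requests and are non-zero.
    guaranteed = all(
        c.get("limits")
        and c.get("limits") == c.get("requests")
        and all(v for v in c["limits"].values())
        for c in containers
    )
    return "Guaranteed" if guaranteed else "Burstable"

print(pod_qos_class([{"requests": {"memory": "256Mi", "cpu": "500m"},
                      "limits":   {"memory": "256Mi", "cpu": "500m"}}]))
print(pod_qos_class([{"requests": {"memory": "128Mi"}, "limits": {}}]))
print(pod_qos_class([{}]))
```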

QoS Terminology

Router congestion management:
FIFO (first in, first out): a single queue with a configurable length.
PQ (priority queuing): a priority list (PQL) classifies critical traffic into top, middle, bottom, and normal queues; a lower-priority queue is served only once all higher-priority queues are empty, so low-priority traffic can be "starved".
CQ (custom queuing): matching rules are built from packet characteristics, and queues are polled in turn. Each packet sent is subtracted from the queue's sending allowance; the queue keeps sending until the allowance is smaller than the next packet's length, at which point a configured amount is added to the allowance and the scheduler moves to the next queue.

During this polling, after every packet sent the scheduler also checks the urgent queue and the protocol queue, until the configured allowance is used up; only then does it hand over to the next queue. This prevents starvation.

WFQ (weighted fair queuing): flows with different characteristics are hashed into different queues; it works much like CQ, the only difference being that the "configured allowance" is determined solely by the priority.

RTPQ (RTP priority queuing): voice RTP (Real-time Transport Protocol) traffic is placed into a high-priority queue with the same priority as the protocol queue.

Switch congestion queues: a switch forwards packets, marks their priority locally, and places them into the corresponding local queue.
SPQ (strict priority queuing): critical traffic is served first under congestion to reduce response delay; as long as a higher-priority queue is non-empty the scheduler never moves down, so lower queues can be "starved".
WRR (weighted round robin): can be scheduled in rotation mixed with SP queues; once a queue has spent its configured allowance the scheduler moves on to the next.
SP+WRR hybrid: there are many variants; traffic is divided into groups, WRR or SP is used within a group, and SP is used between groups.
Advanced QoS management tools (QoS policy):
CBQ (class-based queuing, based on QoS policy): LLQ maps to the EF class (expedited forwarding, low-latency queue), BQ maps to the AF class (assured forwarding, bandwidth-guaranteed queue), and WFQ maps to the BE class (best-effort forwarding).

Maximum Induced Subgraphs and Congestion in Crossed Cubes

The crossed cube is an interconnection-network topology used in parallel computing.

It is composed of multiple n-dimensional cube-like components, each connected to its neighbours to form a crossed structure.

Crossed cubes offer good scalability and fault tolerance and are therefore widely used in parallel computing.

An induced subgraph of a crossed cube is a subgraph formed by selecting a subset of its nodes together with the edges between them.

The choice of induced subgraph plays an important role in topology design and in assigning parallel-computing tasks, affecting both computation and communication performance.

An important metric when choosing an induced subgraph is congestion.

Congestion is the degree to which the utilization of a communication link or node exceeds its capacity.

In parallel computing, network communication is a major bottleneck: excessive congestion increases communication latency and reduces throughput and computational performance.

To maximize the communication performance of the induced subgraph, congestion inside it should usually be avoided.

A common approach is to choose nodes with good connectivity and low congestion as the subgraph's nodes, for example nodes of the crossed cube with fewer neighbours.

Nodes with lower communication latency can also be chosen, to reduce both delay and congestion.

The induced subgraph can also be adjusted dynamically to reduce congestion.

During a parallel computation, its nodes and edges are adapted to the actual communication demand.

When a node or link becomes congested, adjusting the subgraph's nodes and edges relieves the congestion and improves communication performance.

The maximum induced subgraph of a crossed cube is thus closely related to congestion.

By choosing the subgraph's nodes and edges carefully and adjusting them dynamically, communication performance can be maximized and the impact of congestion on parallel-computing performance minimized.
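The utilization-over-capacity notion of congestion used above can be made concrete; a minimal sketch with an illustrative two-link topology and made-up flow loads:

```python
# Sketch of the congestion measure described above: for each link, congestion
# is the total load routed over it divided by its capacity; a value above 1.0
# means the link is oversubscribed. Topology, loads, and capacities are
# illustrative, not derived from an actual crossed cube.

def link_congestion(flows: list, capacity: dict) -> dict:
    """flows: list of (path, load) pairs, where path is a list of link names."""
    load = {link: 0.0 for link in capacity}
    for path, demand in flows:
        for link in path:
            load[link] += demand
    return {link: load[link] / capacity[link] for link in capacity}

cong = link_congestion(
    flows=[(["a-b", "b-c"], 5.0), (["a-b"], 8.0)],
    capacity={"a-b": 10.0, "b-c": 10.0},
)
print(cong)                # link a-b carries 13/10 = 1.3, b-c carries 0.5
print(max(cong.values()))  # the bottleneck link's congestion
```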

QoS Protocol Principles

Hey folks! Today let's chat about how QoS protocols work; it's one of the most fun and most important things in the networking world.

Think of the network as a huge city, where all kinds of data are the cars and pedestrians coming and going.

Sometimes the data traffic in this city gets really heavy, like a main road at rush hour, jammed solid.

That's when QoS makes its entrance.

QoS stands for Quality of Service.

Its basic principle is to sort the network's data into classes.

For example, some data is like an ambulance or a fire truck: extremely urgent.

Take video-call data: if the delay is too high, you see the other person's mouth moving while the sound takes forever to arrive, or the audio and picture are completely out of sync. How awkward would that be!

So for data with strict real-time requirements, QoS treats it as a "privileged vehicle" and lets it travel first on the network's "roads".

So how does QoS know which data is urgent and which can wait? By marking the data.

It's like sticking a label on every person or car heading out, saying "I'm in a hurry" or "No rush, take your time".

There are many ways to mark traffic in a network.

A common one is marking by port number.

You can treat the services behind certain port numbers as important.

For example, port 80 is usually web browsing, so that data might be marked as relatively important ordinary traffic.

Voice calls may use other ports, and the data on those ports gets marked as the super-urgent kind.

On top of that, QoS also manages the network's bandwidth.

Bandwidth is like the width of the city's roads: it is limited, just as road width is fixed.

QoS acts like a smart traffic officer, dividing this bandwidth sensibly according to how important the data is.

Urgent video-call data, for instance, gets extra bandwidth so it can race through the network, like opening a dedicated wide lane for an ambulance.

Less urgent traffic, like a file download, gets a smaller share and just trundles along.
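That "smart traffic officer" idea can be sketched as priority-weighted bandwidth sharing; the class names, weights, and demands below are made-up illustrations, and leftover bandwidth is deliberately not redistributed in this simplified version:

```python
# Sketch of priority-weighted bandwidth allocation: the available bandwidth
# is split in proportion to each class's weight, but no class receives more
# than it actually demands.

def allocate(bandwidth: float, classes: dict) -> dict:
    """classes: name -> (weight, demand); returns name -> allocated share."""
    total_weight = sum(w for w, _ in classes.values())
    return {
        name: min(demand, bandwidth * weight / total_weight)
        for name, (weight, demand) in classes.items()
    }

shares = allocate(100.0, {
    "video-call": (5, 60.0),  # urgent: large weight, capped by its demand
    "download":   (1, 80.0),  # bulk: small weight, gets the narrow lane
})
print(shares)
```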

Now imagine what would happen without QoS: the network would descend into total chaos.

All the data would be thrown together into one big pot, urgent data stuck in the jam while non-urgent data hogs huge amounts of resources.

Universal Routing Platform (VRP) Manual: QoS Volume

Contents
Chapter 1: QoS Overview
  1.1 Introduction
  1.2 Traditional packet-delivery service
  1.3 New requirements raised by new services
  1.4 Congestion: causes, impact, and countermeasures
  1.5 Main traffic-management technologies
Chapter 2: Traffic Policing and Traffic Shaping Configuration
  2.1 Introduction (traffic policing, traffic shaping, interface rate limiting)
  2.2 Configuring traffic policing
  2.3 Configuring traffic shaping
  2.4 Configuring interface rate limiting
  2.5 Configuration examples
Chapter 3: Congestion Management Configuration
  3.1 Introduction (congestion-management policies, comparison of techniques)
  3.2 Configuring FIFO queuing
  3.3 Configuring priority queuing
  3.4 Configuring custom queuing
  3.5 Configuring weighted fair queuing
  3.6 Configuring RTP queuing
  3.7 Priority queuing configuration example
Chapter 4: Congestion Avoidance Configuration
  4.1 Introduction
  4.2 Configuring WRED
Chapter 5: Class-Based QoS Configuration
  5.1 Introduction (traffic classification, marking, DSCP, standard PHBs, Class Based Queue (CBQ))
  5.2 Configuring traffic classification
  5.3 Configuring class-based marking actions (DSCP, IP precedence, FR DE bit, ATM CLP bit, MPLS EXP field, VLAN 802.1p priority)
  5.4 Configuring class-based policing and shaping actions
  5.5 Configuring class-based rate-limiting actions
  5.6 Configuring CBQ actions (AF, WFQ, maximum queue length, EF)
  5.7 Configuring class-based WRED actions
  5.8 Configuring traffic policies
  5.9 Configuring nested policies
  5.10 Applying policies
  5.11 Debugging CBQ
  5.12 Configuration examples
Chapter 6: QPPB Configuration
  6.1 Introduction
  6.2 Configuring QPPB
  6.3 QPPB configuration example
  6.4 Troubleshooting
Chapter 7: Link Efficiency Mechanism Configuration
  7.1 Introduction (IP header compression, link fragmentation and interleaving)
  7.2 Configuring IP header compression
  7.3 Configuring link fragmentation and interleaving (LFI)
  7.4 Maintenance
Chapter 8: Frame Relay QoS Configuration
  8.1 Introduction (Frame Relay class, implemented Frame Relay QoS)
  8.2 Configuring Frame Relay traffic shaping
  8.3 Configuring Frame Relay traffic policing
  8.4 Configuring congestion management on Frame Relay interfaces
  8.5 Configuring congestion management on Frame Relay virtual circuits
  8.6 Configuring Frame Relay general queues
  8.7 Configuring Frame Relay PVC PQ queues
  8.8 Configuring Frame Relay fragmentation
  8.9 Debugging Frame Relay QoS
  8.10 Configuration examples
Chapter 9: ATM QoS Configuration
  9.1 Introduction
  9.2 Configuring congestion management on ATM PVCs
  9.3 Configuring congestion avoidance on ATM PVCs
  9.4 Configuring traffic policing on ATM interfaces
  9.5 Configuring class-based policies on ATM interfaces
  9.6 Configuring PVC service mapping
  9.7 Multilink PPPoA QoS configuration
  9.8 Configuration examples (CBQ on an ATM PVC)

03 - Chapter 3: Layer 2 QoS Configuration


Contents

Chapter 3  Layer 2 QoS Configuration
3.1 Introduction to Layer 2 QoS
3.1.1 Layer 2 QoS overview
3.1.2 Basic principles
3.2 Processing traffic that crosses an MPLS domain
3.2.1 Establishing the configuration task
3.2.2 Mapping priority information into the outer MPLS label at the tunnel ingress
3.2.3 Mapping the priority in the outer MPLS label back into the packet at the tunnel egress
3.3 Configuring complex traffic classification for Layer 2 QoS at the Diff-Serv edge
3.3.1 Establishing the configuration task
3.3.2 Defining MAC address groups
3.3.3 Defining Layer 2 ACL rules
3.3.4 Defining traffic classifiers
3.3.5 Defining complex classification actions
3.3.6 Defining policies
3.3.7 Applying a policy to Layer 2 ports in a VLAN
3.3.8 Enabling the traffic policy
3.3.9 Verifying the configuration
3.4 Configuring simple traffic classification for Layer 2 QoS in the Diff-Serv core
3.4.1 Establishing the configuration task
3.4.2 Configuring simple classification behavior and priority mappings
3.4.3 Adding ports to a DS domain
3.4.4 Configuring ports to trust 802.1p
3.5 Configuration examples
3.5.1 Typical Layer 2 QoS configuration example
3.5.2 Example of limiting and guaranteeing bandwidth based on 802.1p priority
3.5.3 VPLS configuration example
3.6 Troubleshooting

Chapter 3  Layer 2 QoS Configuration

Layer 2 QoS is based on the Diff-Serv (Differentiated Services) model and is divided into an edge part and a core part.

Cross-Layer QoS Management Mechanisms


Description of the cross-layer QoS management mechanism: a wireless environment is subject to various forms of fading, to roaming between networks, and to user mobility, so the quality of the wireless link changes continuously; these changes are reflected in the instantaneous SNR and BER.

These variations change the link-layer transmission bandwidth, i.e., the channel service rate, which in turn changes the delay and distortion that multimedia services experience at the application layer.

Because of all these varying factors, guaranteeing the QoS of multimedia services in a wireless environment is difficult.

QoS guarantees fall broadly into two classes. One class provides "hard" QoS: it tries to shield the application layer from changes in lower-layer resources through bandwidth reservation, traffic management, admission control, and similar means. The other class provides "soft" QoS: a QoS range is defined for each multimedia service in advance, and when the lower-layer QoS parameters (e.g., bandwidth, bit error rate) change, the service adjusts its own QoS parameters (e.g., data generation rate, coding scheme) accordingly to keep satisfying the user.
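As a concrete illustration of the "soft" QoS idea, the sketch below adapts a source's target sending rate to the measured lower-layer bandwidth while clamping it to a pre-agreed QoS range. All names, units, and thresholds here are illustrative assumptions, not part of any standard:

```python
def adapt_rate(measured_bw_kbps, qos_min_kbps=200, qos_max_kbps=2000,
               headroom=0.9):
    """Soft-QoS adaptation sketch: derive a new target rate for the
    multimedia source from the measured link-layer bandwidth, leaving
    some headroom, and clamp it to the QoS range agreed in advance."""
    target = measured_bw_kbps * headroom          # follow the channel
    if target < qos_min_kbps:
        # The channel fell below the agreed range: degrade gracefully
        # (e.g., switch to a coarser coding mode) rather than fail hard.
        return qos_min_kbps
    return min(target, qos_max_kbps)              # never exceed the range
```

For example, a measured bandwidth of 1000 kbps yields a 900 kbps target, while measurements outside the agreed range are clamped to its endpoints.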

A management mechanism that provides "soft" QoS guarantees is easier to implement and achieves higher resource utilization, but it requires information exchange across layers and a mapping of QoS parameters between different layers.

An efficient QoS management mechanism first needs a simple yet accurate channel model as its foundation: one that models the wireless channel in terms of QoS parameters (e.g., data rate, delay, delay-violation probability). Existing channel models (e.g., the Rayleigh fading model) cannot characterize a wireless channel in terms of such QoS parameters.

To solve this problem, we adopt a link-layer channel model: the effective capacity (EC) model.

A wireless link is first modeled by two EC functions (the probability that the buffer is nonempty, and the QoS exponent of the connection), and these two functions are then estimated with a simple and efficient method.

The benefits of the EC link model are that (1) it ties the QoS parameters directly to the channel model, (2) it is simple to implement, and (3) it is accurate and effective when used for admission control and resource reservation.
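In the effective-capacity literature (e.g., Wu and Negi's link-layer channel model, which this description appears to follow), the two EC functions are commonly written as a buffer-nonempty probability $\gamma(\mu)$ and a QoS exponent $\theta(\mu)$ for a source of constant rate $\mu$, and the delay-violation probability of the connection is approximated as

```latex
\Pr\{D(t) \ge D_{\max}\} \approx \gamma(\mu)\, e^{-\theta(\mu)\, D_{\max}}
```

so that, given a target delay bound $D_{\max}$ and a tolerable violation probability $\varepsilon$, admission control reduces to checking whether $\gamma(\mu)\, e^{-\theta(\mu) D_{\max}} \le \varepsilon$. This formula is supplied here as standard background from that literature, not taken from the original text.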

Through a physical-layer channel model we can estimate the physical-layer performance of a wireless communication system (e.g., the symbol error probability at a given SNR), but we cannot easily obtain the link-layer QoS behavior (e.g., delay and packet loss probability) from such a model.

This is because doing so requires a queueing analysis at the link layer.

Consequently, such physical-layer channel models are hard to use directly in QoS-support mechanisms such as admission control and resource reservation.

To solve this problem, the channel model must be raised from the physical layer to the link layer.
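One simple way to "lift" the model to the link layer, in the spirit of the estimation method mentioned above, is to drive a queue with the physical-layer service process and measure the two EC functions empirically. The sketch below does this for a constant-rate source and an exponentially distributed per-slot service rate (a crude stand-in for a faded channel); the function name and the fitting procedure are our own illustrative assumptions:

```python
import math
import random

def estimate_ec_params(arrival_rate, mean_service, n_slots=50_000, seed=1):
    """Estimate the two EC-model functions from a simulated link-layer queue:
    gamma ~ Pr{buffer nonempty}, and theta ~ the exponential decay rate of
    the queue tail, fitted from log Pr{Q >= q} ~ log(gamma) - theta * q."""
    rng = random.Random(seed)
    q = 0.0
    samples = []
    for _ in range(n_slots):
        # Per-slot service: exponential, standing in for the instantaneous
        # capacity of a fading channel (an assumption for illustration).
        service = rng.expovariate(1.0 / mean_service)
        q = max(q + arrival_rate - service, 0.0)   # constant-rate source
        samples.append(q)
    nonempty = sorted(s for s in samples if s > 0)
    gamma = len(nonempty) / len(samples)
    # Least-squares fit of the log survival function of the queue length.
    xs, ys = [], []
    n = len(nonempty)
    for i in range(0, n, max(1, n // 50)):
        xs.append(nonempty[i])
        ys.append(math.log((n - i) / len(samples)))  # Pr{Q >= nonempty[i]}
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return gamma, -slope    # (buffer-nonempty probability, QoS exponent)
```

For a stable setting (arrival rate below the mean service rate) the estimate returns a nonempty probability strictly between 0 and 1 and a positive QoS exponent, which is exactly the pair of quantities the EC model feeds into admission control.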

A survey of QoS multicasting issues


A Survey of QoS Multicasting Issues

A. Striegel and G. Manimaran
Dependable Computing & Networking Laboratory
Department of Electrical and Computer Engineering
Iowa State University, USA
adstrieg@ gmani@

Abstract—The recent proliferation of QoS-aware group applications over the Internet has accelerated the need for scalable and efficient multicast support. In this article, we present a multicast "life-cycle" model which identifies the various issues that are involved in a typical multicast session. During the life-cycle of a multicast session, three important events can occur: group dynamics, network dynamics, and traffic dynamics. The first two aspects are concerned with maintaining a good quality (e.g., cost) multicast tree taking into account member join/leave and changes in the network topology due to link/node failures/additions, respectively. The third aspect is concerned with flow, congestion, and error control. In this article, we examine various issues and solutions for managing group dynamics and failure handling in QoS multicasting and outline several future research directions.

Keywords—QoS multicasting, QoS routing, Group dynamics, Fault-tolerance, Tree rearrangement, Core migration, DiffServ

I. Introduction to Multicasting

The phenomenal growth of group communications and QoS-aware applications over the Internet has accelerated the need for scalable and efficient network support [1], [2]. These group applications include videoconferencing, shared workspaces, distributed interactive simulations (DIS), software upgrading, and resource location. The traditional unicast model is extremely inefficient for such group-based applications as the same data is unnecessarily transmitted across the network to each receiver.
The difference between multicasting and separately unicasting data to several destinations is best captured by the host group model [3]: "a host group is a set of network entities sharing a common identifying multicast address, all receiving any data packets addressed to this multicast address by senders (sources) that may or may not be members of the same group and have no knowledge of the groups' membership". This definition implies that, from the sender's point of view, this model reduces the multicast service interface to a unicast one. Thus, the multicast model was proposed to reduce the many unicast connections into a single multicast connection for a group of receivers.

The multicast definition also allows the behavior of the group to be unrestricted in multiple dimensions: groups may have local (LAN) or global (WAN) membership, be transient or persistent in time, and have constant or varying membership. Consequently, we have the following types of multicast (or host) groups:
• dense groups have members on most of the links or subnets in the network, whereas sparse groups have members only on a small number of widely separated links.
• open groups are those in which the sender need not be a member of the group, whereas closed groups allow only members to send to the group.
• permanent groups are those groups which exist forever or for a longer duration compared to the duration of transient groups.
• static groups are those groups whose membership remains constant in time, whereas dynamic groups allow members to join/leave the group.

A. Life-cycle of a Multicast Group

A network architecture that aims to provide complete support for multicast communication is burdened with the task of managing the multicast session in a manner transparent to the users. This goal of transparent multicast service imposes specific requirements on the network implementation. To demonstrate the different functionalities that such a network must provide, Figure 1 shows the various steps and events that can take place in the "life-cycle" of a typical multicast session. The sequence of phases/steps relevant to the multicast session are (i) multicast group (session) creation, (ii) multicast tree construction and resource reservation, (iii) data transmission, and (iv) multicast session tear-down.

B. Multicast Group Creation

The first step in the creation of a multicast session is the assigning of a unique address to the multicast group such that the data of one group does not clash with other groups. Both groups and addresses have associated lifetimes. Similar to groups, group addresses are classified as either static or dynamic, depending on whether they are assigned permanently to a given group, or assigned to different groups at different instants of time. The most obvious matching between groups and addresses is to assign static addresses to permanent groups and dynamic addresses to transient groups. Note that assignment of static addresses to transient groups could result in insecure communication (wherein non-members receive messages meant for a certain group), whereas assignment of dynamic addresses to permanent groups merely causes unnecessary communication overhead.

Fig. 1. Life-cycle of a multicast session

C. Multicast Tree Construction & Resource Reservation

Once the group is created, the next phase in the multicast session is the construction of a multicast distribution tree, spanning the source(s) and all the receivers (QoS routing), and reserving resources on the tree. Multicast route determination is traditionally formulated as a
problem related to tree construction. There are three reasons for adopting the tree structure:
• The source needs to only transmit a single packet down the multicast tree.
• The tree structure allows parallel transmission to the various receiver nodes.
• The tree structure minimizes data replication, since the packet is replicated by routers only at branch points in the tree.

It has been established that determining an optimal multicast tree for a static multicast group can be modeled as the Steiner problem in networks, which is shown to be NP-complete [2]. A number of algorithms have been developed using heuristic-based approaches such as KMB and TM (discussed in [2]). An additional dimension to the multicast routing problem is the need to construct trees that will satisfy the QoS requirements of modern networked multimedia applications (delay, delay jitter, loss, etc.).

QoS routing and resource reservation are two important, closely related issues. Resource reservation is necessary for the network to provide QoS guarantees (in terms of throughput, end-to-end delay, and delay jitter) to multimedia applications. Hence, the data transmission of the connection will not be affected by the traffic dynamics of other connections sharing the common links. Before the reservation can be done, a tree that has the best chance to satisfy the resource requirements must be selected. Resource reservation and tree construction are discussed further in Section III.

D. Data Transmission

Once the above two phases have been completed successfully, data transmission can begin. During data transmission, the following four types of run-time events can occur:
• Group dynamics: Since group membership can be dynamic, the network must be able to track current membership during a session's lifetime. Tracking is needed both to start forwarding data to new group members and to stop the wasteful transmission of packets to members that have left the group (identified as COLM (Constrained On-Line Multicast) routing in Figure 1). As a result of group dynamics, the tree quality may degrade and may necessitate re-optimization (identified as Tree Rearrangement). For core-based trees, a similar degradation may occur that may necessitate a change in the core node (identified as Core Migration).
• Network dynamics: During the life-time of a multicast session, if any node or link supporting the multicast session fails, service will be disrupted. This requires mechanisms to detect node and link failures and to reconfigure (restore) the multicast tree around the faulty links/nodes (identified as Failure Handling in Figure 1). Note that multicast routing protocols based on underlying unicast routing protocols are as survivable as the unicast routing protocol. If the multicast routing protocol is independent of the unicast routing protocol, it must implement its own restoration mechanism [5].
• Transmission problems: This could include events such as swamped receivers (needing flow control) or faulty packet transmissions (needing error control). The traffic control mechanism, working in conjunction with the schedulers at the receivers and the routers, is responsible for performing the necessary control activities to overcome these transmission problems (identified as Traffic Control in Figure 1).
• Competition among senders: In many-to-many multicasting, when multiple senders share the same multicast tree (resources) for data transmission, resource contention occurs among the senders. This will result in data loss due to buffer overflow, thus triggering transmission problems. This requires a session control mechanism that arbitrates transmission among the senders (identified as Session Control in Figure 1).

E. Group Tear-Down

At some point in time, when the session's lifetime has elapsed, the source will initiate the session tear-down procedures. This involves releasing the resources reserved for the session along all of the links of the multicast tree and purging all session-specific routing table entries. Finally, the multicast address is released and group tear-down is complete.

As is evident from the life cycle of a multicast session, the multicasting problem contains many interesting issues. Rather than covering all of the issues associated with multicasting, this article focuses on two of the key areas from the life cycle: group dynamics and failure recovery. The rest of the article is structured as follows. Section II discusses an overview of multicast routing protocols. Section III investigates QoS multicast routing from the perspective of group dynamics. Next, Section IV examines the motivation for fault tolerance in multicasting and the implications for QoS multicast routing. Finally, in Sections V and VI, we make several concluding remarks and comment on open areas of research.

II. An Overview of Multicast Routing Protocols

Multicast routing protocols can be classified into two main approaches: source-based protocols and center-based protocols. The source-based approach uses the notion of a shortest path tree (SPT) rooted at each source node.
The SPT is typically obtained by concatenating shortest paths from the source to each receiver. Since the shortest path is usually the shortest delay path, the receivers in the multicast tree typically receive excellent delay performance. However, source-based trees introduce scalability problems for large networks, as each individual source must have a separate tree constructed, spanning all the receivers, rooted at the source node. Source-based routing is currently employed in the Distance Vector Multicast Routing Protocol (DVMRP), dense-mode Protocol Independent Multicasting (PIM-dense), and Multicast Open Shortest Path First (MOSPF) [4].

The other type of protocols, center-based or shared-tree protocols, construct a multicast tree spanning the members whose root is the center or core node. These protocols are highly suitable for sparse groups and are scalable for large networks. However, just as shortest-path trees provide excellent QoS at the cost of network bandwidth, shared trees provide excellent bandwidth conservation at the cost of poor QoS to the receivers. The Core Based Tree (CBT) [6] is a well-known example of a shared-tree routing protocol. When a node wishes to transmit a message to the multicast group in the CBT protocol, it sends the message towards the core. The message is distributed to group members along the path to the core, and then the message is distributed to the remaining members once it reaches the core. Requests to join or leave the multicast group are processed by sending the request towards the group core. When a join request reaches an on-tree node, the on-tree node becomes the point of attachment for the new node. Conversely, when a node leaves a group, the part of the tree that does not support any members is pruned.
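The CBT join and prune behavior described above can be sketched with a toy model. The class below keeps parent pointers toward the core, grafts a joining node along its unicast path to the core, and prunes branches that no longer serve any member; the data structure and names are illustrative only, not the CBT wire protocol:

```python
class CoreBasedTree:
    """Toy model of CBT maintenance: on-tree nodes plus parent pointers
    toward the core; joins graft along the unicast path to the core,
    and leaves prune branches with no members or children below."""

    def __init__(self, core, next_hop):
        self.core = core
        self.next_hop = next_hop      # next_hop[node] -> neighbor toward core
        self.on_tree = {core}
        self.parent = {core: None}
        self.members = set()

    def join(self, node):
        """Graft `node`: walk toward the core until an on-tree node is hit;
        that node becomes the point of attachment for the new branch."""
        self.members.add(node)
        path = [node]
        while path[-1] not in self.on_tree:
            path.append(self.next_hop[path[-1]])
        for child, par in zip(path, path[1:]):
            self.on_tree.add(child)
            self.parent[child] = par

    def leave(self, node):
        """Prune: drop `node`, then trim ancestors that serve no one."""
        self.members.discard(node)
        cur = node
        while (cur != self.core and cur not in self.members
               and not any(p == cur for p in self.parent.values())):
            par = self.parent.pop(cur)
            self.on_tree.discard(cur)
            cur = par
```

On a line topology a-b-c-core, `join('a')` grafts the whole branch, and a later `leave('a')` prunes back only as far as the nearest node that still has a member or a child.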
Recently, several hybrid routing protocols have been proposed which allow receivers to switch from a shared tree to a shortest-path tree. Protocols such as sparse-mode PIM (PIM-sparse) and the Multicast Internet Protocol (MIP) are examples of hybrid routing protocols [4].

III. Managing Group Dynamics

The QoS of the multicast tree (receiver-perceived QoS) is not solely affected by the multicast routing protocol. Rather, the QoS of the multicast tree is a function of group dynamics, which includes the following issues:
• QoS-aware routing
• Tree rearrangement
• Core/tree migration
Figure 2 summarizes the issues that will be discussed in the following sections.

A. QoS-aware Routing

A multicast tree is incrementally constructed as members join and leave a group. When an existing member leaves the group, it sends a control message up the tree to prune the branch that no longer has active members. When a new member joins the group, the tree must be extended to cover it. The dynamic QoS multicast routing problem can be informally stated as: Given a new member M_new, find a path from M_new to an on-tree node that satisfies the QoS requirements of M_new.
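The dynamic join problem just stated can be sketched as a constrained shortest-path search: prune links that violate the link (bandwidth) constraint, then run a delay-based Dijkstra from the new member and attach at the first on-tree node reached within the delay budget. This is a hedged illustration of the problem itself, not of any particular protocol surveyed here, and the graph encoding is our own assumption:

```python
import heapq

def qos_join_path(adj, on_tree, new_member, min_bw, max_delay):
    """Find a feasible joining path for `new_member`.
    adj: {u: [(v, delay, bandwidth), ...]} (illustrative structure).
    Links below `min_bw` are pruned (link constraint); the search stops
    at the first on-tree node within `max_delay` (path constraint)."""
    dist = {new_member: 0.0}
    prev = {}
    heap = [(0.0, new_member)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        if u in on_tree:                  # first feasible attachment point
            path = [u]
            while path[-1] != new_member:
                path.append(prev[path[-1]])
            return list(reversed(path))
        for v, delay, bw in adj.get(u, []):
            if bw < min_bw:               # link constraint: prune
                continue
            nd = d + delay
            if nd <= max_delay and nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None                           # no feasible path exists
```

Because the first on-tree node popped from the priority queue has minimal delay among all feasible attachment points, the sketch also minimizes the joining path's delay, one of the goals listed for a good QoS-aware routing protocol.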
The QoS requirements can be classified into link constraints (e.g., bandwidth) or path constraints (e.g., end-to-end delay or path cost). The optimization problem with two or more path constraints is known to be NP-complete [2]. Many practical instances of QoS routing problems have at least two path constraints, and hence most QoS multicast routing protocols employ heuristic solutions.

In addition to satisfying the QoS requirements of the receiver, a good QoS-aware multicast routing protocol should aim at:
• improving the probability of a successful join
• minimizing the cost of the joining path
• minimizing the joining time
• being scalable to large networks

Based on how the new member is connected to the tree, multicast routing protocols can be classified into two broad categories: single-path routing (SPR) and multiple-path routing (MPR). An SPR provides a single path connecting the new member to the tree, whereas an MPR provides multiple candidate paths.

Most SPR protocols were originally designed for best-effort traffic. Well-known examples are CBT and PIM, wherein a new member i is connected to the multicast tree along the unicast shortest path from i to the core/source of the tree. The unicast path is typically the shortest path in terms of hop length, which is good for best-effort traffic.
However, such a shortest path may not have the required resources to support the QoS needs of the member. Several SPR protocols such as DCUR and RDM [7] have been proposed for QoS unicast routing which can be used for multicast routing as well. These protocols typically use delay and cost tables for making routing decisions during QoS path setup.

Fig. 2. Issues in Multicast Group Dynamics

In contrast to SPR protocols, MPR protocols provide or probe multiple candidate paths in order to increase the chances of finding a feasible path (i.e., a path that satisfies the QoS requirements of the member). Among these candidate paths, the best path is selected. Spanning-join, QoSMIC, QMRP, and parallel probing are among the recently proposed MPR protocols [8], [9] (and the references therein).

Spanning Join: In the Spanning-join protocol, the new member broadcasts join-request messages in its neighborhood to find on-tree nodes. Upon receiving the join-request message, the on-tree nodes reply to this message, and the member chooses the best on-tree node to connect to from the set of on-tree nodes that have replied.

QoSMIC: In QoSMIC, the search for candidate paths consists of two parallel procedures: local search and tree search. The local search is equivalent to Spanning-join, except that only a small neighborhood is searched. The tree search handles the case when there is no on-tree node in the neighborhood checked by the local search. In the tree search, the new member contacts a designated Manager node, which is responsible for ordering a subset of on-tree nodes to establish a path from them to the member. Each such path is a candidate path. The new member then selects the best path out of these candidate paths.

QMRP: The QMRP protocol consists of two sequential procedures: single-path mode and multiple-path mode. The protocol starts and continues with single-path mode until it reaches a node that has insufficient resources to satisfy the join request. When such a node is encountered, the protocol switches to multi-path mode.

Parallel Probing: This protocol was originally proposed for QoS unicast routing and can be adapted to multicast routing. The objectives of the protocol are to minimize the path setup time and to minimize the resource reservation along the multiple candidate paths. To achieve this, the member sends multiple probes, using different heuristics for each probe, to an intermediate destination (ID). Upon receiving the first probe message, the ID initiates a parallel probe to the next ID. Upon receiving later probes, the ID releases the resources reserved between the ID and the previous ID (or member) by those later probes.

Although the above-mentioned protocols do take into account some of the above requirements, none of the above-listed protocols are designed to take into account all the requirements of a good multicast routing protocol. For example, Spanning-join and QoSMIC are not scalable to the Internet because of the high message overhead due to their flooding nature. Whereas QMRP has the same problem in multiple-path mode, QMRP also incurs a high joining time due to its sequential invocation of multi-path mode from single-path mode. Although parallel probing takes into account many of the stated goals, its effectiveness primarily depends on the selection of intermediate destinations.

B. Tree Rearrangement

In a dynamic multicast session, it is important to ensure that member join/leave will not disrupt the ongoing multicast session, and that the multicast tree after member join/leave will still remain near-optimal and satisfy the QoS requirements of all on-tree receivers [2]. One way to handle dynamic member join/leave is by reconstructing the tree every time a member joins or leaves the session. This
involves migration of on-tree nodes to the new tree, which may result in a large service disruption that may not be tolerable, especially by QoS multicast sessions. Another way to handle dynamic member join/leave is by incrementally changing the multicast tree through the graft/prune mechanism. This incremental change approach suffers because the quality (e.g., tree cost) of the tree maintained may deteriorate over time. Therefore, an on-line multicast routing algorithm must take into account two important and possibly contradicting goals [10]: cost reduction and minimization of service disruption. Thus, a balance needs to be struck between these goals by employing a technique that monitors the quality of the tree, or a portion of the tree, and triggers tree rearrangement when the quality degrades below a threshold. The tree rearrangement mechanism is a means by which balance can be struck between these goals [10].

C. Core & Tree Migration

Another significance of tree maintenance is in core-based multicasting. In core-based multicasting, core selection is an important problem because the location of the core influences the tree cost and delay. The quality (e.g., cost) of the tree based on the current core may deteriorate over time due to dynamic join and leave of members, i.e., the
core migration,but in-curs service disruption and overhead.Thus,there exists a tradeoffbetween(i)minimizing service disruption and overhead and(ii)maintaining a good quality tree(i.e., cost).The triggering mechanism for core migration is crit-ical for capturing the tradeoffbetween service disruption and quality of the tree.•Tradeoff-Multicast Tree Construction:During tree con-struction,again there exists a tradeoffbetween cost of the tree and service disruption.•Tradeoff-Tree Migration:During tree migration,there exists a tradeoffbetween service disruption and resource wastage(i.e.,the amount of resources overallocated mo-mentarily during core migration).In other words,more resource wastage may result in less service disruption and vice-versa.IV.Failure Recovery in QoS MulticastingA communication network failure can have an adverse effect on today’s society.In the future,as more applica-tions employ multicast routing,a strong need will emerge for algorithms that can be employed by survivable multi-cast routing protocols[5].Although faults may seem un-common,the chance of faults is higher than one might ex-pect.For example,it was recently observed that the Inter-net occasionally experiences periods of routing instability, also known as routingflaps,when network can lose con-nectivity asfloods of routing updates are processed.This network instability could lead to timeouts that result in transient faults.Moreover,fault handling is even more im-portant in mobile and multi-hop adhoc networks wherein, as the mobile host moves,the connectivity of the network is likely to change and the structure of the multicast tree may break.Therefore,multicast protocols must be equipped with mechanisms to survive or detect and recover from link/node failures.In order for them to be widely deploy-able,these mechanisms need to take into account several design considerations,such as scalability,fast recovery,and minimizing protocol overhead.The most important aspect of failure recovery 
is to min-imize service disruption.For unicasting,one type of failure handling approach is the protection based approach wherein dedicated protection mechanisms,such as backup paths, are employed to cope with failures.This approach is more suitable for hard real-time communication wherein everyCore Failure RecoveryGlobal RecoveryCore Selection &Tree ConstructionLocal RecoveryCore Evaluation& Tree Constrn.Core/TreeMigrationMulticast TreeMain & CandidateCores Selection ConstructionFig.3.Issues in Core Failure Recoverypacket is critical.The other type is the restoration based approach wherein,on detection of a failure,an attempt is made to reroute(restore)the path around the faulty nodes/links with minimal service disruption.With multimedia multicasting,the problem is much more complicated than with unicasting,as resource reser-vations are shared and group dynamics interact with net-work reconfigurations.Very little is known as to how to deal with such problems[1].The use of the protection based approach for multicasting is prohibitively expensive in terms of resource usage as it requires one or more backup trees for each primary tree,and hence not scalable.More-over,for dynamic groups,this approach does not suit well as the structure of the tree itself changes with time.There-fore,it is more appropriate to use the restoration based approach.A.Failure Recovery in Core-Based MulticastingWith regard to core-based multicasting,the main prob-lem is that it has a single point of failure at the core.If the core fails,then the whole multicast session would be disrupted.To provide reliable multicast services,the mul-ticast routing protocols need to be equipped with mecha-nisms for handling core failures as well as node/link fail-ures.Figure3details several of the issues associated with core failure recovery.Core recovery may be further subdivided into the following areas:•Core Selection&Tree Construction:A new core must be selected from a list of candidate cores with a 
multicast tree that minimizes service disruption and tree cost.•Local Recovery:Once a core failure or link/node fail-ure has occurred,recovery may take place on a local scale whereby nodes contact other nearby nodes and perform local rerouting.•Global Recovery:In the event of a severe failure,it may be necessary to globally recover the multicast group.For these instances,the new core must be evaluated and the members of the multicast tree must be migrated to the new tree.For core based trees,there has been some work for link/node failure recovery.The original specification[6] for core based trees included a mechanism for recovering from link or node failure.In order to resolve the problem of loop formation,the protocol specification of core based trees was modified to eliminate the possibility of generat-6ing loops during failure recovery viaflushing.Although theflushing modification eliminated loop formation,it had several drawbacks:•substantial time in rebuilding the tree.•substantial overhead if the number of members in the subtree is large.•overhead at on-tree nodes may be high when processing many simultaneous join requests.Another protocol was proposed recently in[13]with the aim of applying theflush operation less frequently.Al-though this protocol has a reasonably high success rate of restoring the tree without using theflush operation,it in-curs a high message overhead and recovery time due to more message exchanges.V.Targeting Towards Next-Generation QoSArchitecturesMost of the QoS multicasting solutions discussed in this article are well suited for per-flow QoS architectures such as Integrated Services.However,the adaption of these so-lutions to aggregation-based QoS architectures such as Dif-ferentiated Services(DiffServ)is non-trivial due to several architectural conflicts as given below[14].•DiffServ requires that the core routers be stateless whereas multicasting relies on per-group(per-flow)state information at all nodes in the multicast 
tree.
- DiffServ employs sender-driven QoS, whereas multicasting typically employs receiver-driven QoS for accommodating heterogeneous members.

The issues of managing group dynamics and failures are further compounded due to the distributed nature of decision making by the edge routers in a DiffServ domain. Moreover, an additional complexity arises when providing end-to-end QoS across multiple DiffServ and/or non-DiffServ domains. In summary, the integration of DiffServ and multicasting is an important and relatively unexplored area and requires significant research attention.

VI. Conclusions

In this article, we first outlined the various issues in multicast communication through tracing the life-cycle of a multicast session. Then, we focused on two key issues, namely, managing group dynamics and failure recovery. These issues have a profound impact on QoS multicast routing and the QoS experienced by the end user. For these issues, we identify the following important research problems:
- Join/Leave QoS Routing: Although significant work has been done in QoS routing, the currently proposed schemes do not meet all of the goals of a good multicast routing protocol. Thus, further research is needed in developing such schemes that provide better performance on both an intra- and inter-domain routing scale.
- Tree Maintenance: Tree rearrangement has received significant attention [10] in the recent past and needs further research [2]. The management of group dynamics in an integrated manner, addressing all of the subproblems (QoS routing, tree rearrangement, and tree migration), is an important problem for further research.
- Core & Tree Migration: Though there has been some work [11], [12] on online core evaluation, to the best of our knowledge, there is no work on tree migration taking the service disruption aspect into account (currently, the effect of changing the core on data loss is not well understood [12]).
- Failure Recovery: As of today, very little work ([13] for link/node failure in core-based trees) has been
done on failure recovery in multicast communication, and hence this topic needs further research [5].
- Interaction with QoS Architectures: As discussed in Section V, the interactions of multicasting with QoS architectures need further research.

Besides group dynamics and failure handling, other issues such as traffic control and session control have many interesting problems which bear future research.

References

[1] J. C. Pasquale, G. C. Polyzos, and G. Xylomenos, "The multimedia multicasting problem," Multimedia Systems, vol. 6, no. 1, pp. 43-59, 1998.
[2] J. Hou and B. Wang, "Multicast routing and its QoS extension: Problems, algorithms, and protocols," IEEE Network, Jan./Feb. 2000.
[3] D. R. Cheriton and S. Deering, "Host groups: A multicast extension for datagram internetworks," in Proc. Data Communications Symposium, pp. 172-179, 1985.
[4] M. Ramalho, "Intra- and inter-domain multicast routing protocols: A survey and taxonomy," IEEE Communications Surveys and Tutorials, vol. 3, no. 1, pp. 2-25, Jan.-Mar. 2000.
[5] L. Sahasrabuddhe and B. Mukherjee, "Multicast routing algorithms and protocols: A tutorial," IEEE Network, Jan./Feb. 2000.
[6] T. Ballardie, P. Francis, and J. Crowcroft, "Core-based trees (CBT): An architecture for scalable inter-domain multicast routing," in Proc. ACM SIGCOMM, pp. 85-95, 1993.
[7] R. Sriram, G. Manimaran, and C. Siva Ram Murthy, "Preferred link based delay-constrained least cost routing in wide area networks," Computer Communications, vol. 21, no. 18, pp. 1655-1669, Nov. 1998.
[8] S. Chen, K. Nahrstedt, and Y. Shavitt, "A QoS-aware multicast routing protocol," in Proc. IEEE INFOCOM, pp. 1594-1603, 2000.
[9] G. Manimaran, H. Shankar Rahul, and C. Siva Ram Murthy, "A new distributed route selection approach for channel establishment in real-time networks," IEEE/ACM Transactions on Networking, vol. 7, no. 5, pp. 698-709, Oct. 1999.
[10] R. Sriram, G. Manimaran, and C. Siva Ram Murthy, "A rearrangeable algorithm for the construction of delay-constrained dynamic multicast trees," IEEE/ACM Transactions on Networking, vol. 7, no. 4, pp. 514-529, Aug. 1999.
[11] C. Donahoo and Zegura, "Core migration for dynamic
multicast routing," in Proc. ICCCN, 1995.
[12] E. Fleury, Y. Huang, and P. K. McKinley, "On the performance and feasibility of multicast core selection heuristics," in Proc. Intl. Conf. on Computer Communications and Networks, pp. 296-303, 1998.
[13] L. Schwiebert and R. Chintalapati, "Improved fault recovery for core based trees," Computer Communications, vol. 23, no. 9, Apr. 2000.
[14] A. Striegel and G. Manimaran, "A scalable protocol for member join/leave in DiffServ multicast," in Proc. IEEE LCN 2001, Tampa, Florida, Nov. 2001.

QoS Control Strategies

QoS (Quality of Service) is a network management and control technique used to guarantee network performance and service quality.

Different applications and services place different demands on network performance. QoS control strategies allocate and schedule network resources according to those demands, keeping the network stable and reliable.

QoS control strategies mainly comprise traffic control, congestion control and error control.

Traffic control limits the rate of data flows to prevent network congestion and wasted resources. Congestion control keeps the network flowing: when congestion appears, the transmission rate and the retransmission mechanism are adjusted to reduce the level of congestion. Error control guarantees the reliability of data transmission, improving correctness through error-correcting codes, retransmission and acknowledgment mechanisms.

Traffic control is an important part of QoS; it restricts the transmission rate to control the amount of traffic in the network. Common traffic control strategies are the token bucket algorithm and the leaky bucket algorithm. The token bucket algorithm is token-based: transmitting data consumes tokens, and when tokens run out, transmission is throttled. The leaky bucket algorithm uses a bucket of fixed capacity: when data flows into a bucket that is already full, it is dropped or its transmission is delayed.

Common congestion control strategies include TCP's congestion control mechanism and the RED (Random Early Detection) algorithm. TCP's congestion control dynamically adjusts the send window and the retransmission timeout to control the transmission rate; when congestion appears, TCP shrinks the send window to reduce the congestion level. The RED algorithm is based on random dropping: once the number of queued packets exceeds a threshold, RED randomly drops a fraction of them to reduce congestion before queues overflow.

Common error control strategies include forward error correction (FEC) codes, automatic repeat request (ARQ) and acknowledgment mechanisms. A forward error correction code corrects transmission errors by adding redundant information: check bits added to each packet allow the receiver to repair corrupted data.
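The token bucket behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not a production rate limiter; the rate and capacity values are arbitrary, and an explicit clock argument replaces wall-clock time so the example is deterministic:

```python
class TokenBucket:
    """Token-bucket limiter: 'rate' tokens/second, bursts up to 'capacity'."""

    def __init__(self, rate, capacity, now=0.0):
        self.rate = float(rate)          # refill rate (tokens per second)
        self.capacity = float(capacity)  # bucket depth = maximum burst size
        self.tokens = float(capacity)    # bucket starts full
        self.last = now

    def allow(self, now, cost=1.0):
        """Return True if a packet costing 'cost' tokens may pass at time 'now'."""
        # refill proportionally to elapsed time, clamped to the bucket depth
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # policing: the caller drops (or delays) the packet


bucket = TokenBucket(rate=5, capacity=10)
# 15 back-to-back packets at t=0: only the burst allowance passes
burst = [bucket.allow(now=0.0) for _ in range(15)]
# after 1 s of idling, rate*1s = 5 tokens have been refilled
later = [bucket.allow(now=1.0) for _ in range(15)]
print(burst.count(True), later.count(True))  # -> 10 5
```

The same skeleton becomes a leaky bucket by queuing (rather than rejecting) the excess and draining the queue at the fixed rate.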

QoS Description for Multi-Service Networks

1. QoS overview. Communicating any kind of media, at any time, in any place, between any people is a permanent goal of telecommunications. Before IP technology matured, every network carried a single service: the PSTN carried only voice, the cable network only television, the data network only data.

Separate networks meant separate services and inefficient communication.

With the popularity of the Internet, IP applications have become pervasive; IP networks have penetrated every traditional communication domain, and building a multi-service network on IP has become possible.

However, different services place different requirements on the network. How to carry multiple real-time and non-real-time services on a packet-switched IP network became an important topic, and the concept of QoS (Quality of Service) was proposed.

IP QoS refers to the ability of an IP network, running over a variety of underlying technologies (FR, ATM, Ethernet, SDH, etc.), to provide a particular service with the treatment it requires.

QoS covers several aspects, such as bandwidth, delay and delay jitter. Each service has its own QoS requirements; some services stress certain metrics, others stress different ones.

The main technical metrics of IP QoS are the following.

(1) Available bandwidth: the average rate of a given application flow between two network nodes, measuring the user's ability to obtain service data from the network. All real-time services require a certain bandwidth; for video, when the available bandwidth falls below the source coding rate, picture quality cannot be guaranteed.

(2) Delay: the average round-trip time for a packet between two network nodes. All real-time services constrain delay; VoIP, for example, generally requires a network delay below 200 ms, and above 400 ms a call becomes intolerable.

(3) Packet loss rate: the percentage of packets lost during transmission, measuring the network's ability to forward user data correctly. Services differ in their sensitivity to loss; in multimedia services, loss is the main cause of picture degradation, and even slight loss can produce visible mosaic artifacts.

(4) Delay jitter: the variation of delay. Some services, such as streaming media, can absorb jitter with adequate buffering; others, such as voice, are very sensitive to it, and even slight jitter degrades speech quality rapidly.
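The four metrics above can be computed from per-packet send/receive timestamps. The sketch below uses invented sequence numbers and millisecond timestamps purely for illustration, and takes jitter as the mean absolute difference of consecutive one-way delays, which is one of several common definitions:

```python
def qos_metrics(sent, received):
    """sent: {seq: tx_time_ms}; received: {seq: rx_time_ms} (>= 2 packets).
    Returns (loss_rate, avg_delay_ms, jitter_ms)."""
    delays = [received[s] - sent[s] for s in sorted(received)]
    loss_rate = (len(sent) - len(received)) / len(sent)
    avg_delay = sum(delays) / len(delays)
    # jitter: mean absolute difference between consecutive one-way delays
    jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
    return loss_rate, avg_delay, jitter


sent = {0: 0, 1: 20, 2: 40, 3: 60, 4: 80}          # packets sent every 20 ms
received = {0: 50, 1: 80, 2: 90, 4: 150}           # packet 3 was lost
loss, delay, jitter = qos_metrics(sent, received)
print(loss, delay, round(jitter, 1))               # -> 0.2 57.5 13.3
```

A receiver cannot usually measure one-way delay without synchronized clocks, which is why deployed protocols estimate jitter from interarrival-time differences instead.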

QoS Guarantees

Link layer: a critical QoS component. Buffering and bandwidth are the scarce resources; their exhaustion is the cause of loss and delay. The packet scheduling discipline and the buffer management policy determine the loss and delay seen by a call, e.g. FCFS versus Weighted Fair Queuing (WFQ).

Comparing Internet and ATM service classes: how are guaranteed service and CBR alike/different? How are controlled load and ABR alike/different?

Radical changes required!

Possible token bucket uses: shaping, policing, marking.
- Shaping: delay packets from entering the net.
- Policing: drop packets that arrive without tokens.
- Marking: let all packets pass through, but mark them as with or without tokens; the network drops the packets without tokens in times of congestion.
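The three uses differ only in what happens to a packet that arrives when the bucket is empty. A compact sketch of that decision (the `mode` strings are hypothetical labels, not from the slides, and the token refill is assumed to happen elsewhere):

```python
def police(tokens, cost, mode):
    """Decide the fate of a packet needing 'cost' tokens.
    Returns (remaining_tokens, action)."""
    if tokens >= cost:
        return tokens - cost, "forward"   # conforming packet always passes
    if mode == "shape":
        return tokens, "queue"            # delay until enough tokens accumulate
    if mode == "police":
        return tokens, "drop"             # non-conforming traffic discarded
    return tokens, "mark"                 # forwarded, but dropped first under congestion


print(police(5, 1, "police"))  # -> (4, 'forward')
print(police(0, 1, "shape"))   # -> (0, 'queue')
print(police(0, 1, "mark"))    # -> (0, 'mark')
```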
Q.2931: the ATM call setup protocol. The sender initiates call setup by passing a Call Setup message across the UNI boundary (i.e., into the network); the network immediately returns a Call Proceeding indication to the sender. The network must:

Wireless Pers Commun (2009) 51:549–563
DOI 10.1007/s11277-009-9750-z

Cross Layer QoS Guarantees in Multiuser WLAN Systems

Nizar Zorba · Ana I. Pérez-Neira · Andreas Foglar · Christos Verikoukis

Published online: 2 July 2009
© Springer Science+Business Media, LLC. 2009

N. Zorba
University of Jordan, Amman, Jordan
e-mail: n.zorba@.jo

A. I. Pérez-Neira
Technical University of Catalonia, Barcelona, Spain
e-mail: anuska@

A. Foglar
Infineon Technologies, Munich, Germany
e-mail: andreas.foglar@infi

C. Verikoukis (B)
Telecommunications Technological Center of Catalonia (CTTC), Barcelona, Spain
e-mail: cveri@cttc.es

Abstract  The use of real-time delay-sensitive applications in wireless systems has significantly increased during the last years. Consequently, the demand to guarantee certain Quality of Service (QoS) is a challenging issue for the system's designers. A cross-layer based dynamically tuned queue length scheduler is presented in this paper, for the Downlink of multiuser WLAN systems with heterogeneous traffic requirements. An opportunistic scheduling algorithm is applied, while users from higher priority traffic classes are prioritized. A trade-off between the throughput maximization of the system and the guarantee of the users' QoS requirements is obtained. Therefore the length of the queue is dynamically adjusted to select the appropriate conditions based on the operator requirements.

Keywords  Quality of Service guarantee · Wireless local area network (WLAN) · Heterogeneous applications

1 Introduction

The demand for using in-home Wireless Local Area Networks (WLANs) to support real-time delay-sensitive applications such as voice, video streaming or online gaming has
been remarkably growing during the last years. However, current IEEE 802.11 WLAN systems fail to fulfill the strict Quality of Service (QoS) requirements in terms of maximum allowed delay and/or delay jitter for such applications. The fact that wireless environments are a harsh scenario for communications adds difficulties to guaranteeing certain QoS in WLAN based systems, where the wireless channel suffers from multiple undesired effects, such as deep fades and multipath, that produce errors in the original information. Therefore, providing QoS with the scarce resources of the home wireless medium is a challenging aspect for such a system objective.

Different notions of QoS are available at different communication layers [1]. At the physical layer, QoS means an acceptable signal strength level and/or Bit Error Rate at the receiver, while at the Data Link Control (DLC) or higher layers, QoS is usually expressed in terms of minimum guaranteed throughput, maximum allowed delay and/or delay jitter. The fulfillment of QoS requirements depends on the procedures that each layer follows. At the DLC layer, QoS guarantees can be provided by appropriate scheduling and resource allocation algorithms, while at the physical layer, adaptation of transmission power, modulation level or symbol rate is employed to maintain the link quality.

The consideration of the physical layer transmission characteristics by the higher layers can significantly improve the efficiency of wireless systems. This vertical coupling among layers is well known as Cross-Layer design [2]. Cross-layer interaction between the physical layer and the higher layers seems to be unavoidable in wireless environments in order to exploit the instantaneous conditions of the physical layer. Such schemes are needed to guarantee the QoS requirements in heterogeneous traffic systems, where the network includes users with different applications and different QoS restrictions. Further Cross-Layer advantages can include improvements in terms of
link throughput, reduction of the network latency, energy savings in the mobile nodes or minimization of transmitted power [2].

One of the system capabilities that can be employed to improve the system performance is multiuser availability. An adaptive opportunistic scheduler [3], which selects the user with the best channel conditions at each time instant, can take advantage of the variable channel conditions among the users, and thus enhances the system average throughput and QoS satisfaction [4]. Its employment has already been standardized in UMTS-HSDPA [5], while it is expected to be part of the WLAN-VHT standard [6]. Opportunistic transmission (OT), besides showing high performance, is also a low complexity design, and only partial channel information is required at the transmitter side. Furthermore, OT can be operated and adapted to fulfill the QoS requirements demanded by the users for their correct operation [1].

An interesting remark concerning QoS compliance in commercial wireless systems refers to the outage concept [7]: due to the wireless channel characteristics, 100% satisfaction of the strict QoS demands is impossible, which is known as outage in the QoS requirements [7]. The notion of outage is widely used in cellular systems, where several commercial systems (e.g. GSM and WCDMA) allow for 2–5% outage. Therefore, the extension of the QoS outage concept to WLAN based systems, when running delay-sensitive applications, seems to be the most tractable approach to assess their efficiency.

Taking into account the aforementioned concepts, the main contribution of this paper is to propose a Dynamic Queue Length in the Data Link Control layer, in order to guarantee certain QoS in the Downlink of multiuser WLAN systems with heterogeneous traffic. Therefore, the proposed solution considers both the physical and application layer characteristics of the system, as shown in Fig. 1. To be more precise, the length of the queue at the DLC layer depends on the QoS requirements for each
application, in terms of the system throughput and the maximum allowed delay (and jitter) of the delay-sensitive applications, where a certain outage is allowed in the satisfaction of the QoS requirements.

[Fig. 1. System cross layer proposal]

The remainder of the paper is organized as follows: a review of the state of the art on dynamic queue scheduling is given in Sect. 2, and in Sect. 3 the system model is presented. Section 4 explains the opportunistic transmission scheme, followed by Sect. 5 with the QoS requirements. Section 6 describes the dynamic queue length strategy, while the performance evaluation of the proposed scheme is given in Sect. 7. Finally, Sect. 8 concludes the paper.

2 Related Work on Dynamic Queue Scheduling

With respect to the aforementioned concepts in a Downlink system with heterogeneous traffic, several proposals in the literature tackle dynamic queue management, but with different objectives and requirements.

The authors in [8] propose a Media Access Control (MAC) protocol for a finite-user slotted channel with multipacket reception (MPR) capability. By adaptively changing the size of the contention class (defined as a subset of users who can access the channel at the same time) according to the traffic load and the channel MPR capability, the proposed dynamic queue protocol provides superior channel efficiency at high traffic load and minimum delay at low traffic load. However, this protocol is dynamic only in terms of the traffic load queue and does not deal with the problem of users running several kinds of applications with different QoS demands.

An admission control problem for a multi-class single-server queue is considered in [9].
The system serves multiple demand streams, each having a rigid due-date lead time. To meet the due-date constraints, a system manager may reject orders when a backlog of work is judged to be excessive, thereby incurring lost revenues. Nevertheless, in this paper, service classes are turned away based on pre-defined load (packets in the queue) thresholds, and only the average mean delay is guaranteed, while the maximum delay is not.

A dynamic queuing feature for service enhancement is proposed in [10], motivated by the growth of service subscribers and their mobility. In addition, it presents a dynamic queue manager that handles the queue size to increase call completion rates for service enhancements in wireless intelligent network environments. In spite of this, other QoS demands are not possible, and the problem of having different users with different QoS demands is not dealt with.

Various QoS requirements of bursty traffic and a dynamic priority queue with two types of traffic are proposed and analyzed in [11]. The system has two separate buffers to accommodate two types of customers, the capacities of the buffers being assumed to be finite for practical applications. But the service order is determined only by the queue length of the first buffer, so that only average QoS demands can be satisfied.

The scheduler in [12] gives some buffers and bandwidth to every priority class at every port. The scheme adapts to changes in traffic conditions, so that when the load changes the system goes through a transient. Therefore, each queue individually carries out its blocking process, which does not provide any tight control on the QoS demands.

3 System Model

We focus on the single cell Downlink channel where N receivers, each equipped with a single receiving antenna, are served by a transmitter at the Base Station (BS) provided with a single transmitting antenna. A heterogeneous scenario has been set up where users run any of the four different classes of
applications. Class 1 represents voice users (the most delay-sensitive application) and has the highest priority, while Class 4 is the lowest priority, best-effort class.

It is worth mentioning that the demand of real-time services, such as Voice over IP (VoIP), for strict QoS delay guarantees leads to the re-consideration of the ring scattering model [13], which is widely used in the evaluation of WLAN systems with non-real-time applications (e.g. data traffic). This is because the QoS requirements have to be satisfied on a tighter time scale, which calls for detailed models that account for the instantaneous random channel fluctuations.

A wireless channel h(t) is considered between each of the users and the BS, where a quasi-static block fading model is assumed, which keeps constant through the coherence time and independently changes between consecutive time intervals, with independent and identically distributed (i.i.d.) complex Gaussian entries ~ CN(0,1). Therefore, the channel for each user is assumed to be fixed within each fading block (i.e. the scenario coherence time) and i.i.d. from block to block, so that for the QoS objective this model [4,14] captures the instantaneous channel fluctuations better than the circular rings model.

Let s_i(t) denote the uncorrelated data symbol for the i-th user, with E{|s_i|^2} = 1; the received signal y_i(t) is then given by

    y_i(t) = h_i(t) s_i(t) + z_i(t)    (1)

where z_i(t) is an additive complex noise component with zero mean and E{|z_i|^2} = σ². A total transmission power of P_t = 1 is assumed, and for ease of notation the time index is dropped whenever possible.

4 Opportunistic Transmission

A main scheduling policy in multiuser scenarios is opportunistic transmission [3,4], where the transmitter selects the user with the best channel conditions at each time instant to increase the system aggregate throughput. During the acquisition step, a known training sequence is transmitted to all the users in the system, and each one of the users calculates the received
Signal-to-Noise Ratio (SNR) and feeds it back to the BS. The BS scheduler chooses the user with the highest SNR value to benefit from its current channel situation, thereby extracting the multiuser gain from the scenario to increase the system throughput. After that, the BS enters the transmission stage and starts transmitting to the selected user.

This opportunistic strategy has low complexity and is proved to be optimal [3], as it obtains the maximum average throughput (TH)

    TH = E{ log_2( 1 + max_{1≤i≤N} SNR_i ) }    (2)

where E{·} is the expectation operator denoting the average value. Notice that the value of max_{1≤i≤N} SNR_i reflects the serving SNR (i.e. the SNR that the selected user i obtains when it is selected for transmission).

The OT scheme is shown to improve the system average throughput [3], but the main target of this work is to provide precise and guaranteed QoS control for all the users, mainly in terms of the maximum allowed delay and the minimum guaranteed throughput. As will be explained later, this is achieved through the optimization of the DLC queue length, where the simulations will show an interesting tradeoff between QoS satisfaction and the system average throughput. It has to be noted that the minimum allowed rate, the maximum allowed delay and the minimum guaranteed throughput stand as realistic QoS constraints for both real-time and non-real-time applications, providing the commercial operator with a wider view than the fairness concept [4], as the QoS is stated in terms of exact per-user requirements.
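Equation 2 can be checked by simulation. The sketch below draws i.i.d. exponential SNRs (unit-variance complex Gaussian channel gains give exponentially distributed |h|², per the model of Sect. 3), serves the max-SNR user each slot, and averages log2(1 + SNR); the slot count and seed are arbitrary choices, not from the paper:

```python
import math
import random


def avg_throughput(n_users, sigma2=1.0, slots=50_000, seed=1):
    """Monte-Carlo estimate of TH = E[log2(1 + max_i SNR_i)] for i.i.d.
    Rayleigh users: |h_i|^2 ~ Exp(1), so SNR_i = |h_i|^2 / sigma2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(slots):
        # opportunistic scheduler: serve the instantaneous max-SNR user
        best = max(rng.expovariate(1.0) / sigma2 for _ in range(n_users))
        total += math.log2(1.0 + best)
    return total / slots


for n in (1, 5, 10):
    print(n, round(avg_throughput(n), 2))  # the multiuser gain grows with n
```

With one user the estimate sits near E[log2(1+X)] ≈ 0.86 bit/s/Hz for X ~ Exp(1), and it increases monotonically with the number of users, which is the multiuser diversity gain the paper exploits.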
5 System QoS Performance

For the consideration of any transmission scheme in commercial standards that run real-time applications, the QoS of the users is a crucial aspect that can be characterized by several metrics or indicators, based on the design objectives. QoS can be expressed in terms of rate, reflecting the minimum required rate per user, or in terms of delay, showing the maximum delay that a user can tolerate for its packets. This paper considers both of the aforementioned QoS concepts: the proposed transmission scheme guarantees a minimum rate R per user, represented by a minimum SNR restriction (snr_th) through the classical relation R = log_2(1 + snr_th), and delivered within a maximum tolerable time delay K.

As this work deals with real-time applications in WLAN systems, the QoS demands cannot be satisfied in 100% of the cases, due to the channel impairments. Therefore, some outage ξ_out in the QoS is accepted [7]. Current cellular systems such as GSM and UMTS use the same approach, and it is expected that future WLAN systems will employ the outage concept for real-time applications. As an example, VoIP can accept erroneous and delayed packets up to 10^-3 of the total number of packets.

The paper defines two concepts of outage [1]: the scheduling delay outage and the rate outage. The first one is related to the opportunistic access policy and the time instant at which the i-th user is provided service. Section 5.1 characterizes the user opportunistic access and obtains the expression for its access delay probability. The second outage concept accounts for the received data rate once the i-th user is selected for transmission, and whether its rate requirement is satisfied or not. Section 5.2 derives the corresponding SNR distribution for the selected user, and obtains the minimum guaranteed rate under an outage ξ_rate.

5.1 Access Delay Outage

In TDMA systems (e.g. GSM) each user knows its exact access slot in advance; but in an opportunistic scheduler, as a continuous
monitorization of the users' channel quality is performed to select the best user in each slot, the access to the wireless medium is not guaranteed. Therefore, the study of channel access in the OT scheme poses several challenges that must be solved before OT can be considered in practical systems. This section calculates the maximum access delay from the time that a user's packet is available for transmission at the scheduler until the user is serviced. If an active user is in the system but is not scheduled within its maximum allowed delay (e.g. because its channel conditions are not good enough to be selected by the OT scheduler), then that user is declared to be in access delay, with an outage probability ξ_access given by

    ξ_access = 1 − V(K)    (3)

with V(K) the probability that at most K time slots are required to select a user i from a group of N users,¹ where this probability follows a geometric distribution [15]:

    V(K) = 1 − (1 − P_access)^K    (4)

In the OT scheme, each one of the N independent users attempts to be serviced with P_access = 1/N. Therefore, from the previous equation, the maximum number of time slots K until the i-th user is selected for transmission, with a probability of delay outage ξ_access, is given by

    K = log_2(1 − V) / log_2(1 − P_access) = log_2(ξ_access) / log_2(1 − 1/N)    (5)

showing the effect of the number of active users N.

5.2 Minimum Rate Outage

If the BS scheduler selects a user for Downlink transmission, it means that this user has the maximum SNR among the users. But the instantaneous channel conditions (i.e. the instantaneous SNR) may correspond to a transmission rate that does not satisfy the current application rate requirement (e.g. for a predefined Packet Error Rate, the channel can only provide 6 Mbps while the application asks for 24 Mbps). As a consequence, the user is unable to correctly decode the received packets during the current time unit and suffers a rate outage. Based on the OT philosophy to deliver service to the users, the serving SNR value is the maximum
SNR over the active users in the system. Using the SNR cumulative distribution function (cdf) of complex Gaussian channels [14],

    F(x) = 1 − e^{−x·σ²}    (6)

and as the serving SNR is the maximum over all the users' SNR values, the serving SNR cdf is

    F_F(x) = (F(x))^N = ( 1 − e^{−x·σ²} )^N    (7)

Taking into account the cdf of the serving rate, the minimum required rate snr_th is obtained for each user with a predefined rate outage ξ_rate as

    ξ_rate = ( 1 − e^{−snr_th·σ²} )^N    (8)

¹ Throughout the paper, all the users are assumed to have the same average channel characteristics, showing the same distribution for the maximum SNR value, so that each user has the same probability of being selected. If this is not the case (e.g. a heterogeneous user distribution in the cell, with some users far from the BS), then a channel normalization (e.g. division by the path loss) can be applied for such a scenario.

The values of snr_th and ξ_rate can be computed to meet any system objectives for the number of users N. It is worth noting that the minimum SNR value guarantees that the user's decoding process will be successful. In that case a unit step function is used for the detection procedure, making the Packet Success Rate (PSR) relate to snr_th as

    PSR = 1 if SNR ≥ snr_th;  0 if SNR < snr_th    (9)

where the direct relation to ξ_rate is shown. With further manipulation, snr_th from Eq. 8 can be expressed as

    snr_th = (1/(λσ²)) · log_2( 1 / (1 − ξ_rate^{1/N}) )    (10)

where the effect of all the involved parameters is shown, with λ = log_2(e) = 1.4427. Equation 10 shows the rate limits of the system, indicating that high snr_th requirements induce high outage ξ_rate in the system. Negative values of snr_th indicate infeasibility of the requested rate.

5.3 Outage of the System

As previously explained, the OT scheme is controlled by two different outage measures, but the total system performance has to be defined through a single parameter. Notice that the two discussed kinds of outage are totally independent, as the user's access to the channel happens when
its SNR is the maximum over all the other users, but being the user with the largest SNR value does not guarantee that this SNR is larger than an application-predefined threshold snr_th. Therefore, the total outage ξ_out is defined as

    ξ_out = 1 − (1 − ξ_access) · (1 − ξ_rate)    (11)

standing as the global measure of system outage.

5.4 Maximum Scheduling Delay

In point-to-point scenarios, the queueing delay is the dominant factor in the system delay [16], while in multiuser systems an additional delay factor is introduced, because the system resources are not available to the same user all the time. We name this additional delay factor the scheduling delay in multiuser systems. In round-robin based systems (e.g. TDMA) the user's access to the channel is known in advance, so that its scheduling delay can be easily calculated. However, in opportunistic multiuser systems, where the user with the best channel conditions is selected for transmission based on its instantaneous SNR, a user does not have any guarantee of being scheduled at a specific time, which increases its scheduling delay.

In the context of this paper, we define the maximum scheduling delay as the time period from the instant that a user's packet is available for transmission at the scheduler until the packet is correctly received at its destination. The difference with the access delay definition is the requirement of a rate threshold in order to guarantee decoding without errors, as in Eq. 9. Notice that this definition includes both the delay resulting from the scheduling process (i.e. the opportunistic selection) and the delay caused by the requirement to get a rate above a minimum threshold for correct reception. Therefore, the maximum number of time slots to select a user with a total outage ξ_out is equal to the K access slots (Eq. 5), defining the maximum allowed scheduling delay.

In order to avoid misleading conclusions for the reader, a brief numerical example is presented. In a scenario with N = 10 users, a system
bandwidth of B_w = 1 MHz, a required maximum scheduling delay of K = 25, σ² = 1 and a minimum demanded rate of R = 1.2 Mbps for each user, it results that ξ_access = 7.2% and ξ_rate = 4.2%. The access delay is thus 25 slots with an access outage of 7.2%. But even when a user is selected, it may get a rate below its requirement with an outage probability of 4.2%, so the ξ_rate must be introduced. Therefore, a wireless operator can guarantee to each user the correct reception of its packet within a maximum scheduling delay of 25 slots, with a total outage of ξ_out = 11.1%.

As we consider the scheduling delay, the buffer management and source statistics for arriving packets are not addressed [17], and the queue stability target [16] is not regarded either. Therefore, we assume a saturated system and only consider the delay resulting from the scheduling process. The total delay (scheduling + queueing) will be tackled in future work.

5.5 Minimum Guaranteed Throughput per User and per Slot

Obtaining the system throughput formulation is difficult, as several processes are included in the communication procedure. The receiver decoding through the unit step function in Eq. 9 simplifies the throughput formulation, as the effects of several steps in the communication process (e.g. coding) are avoided. In opportunistic multiuser scenarios, a user is not always served by the system, so that its throughput is zero for several time units. Therefore, a normalized minimum guaranteed throughput per user over time is required. Notice that such a definition of throughput per user and per slot accounts for the user's waiting time and hence for its corresponding scheduling delay expression. Considering that the bandwidth of the system is B_w, the minimum guaranteed throughput per user and per slot, denoted T (in bits), is given by

    T = B_w · [ log(1 − 1/N) / log(ξ_access) ] · log_2( 1 + (1/(λσ²)) · log_2( 1 / (1 − ξ_rate^{1/N}) ) )    (12)

where the expression in Eq. 10 is used to provide a closed-form solution for the minimum guaranteed throughput per user, with all the operating
variables. Notice that by increasing the number of users N, the minimum guaranteed rate R goes up, and as a consequence higher throughput is obtained. On the other hand, a larger N induces a larger scheduling delay, increasing the value of K and driving throughput values down. This shows a tradeoff on the number of available users in the system, motivating a control over the value of N to achieve the system QoS requirements, as will be shown in the next section.

Note that the minimum guaranteed throughput is the worst-case throughput awarded to the users, but it actually defines the throughput value that an operator can guarantee to its customers, obviously with a given outage ξ_out; the guaranteed throughput per user is different from the concept of average throughput in Eq. 2, previously presented. A very common example in commercial systems of average versus minimum guaranteed throughput is seen in the ADSL service, where, for example, an operator can provide its customers 20 Mbps (which is the value that appears in its advertisements), while the minimum guaranteed value for the user is 2 Mbps (national regulatory telecommunication agencies often ask for a guaranteed value of at least 10% of the average value).

[Fig. 2. Dynamic queue length scheme]

6 Data Link Control with Dynamic Queue Length

Two important aspects to achieve QoS for the serviced users are extracted from the analytical study in the previous section: the impact of the number of available users, and their exact QoS demands. To guarantee the different users' requirements and their sensitivity to delay and rate, a control on the DLC queue length L is proposed in this paper. The aim of this section is to provide a description of this proposal, performed through a cross-layer scheduling algorithm at the DLC layer of WLAN systems. The main idea of the proposed scheme is depicted in Fig. 2. Each IP packet is stored at the corresponding priority queue in the
IP layer, before moving down to the DLC layer queue. Users from higher-priority IP queues are placed at the beginning of the DLC queue, followed by users with lower-priority traffic. At the physical layer, WLAN systems use different modulation levels, so that variable transmission rates are obtained depending on the channel conditions (measured through the received SNR). The OT scheme is applied to select the user with the best channel conditions in order to maximize the system average throughput.

Regarding the dynamic queue length mechanism, when the maximum allowed delay (or minimum allowed rate) in the delivery of the most delay-sensitive application is comfortably satisfied, then the length of the queue can be increased so that more users can be placed in the DLC layer queue. As a consequence, the OT scheduler can select the user with the best channel conditions from a bigger pool of choices, increasing in this way the performance of the system in terms of the average throughput in Eq. 2. On the other hand, when the maximum allowed delay requirements are barely satisfied, then the length of the DLC queue is shortened. Therefore, only packets from users within the higher-priority classes are available in the DLC layer queue, so that the OT scheduler can only select among these users. Likewise, the same procedure can be applied when the minimum guaranteed throughput per user is the considered QoS indicator.

Note that the proposed dynamic adjustment of the size of the queue exposes the tradeoff between the real-time users' QoS demands and the system average throughput in the network, where the best operating point depends on the network operator requirements. It has to be noted that very delay-sensitive applications, such as VoIP, are in general characterized by short packet lengths that do not extract all the benefit from the throughput of the system.
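The queue-length control loop described above can be sketched as follows. This is a minimal illustration of the rule, not the paper's implementation: the safety margin, step size, and bounds on L are assumptions chosen for the example.

```python
def adjust_queue_length(L, measured_delay, d_max, L_min=1, L_max=20,
                        margin=0.8, step=1):
    """Sketch of the dynamic DLC queue-length rule.

    L              -- current DLC queue length (number of admitted users)
    measured_delay -- observed delay of the most delay-sensitive class (ms)
    d_max          -- maximum allowed delay D_max for that class (ms)

    The 'margin' and 'step' values are illustrative assumptions.
    """
    if measured_delay < margin * d_max:
        # Delay comfortably satisfied: admit more users, so the OT
        # scheduler can pick the best channel from a bigger pool.
        return min(L + step, L_max)
    elif measured_delay > d_max:
        # Delay requirement violated: shorten the queue, so that only
        # higher-priority traffic remains at the DLC layer.
        return max(L - step, L_min)
    return L  # operating near the constraint: keep L unchanged
```

The same skeleton applies when the minimum guaranteed throughput per user is the monitored QoS indicator, with the comparison directions reversed.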
To find the best operating point, the dynamic queue length L (i.e., the number of available users at the DLC layer) is maximized, subject to some system requirements in terms of the users' QoS demands. Taking into consideration the existence of outage in the QoS satisfaction, a proposed optimization procedure for the system performance can be stated as

558  N. Zorba et al.

    max L
    s.t. 1:  Prob{SNR_i < snr_th} ≤ ξ_rate      ∀ i ∈ L        (13)
    s.t. 2:  Prob{D_max < K_i} ≤ ξ_access       ∀ i ∈ L
    s.t. 3:  Prob{T_i ≥ T_min} ≥ 1 − ξ_out      ∀ i ∈ L

where D_max is the maximum allowed delay and T_min is the minimum required throughput per user and per slot. Note that the previous scheme presents the dynamic queue length adjustment together with the QoS concepts (minimum allowed rate, maximum allowed delay and minimum guaranteed throughput), where the operator can choose among the QoS demands the most appropriate ones for each scenario.

7 Performance Evaluation

To evaluate the performance of the proposed dynamic DLC queue mechanism, a heterogeneous scenario is set up where users with four types of applications coexist in the network.
The results are presented both through the theoretical analysis and with Monte Carlo simulations. For the latter, a MathWorks Matlab simulator is employed where a total of N = 20 users are available in the scenario, with five users for each service traffic class. The lengths of the packets for classes 1, 2, 3 and 4 are 100, 512, 1024 and 2312 bytes, respectively. Class 1 has the highest priority, while class 4 is the lowest-priority one. Class 1 can be VoIP and/or on-line gaming applications, while class 4 may represent an FTP download application. A saturated system is considered, where all users have at least one packet available for transmission. A total system bandwidth of 20 MHz and a slot service time of 1 ms are assumed in the simulations. An indoor complex i.i.d. Gaussian channel with ∼CN(0,1) entries is considered. A time scale of 10^6 channel realizations is employed to capture the continuous variations of the channel. The OT scheme is considered to select the user with the best channel conditions at each time instant. Table 1 shows how the SNR values for IEEE 802.11 legacy systems are mapped to the transmission rate, as stated in [18].

The efficiency of our dynamic queue length scheme is compared with a Round Robin based scheduling scheme [19], where the channel conditions are not taken into consideration and the users' accesses to the channel are guaranteed at fixed intervals. This technique is implemented in TDMA systems (e.g., GSM) and it is shown to provide the lowest possible

Table 1  SNR values mapping to rate

    Rate (Mbps)    SNR value (dB)
    0              < −8
    6              −8 to 12.5
    9              12.5 to 14
    12             14 to 16.5
    18             16.5 to 19
    24             19 to 22.5
    36             22.5 to 26
    48             26 to 28
    54             > 28
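The SNR-to-rate mapping of Table 1, together with the OT selection rule used in the simulations, can be sketched as follows. The threshold values are taken directly from Table 1; the function and variable names are illustrative.

```python
# Upper SNR bound (dB) -> rate (Mbps), taken from Table 1
# (IEEE 802.11 legacy rates: 6, 9, 12, 18, 24, 36, 48, 54 Mbps).
SNR_RATE_TABLE = [(-8, 0), (12.5, 6), (14, 9), (16.5, 12),
                  (19, 18), (22.5, 24), (26, 36), (28, 48)]

def rate_from_snr(snr_db):
    """Map a received SNR (in dB) to the Table 1 transmission rate (Mbps)."""
    for upper, rate in SNR_RATE_TABLE:
        if snr_db < upper:
            return rate
    return 54  # SNR > 28 dB: highest rate

def ot_select(snrs_db):
    """OT rule: at each slot, serve the user with the best channel (SNR)."""
    return max(range(len(snrs_db)), key=lambda i: snrs_db[i])
```

At each 1 ms slot the scheduler would call `ot_select` over the SNRs of the users currently admitted to the DLC queue, and transmit at `rate_from_snr` of the winner.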
