A Dynamic Self-Adjusted Buffering Mechanism for P2P Real-Time Streaming (IJISA-V3-N3-1)
An Adaptive Caching Algorithm Based on Local Content Activity in Information-Centric Networks

Keywords: Information-centric networks, Cache replacement strategy, Cache algorithm, Content activity

… a caching algorithm, ACAP, so that cached content is placed along the access path in order of its local activity. Simulation results show that the LAU policy improves the single-node cache hit ratio, and that ACAP achieves a lower server hit ratio and hop ratio than existing path caching algorithms. Finally, the cache structures and network topologies for which the algorithm is suitable are discussed and analyzed.

Abstract  After modeling and analysis, we found that a replacement policy based on global content popularity does not suit the distributed model of information-centric networks. This paper presents a cache replacement policy based on local content activity, called LAU, and proposes an adaptive path caching algorithm named ACAP, so that contents …
A Dynamic Cooperative Caching Strategy for Structured P2P Networks

… can hardly satisfy its requirements in terms of consistency and related properties. However, in a P2P network, when users concentrate their requests on a small fraction of the content … In unstructured P2P networks, caching mechanisms are mainly used to relieve information transmission bottlenecks and reduce bandwidth consumption, and can be divided into three kinds … A dynamic cooperative caching strategy is proposed. Taking the gain and the cost introduced by caching as the criterion, the algorithm decides whether a resource should be cached at a given node, solving the problem that earlier algorithms considered only the performance of a single node while ignoring the overall system load. Simulation results show that the algorithm effectively reduces the system load and the average hop count needed to locate a resource, with a large performance improvement over existing caching strategies.

(Chengdu … Professional College, Chengdu, China)

Abstract  In this paper, a dynamic cooperative object caching strategy for structured P2P networks, named DCache, is proposed, which decides whether or not to cache an object based on the cost and the loss caused by caching. DCache leverages the …
Research on a Dynamic Load Balancing Algorithm for Structured P2P Networks

Research on a Dynamic Load Balancing Algorithm for Structured P2P Networks
张伟文; 吴国新
【Journal】Computer Engineering and Design (《计算机工程与设计》)
【Year (volume), issue】2007, 28(17)
【Abstract】Aiming at the lookup "hot spot" problem that may arise in structured P2P networks, and building on the routing mechanism of DHT-based P2P systems, an ADLB (adaptive dynamic load balancing) algorithm is proposed. The algorithm makes full use of the routing mechanism of the original Chord protocol [4] and of the heterogeneity of nodes in the P2P network, and relieves heavily loaded nodes by dynamically controlling node joins. A set of methods for dynamically monitoring and controlling node load is also presented, and the effectiveness of the algorithm is verified through performance simulation.
【Pages】4 (P4152-4154, 4168)
【Authors】张伟文; 吴国新
【Affiliation】Key Laboratory of Computer Network and Communication Integration, Ministry of Education, Southeast University, Nanjing 210018, Jiangsu, China (both authors)
【Language】Chinese
【CLC number】TP393.02
A Collaborative Multicast Proactive Caching Scheme Based on AAE

A Collaborative Multicast Proactive Caching Scheme Based on AAE
刘鑫; 王珺; 宋巧凤; 刘家豪
【Journal】Computer Science (《计算机科学》)
【Year (volume), issue】2022, 49(9)
【Abstract】With the rapid growth in the number of user terminals and the development of 5G technology, networks in which macro base stations and small base stations coexist have emerged. Meanwhile, applications such as ultra-high-definition video and cloud VR/AR impose stricter latency requirements. To reduce latency in 5G networks, this paper combines small-base-station cooperation, multicast, and the predictability of user behavior, and proposes a collaborative multicast proactive caching scheme based on adversarial autoencoders (Adversarial Autoencoders, AAE), called CMPCAAE (Collaborative Multicast Proactive Caching Scheme Based on Adversarial Autoencoders). The scheme first divides users into groups with different preferences according to their feature information, and then uses the AAE to predict the content each user group is likely to request. To reduce redundancy among cached contents, an ant colony optimization (Ant Colony, ACO) algorithm is used to pre-deploy the predicted contents to the small base stations, achieving cooperation among them. In the content distribution phase, if a user in a group requests highly popular content, that content is proactively cached via multicast at the other users in the group that did not issue a request; otherwise the content is distributed in the normal way. Simulation results show that the CMPCAAE scheme outperforms classical caching schemes in both average request latency and loss rate.
【Pages】8 (P260-267)
【Authors】刘鑫; 王珺; 宋巧凤; 刘家豪
【Affiliation】School of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications
【Language】Chinese
【CLC number】TP393
Research on a Pre-Connection-Based Dynamic Resource Download Algorithm in P2P Systems

Keywords: streaming media buffering; P2P; pre-connection; urgent data channel strategy
CLC number: G210    Document code: A    Article ID: 1672-5409(2011)03-0174-02
Received: 2010-12-29
Author biographies: 苏逸 (b. 1984), male, from Nanchong, Sichuan, teaching assistant, research in computer science and technology; 凌人, teaching assistant, research in computer science and technology.

I. The pre-connection-based dynamic resource download algorithm

The pre-connection-based dynamic resource download algorithm consists of three parts: a pre-download selection strategy for data slice blocks, an urgent data channel strategy, and a pre-connection-based peer selection strategy. The three strategies share the work of guaranteeing that data arrives in time: the first reduces the impact that differences in the health of data across the network may have on timely arrival; the second ensures that data is as complete as possible before the system delivers it to the player; and the third reduces the impact that performance differences among peers have on timely arrival. Viewed from the overall system architecture, if the system is divided top-down into an application layer, a file-system layer, a P2P layer, and a network layer, then the first strategy serves the application layer, the second serves the file-system layer, and the third …

Figure 2. Service regions corresponding to the three strategies

II. Concrete implementation steps

In concrete terms, the following steps are taken.
1. Pre-download selection strategy for data slice blocks
… three kinds: …, not yet downloaded, and currently downloading. Suppose the number of the slice block currently being played is …, the size of the resource pre-download window is N, and the pre-download window contains …
A Software Dynamic Linking Technique Based on the TI TMS320 DSP

(1. Science and Technology on Communication Information Security Control Laboratory, Jiaxing 314033, China; …)
Vol. 20, No. 11    Electronic Design Engineering (电子设计工程)    Jun. 2012
A Software Dynamic Linking Technique Based on the TI TMS320 DSP
CLC number: TN…    Document code: A    Article ID: 1674-6236(2012)11-0167-04
A software dynamic link method based on TI TMS320 DSP
Developers have adopted techniques such as reconfiguration and overlay to implement the communication protocol layers … and it is expected to be widely used in subsequent DSP development flows.
The TI DSP series development toolkit provides developers with a complete software development tool chain, including a C/C++ compiler, an assembler, and a linker. Using this toolkit, DSP developers can carry a program from writing through compilation and linking to the generation of target code. The C/C++ compiler translates C code into assembly language, and the assembler translates the assembly language into machine language, which is output to the linker as COFF object files. The linker combines the COFF files produced by the assembler with library files into a complete program and writes it to disk as an executable COFF file, producing the DSP executable.
A Dynamic Load-Balanced P2P Application-Layer Multicast Scheme

A Dynamic Load-Balanced P2P Application-Layer Multicast Scheme
林龙新; 周杰; 张凌; 叶昭
【Journal】Computer Science (《计算机科学》)
【Year (volume), issue】2008, 35(2)
【Abstract】Based on a structured P2P infrastructure, a dynamic load-balanced application-layer multicast scheme, DLBMS, is presented. Using the routing and location mechanism of the Tapestry protocol, a delay-optimized multicast forwarding-tree structure is designed; multiple disjoint multicast forwarding trees are generated by replicating the root node, and the number of forwarding trees is adjusted dynamically as the load changes, so as to balance the load and reduce the end-to-end delay from the source to group members. Simulation experiments demonstrate the effectiveness of the scheme in terms of average control load and average end-to-end delay.
【Pages】4 (P23-26)
【Authors】林龙新; 周杰; 张凌; 叶昭
【Affiliation】Guangdong Key Laboratory of Computer Network, South China University of Technology, Guangzhou 510641 (all four authors)
【Language】Chinese
【CLC number】TP3
An Index Management Algorithm for P2P Storage Systems

An Index Management Algorithm for P2P Storage Systems
关中
【Journal】Computer Science (《计算机科学》)
【Year (volume), issue】2008, 35(6)
【Abstract】The PB-link Tree distributes a B+ tree across multiple nodes through hash-based placement, solving the problems of index completeness and accuracy in a dynamic P2P environment. Experiments show that data reliability and consistency are maintained even when nodes frequently join or leave the system. Moreover, compared with the traditional DB-link Tree, the PB-link Tree transfers less data during queries and has shorter query times.
【Pages】3 (P139-140, 144)
【Author】关中
【Affiliation】Guangzhou City Polytechnic, Guangzhou 510230
【Language】Chinese
【CLC number】TP3
A P2P File Sharing System That Avoids Traffic Diffusion

A P2P File Sharing System That Avoids Traffic Diffusion
李松斌; 王劲林
【Journal】Network New Media Technology (《网络新媒体技术》)
【Year (volume), issue】2008, 29(12)
【Abstract】Most current P2P file sharing systems are designed without considering the topology of the underlying network, so the resulting massive P2P traffic crowds out precious backbone resources and has made P2P computing unwelcome to network operators. To address this problem, this paper takes the operator's perspective and, based on an in-depth analysis of China's Internet, presents a method for building a P2P file sharing system with adaptive multi-level node clustering. The method confines the vast majority of P2P traffic within the local network and prevents it from spreading to upper-level networks. Simulation results show that the method can keep more than 80% of P2P traffic within the operator's provincial network.
【Pages】7 (P27-33)
【Authors】李松斌; 王劲林
【Affiliation】Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190; Graduate University of the Chinese Academy of Sciences, Beijing 100049; Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190
【Language】Chinese
【CLC number】TP3
A Traffic-Adaptive iSLIP Algorithm

A Traffic-Adaptive iSLIP Algorithm
王景存; 张晓彤; 谢馨艾; 刘兰军
【Journal】Journal of Beijing University of Technology (《北京工业大学学报》)
【Year (volume), issue】2007, 33(2)
【Abstract】To address the severe performance degradation of the iSLIP (iterative round-robin matching with slip) algorithm under bursty traffic, a traffic-adaptive iterative slot-matching algorithm, TA-iSLIP (traffic adaptive iSLIP), is proposed on the basis of iSLIP. The algorithm infers the current traffic conditions from queue lengths and adopts different scheduling strategies accordingly, making full use of matches that have already been established so as to minimize the matching overhead of the system. A description of the TA-iSLIP algorithm and its performance evaluation are given, and the algorithm is compared with the iSLIP and FIRM (FCFS in round-robin matching) algorithms. Simulation results show that TA-iSLIP performs well under both uniform and non-uniform traffic, achieving a throughput above 97% under non-uniform traffic.
【Pages】6 (P219-224)
【Authors】王景存; 张晓彤; 谢馨艾; 刘兰军
【Affiliation】School of Information Engineering, University of Science and Technology Beijing, Beijing 100083; School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430081
【Language】Chinese
【CLC number】TP301.6
Research on an Edge Collaborative Caching Algorithm Based on a Bidding Model and an Intelligent Iteration Model

Research on an Edge Collaborative Caching Algorithm Based on a Bidding Model and an Intelligent Iteration Model
李长敏琛; 陆飞澎; 王燕灵
【Journal】Modern Electronics Technique (《现代电子技术》)
【Year (volume), issue】2024, 47(12)
【Abstract】To let users obtain content faster, a combined edge collaborative caching algorithm based on a bidding model and an intelligent iteration model is proposed. In the bidding part, the entropy weight method is used to compute weights for an intermediate value, access popularity, cache space, and content popularity; a cache score is then computed for each content item on a node, and cache replacement is performed according to these scores (a small sketch of this scoring step follows the abstract). In the iteration part, the network space is divided into several sub-domains, and each sub-domain periodically runs a hybrid genetic algorithm to derive the caching plan for the contents within it. Experimental results show that the proposed algorithm has clear advantages in reducing cloud server load and the average number of hops per request.
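The bidding-stage scoring described in this abstract can be illustrated with a short sketch. This is not the paper's code: the feature names, the assumption that the four features are already normalized per node, and the lowest-score eviction rule are illustrative guesses based only on the abstract.

```python
import math

def entropy_weights(items):
    """Entropy weight method over a list of feature dicts (assumes 2+ items and
    features already normalized to comparable, non-negative scales)."""
    keys = list(items[0].keys())
    n = len(items)
    entropies = {}
    for k in keys:
        col = [it[k] for it in items]
        total = sum(col) or 1e-12
        p = [v / total for v in col]
        # Shannon entropy of the column, normalized by ln(n)
        entropies[k] = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
    diversity = {k: 1.0 - e for k, e in entropies.items()}
    dsum = sum(diversity.values()) or 1e-12
    return {k: d / dsum for k, d in diversity.items()}

def cache_scores(items, weights):
    """Weighted sum of features, one score per cached content item."""
    return [sum(weights[k] * it[k] for k in weights) for it in items]

# Hypothetical feature values for two cached items on one edge node.
contents = [
    {"mid_value": 0.4, "access_heat": 0.9, "cache_space": 0.2, "content_heat": 0.8},
    {"mid_value": 0.7, "access_heat": 0.1, "cache_space": 0.6, "content_heat": 0.3},
]
w = entropy_weights(contents)
scores = cache_scores(contents, w)
victim = min(range(len(contents)), key=lambda i: scores[i])  # replace the lowest-scoring item
```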
【Pages】5 (P165-169)
【Authors】李长敏琛; 陆飞澎; 王燕灵
【Affiliation】Hangzhou Dianzi University; 杭州新中大科技股份有限公司
【Language】Chinese
【CLC number】TN919-34; TP311
I.J. Intelligent Systems and Applications, 2011, 3, 1-10Published Online May 2011 in MECS (/)A Dynamic Self-Adjusted Buffering Mechanismfor Peer-to-Peer Real-Time StreamingJun-Li Kuo, Chen-Hua Shih and Yaw-Chung ChenComputer Science, National Chiao Tung University, HsinChu, TaiwanEmail: estar.cs95g@.tw; shihch@.tw; ycchen@.twAbstract—Multimedia live stream multicasting and on-line real-time applications are popular recently. Real-time multicast system can use peer-to-peer technology to keep stability and scalability without any additional support from the underneath network or a server. Our proposed scheme focuses on the mesh architecture of peer-to-peer live streaming system and experiments with the buffering mechanisms. We design the dynamic buffer to substitute the traditional fixed buffer.According to the existing measurements and our simulation results, using the traditional static buffer in a dynamic peer-to-peer environment has a limit of improving quality of service. In our proposed method, the buffering mechanism can adjust buffer to avoid the frozen or reboot of streaming based on the input data rate. A self-adjusted buffer control can be suitable for the violently dynamic peer-to-peer environment. Without any support of infrastructure and modification of peer-to-peer protocols, our proposed scheme can be workable in any chunk-based peer-to-peer streaming delivery. Hence, our proposed dynamic buffering mechanism varies the existing peer-to-peer live streaming system less to improve quality of experience more.Index Terms—multimedia, live streaming, real-time service, peer-to-peer, buffer control, IPTVI.I NTRODUCTIONWith advancement and development of network technology, more and more on-line applications appended to the Internet are popular among many people in the recent years. In addition, some services limited to the bandwidth challenges become practicable and feasible because of an increase of broadband access. For example, the service of multimedia multicast satisfies hundreds or thousands of users simultaneously. More and more people enjoy multimedia services easy from youTube or many Web 2.0 websites. It is easy that users just click a mouse to access Internet as well as press a remote controller to watch Internet television.The Internet television owns more outstanding advantages than traditional television, such as rich and various programs as well as watching television anytime and everywhere via wireless mobile device. Even a successful IPTV (Internet Protocol TeleVision) system can allow arbitrary users to create the user-generated content. Several researchers forecast and expect that IPTV will be the television in the next generation [1].However, the traditional server-client system cannot afford the high bandwidth of multimedia streaming for IPTV; hence, many researchers considered that peer-to-peer (P2P) architecture should be a good network overlay to broadcast multimedia streaming based on four reasons: (1) P2P technology has been developed successfullyfor file sharing and has been demonstrated to healwith the accessed utilization for thousands ofusers simultaneously. P2P technology also ownsthe scalability, such as BitTorrent and eDonkey. (2) P2P can use the resource of each peer adequately,especially bandwidth resource. 
Peers cancooperate with each other to share availablebandwidth for the scalable applications.(3) P2P content distribution uses an application layermulticast without any additional support from theunderneath network.(4) P2P servicing model greatly reduces server cost.In summary, using P2P overlay has become anincreasingly popular approach for live streamingover the Internet.Presently, the live media distribution is called “P2P live streaming” and is different from video on demand (VoD). P2P live streaming service usually provides in-time programs, such as live ball game, first-hand stock information, or the latest news. On the other hand, VoD service often provides the rebroadcasted movies or series. Obviously, the limit of real-time matters must be considered more in live streaming than in VoD. Therefore, various kinds of discussed issues in the P2P live streaming system were derived and referred to quality of service (QoS). In this paper, we consider in the users’ opinions instead of the developers’ opinions.The structures of P2P live streaming system can be roughly differentiated between tree-based structure and mesh-based structure. According to the data-driven way (i.e. how to acquire data), we can call them tree-push and mesh-pull respectively. Tree-push means that the ancestors usually push available source to their descendants. On the contrary, mesh-pull means that the peer collects available source by itself and competes against other peers. Additionally, many recent studies addressed the concept about hybrid; hence, push and pull already have combined each other or worked together [2]. In our practice and experience, tree and mesh own their2 A Dynamic Self-Adjusted Buffering Mechanism for Peer-to-Peer Real-Time Streamingindividual advantages respectively, maybe the programming developers take trade-off into account to select one structure among them to implement a P2P live streaming system:(1) Single-tree structure is simple and efficient butvulnerable to dynamics;(2) Multiple-tree structure is more resilience but morecomplex than single tree;(3) Mesh structure is more robust but occurs to longerdelay and control overhead.So far, most of the previous research modifies application multicast to improve the quality of experience (QoE) [12] for the audiences and almost probes into (1) Overlay structure, (2) Peer adaptation, (3) Data scheduling, and (4) Hybrid reconstruction.(1) Overlay structure. Briefly, overlay structureincludes basic architecture and data deliverystrategy. Basic architecture of mesh consists ofswarming and gossip. Swarming originates fromthe swarm of BitTorrent, means the group of peersthat all share the same torrent. Therefore,swarming is like as the BT-liked protocol. Besides,gossip is similar with swarming, and peerselection of gossip protocol is freer than swarming.Data delivery strategy varies according to above-mentioned basic architecture. In general, thecentralized delivery strategy has the higherefficiency but the less scalability than thedistributed delivery strategy. The developers try tofind the most suitable overlay structure for P2Plive streaming.(2) Peer adaptation. Peers join or leave arbitrarily,also called peer churn. No matter what overlaystructure is used, P2P streaming system needs arecovery mechanism to ensure a reliable workafter peer churn. The overlay is reconstructed tomaintain an adequate amount of source for allpeers. In addition, peers can change partners to getthe better QoS. 
These behaviors are generallycalled peer adaptation.(3) Data scheduling. In mesh-pull scheme, asuccessful download must decide who to berequested, which one chunk1 to get, and how toget available chunk. A good data schedule canselect a workable peer to get a useful chunk tomaintain QoS.The well-known data scheduling includes rarestfirst and optimistic unchoke in BitTorret, which ismuch efficient and nice in P2P file sharing.However, the data schedule of BitTorret is notsuitable for live streaming service. A dataschedule for P2P live streaming takes basically theprompt delivery and buffered queue into account.1 A chunk is a unit of video data block.(4)Hybrid reconstruction. Several inherentshortcomings of mesh overlay are difficult toovercome, so some researchers consider that anintegration of mesh and tree is a resolution for thechallenges from mesh [11].Making these four alterations mentioned above in an old and an existing system is resource-consuming because it is necessary to modify the original multicast, to alter the real-time protocol, or to increase the overhead of each peer. Hence, these issues can only be considered when developing a new P2P streaming system. Furthermore, so far, previous studies never aimed at buffer control to improve QoS.To vary the existing peer-to-peer live streaming system less but improves quality of experience more, our proposed scheme focuses on the mesh architecture of P2P live streaming system and experiments with the buffering mechanisms. P2P swarming2 based on mesh must have a component to buffer video data for the smoothness. We enlarge or shorten the buffer size with the variation of data rate to prevent from the unstable source due to peer churn. Only buffer process of client is modified to improve the global efficiency of streaming system without any additional overhead and modification.The rest of this paper is organized as follows. In Section 2, we present the buffer introduction of PPLive and Cool-Streaming. In Section 3, we discuss our proposed scheme in detail. Simulation results are presented in Section 4, and we conclude the work in Section 5.II.R ELATED W ORKBuffer control is indispensable to a P2P network, and itis also one component of P2P live streaming system to avoid jitter, especially in swarming delivery under randomly mesh architecture. The buffer of P2P live streaming is smaller than 10 MB of each peer accordingto the measurement, and the buffer size equally reserves the video data for 200 seconds around [3]. Two reasons why mesh structure use buffer mechanism are necessary. First, buffer keeps video playing smoothly. As a result, every peer must buffer a sufficient quantity of video data and ensure the stable source to begin playing back. Second, buffer mitigates the possible challenges becauseof P2P churns. In general, peer churn often causes an interruption or a break of data delivery; hence, the buffered data can deal with the emergency before the overlay recovery.A.The Buffer Observation of PPLivePPLive is a popular Internet application for P2P streaming, but it is a pity that PPLive software is not opento academia. Therefore, we only deduce its principles from the monitoring measurements [3].2 Swarming means peers all cooperate to get the same work in a group.Depending on Figure 1, PPLive starts to establish and set up buffer after opening to execute. And then PPLive program requires the memory space to buffer data, the initial buffer size equals approximately 100 chunks. 
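The two roles of buffering just described (start playback only after enough data has been collected, and absorb short interruptions caused by churn) can be illustrated with a minimal chunk-based playout buffer. This is only a sketch: the class name, the dictionary-based storage, and the 100-chunk start threshold (echoing the initial PPLive buffer size mentioned above) are illustrative assumptions, not the actual PPLive implementation.

```python
class PlayoutBuffer:
    """Minimal chunk-based playout buffer for a mesh-pull P2P stream."""

    def __init__(self, start_threshold=100):
        self.chunks = {}               # sequence number -> chunk payload
        self.play_seq = 0              # next sequence number the player needs
        self.start_threshold = start_threshold
        self.started = False

    def insert(self, seq, payload):
        # Chunks may arrive out of order from different partners.
        if seq >= self.play_seq:
            self.chunks[seq] = payload

    def contiguous_depth(self):
        # Number of chunks available in order, starting at the playback point.
        depth = 0
        while self.play_seq + depth in self.chunks:
            depth += 1
        return depth

    def next_chunk(self):
        """Return the next playable chunk, or None if the player must wait."""
        if not self.started:
            if self.contiguous_depth() < self.start_threshold:
                return None            # still filling the initial buffer
            self.started = True
        chunk = self.chunks.pop(self.play_seq, None)
        if chunk is not None:
            self.play_seq += 1
        return chunk
```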
Buffer size grows with time until it reaches about 1000 chunks, after which it stops growing or grows only slowly. The reason for the slower growth is that 1000 chunks are already sufficient for playback, so enlarging the buffer further is not urgent. Combining Figure 1 and Figure 2, we can see that the buffer stops growing entirely once it reaches 1100 chunks. In addition, changes in network parameters do not vary the buffer size; thus the buffer control of PPLive is of the static, fixed-size type.

Figure 1. Buffer size vs. buffer saturation at initial time

However, Figure 2 reveals a drawback of the fixed-buffer mechanism. When the incoming stream feeds the buffer scarcely or unstably, the buffer still holds a fixed size. This inflexible approach may lead to video freezing, video skipping, and program rebooting, which in turn degrade the users' QoE. All three situations result from a lack of buffered data; in other words, chunks fail to arrive at the destination before the playback deadline.

Figure 2. Buffer variation

As Figure 3 illustrates, occasional nulls create small gaps in the in-order buffer, and these gaps result in video skipping. If a block of chunks cannot be collected even though the connections to peers remain healthy, the video player freezes until the available chunks reach the playable region of the buffer. Freezing is caused by the low availability of urgently needed chunks, due either to network congestion or to a poor chunk-selection algorithm. However, if a peer's departure leads to starvation and peer adaptation cannot handle the trouble immediately, the user's streaming program has no choice but to reboot.

Figure 3. Buffer accidental events: (a) skipping, (b) frozen, (c) reboot, illustrated over the playing position, the buffer, and the input streaming, with available chunks and nulls (unavailable chunks).

These three accidental events have been discussed in terms of the buffer situation in earlier papers; that is, buffer occupancy decides which mode is entered, as shown in Figure 3. In our research, we likewise reduce the subject to a discussion of buffer occupancy. The buffer acts as a bridge between the peer connections and the application player. When network QoS starts to degrade or a peer detects insufficient resources, the client program should adjust its data scheduling or execute peer adaptation early to cope with the imminent predicament.
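The decision among skipping, freezing, and rebooting, driven purely by buffer occupancy as described above, might be sketched as follows. The window size, the gap limit, and the reboot threshold are illustrative parameters; only the three outcomes come from Figure 3.

```python
def classify_buffer_event(available, play_seq, window=60, skip_gap_limit=5, reboot_threshold=30):
    """Classify the buffer situation around the playback point.
    available: set of sequence numbers currently held in the buffer.
    Returns 'play', 'skip', 'frozen', or 'reboot' (the modes of Figure 3)."""
    # Length of the run of nulls (missing chunks) right at the playback point.
    leading_nulls = 0
    while (play_seq + leading_nulls) not in available and leading_nulls < window:
        leading_nulls += 1

    if leading_nulls == 0:
        # Only small gaps further ahead: playback will skip over them.
        gaps = [s for s in range(play_seq, play_seq + window) if s not in available]
        return "skip" if 0 < len(gaps) <= skip_gap_limit else "play"
    if leading_nulls >= reboot_threshold:
        return "reboot"    # starvation longer than the tolerated frozen time
    return "frozen"        # wait until the playable region is reached again
```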
B. The Buffer Introduction of CoolStreaming

CoolStreaming is a successful, well-known, and open P2P live streaming system. The history of CoolStreaming, together with its basic architecture and design concepts, is given in the academic report [4]. One paper on CoolStreaming [5] specifies in detail how its system components and algorithm procedures are designed. Another paper [6] provides the mathematical and numerical analysis, the simulation and evaluation results, and tests in the real world. In particular, the buffer mechanism is described in the theoretical article on CoolStreaming, as shown in Figures 4 and 5 [6]. Importantly, CoolStreaming is still alive, so it keeps being updated, corrected, and modified. Its latest design for P2P live streaming and a survey of users' real experience have also been published: the CoolStreaming+ system modifies buffer management and data scheduling to improve the efficiency of hybrid pull-push delivery with multiple sub-streams [7], and analyzes online statistics to find a way for NAT traversal [8].

As shown in Figure 4, a video stream is encoded and divided into many chunks. Every chunk carries a sequence number to identify it, and the buffer can sort chunks according to their sequence numbers. A video stream is decomposed into several sub-streams to avoid a single point of failure; all sub-streams together recombine into the complete video stream. When one delivery path (sub-stream) is disconnected, a peer can still receive the other sub-streams and reassemble part of the video stream.

As shown in Figure 5, the CoolStreaming buffer consists of a synchronization buffer and a cache buffer. The synchronization buffer receives incoming chunks from every sub-stream, synchronizes them into the correct memory positions, and then pushes them to the cache buffer. The cache buffer sorts chunks by sequence number in preparation for playback.

The buffer of CoolStreaming is fixed, just like that of PPLive, and there is no mechanism for dealing with emergencies. The designers assumed that if peers or neighbors are selected well, a stable and sufficient data source follows; focusing on optimal overlay construction or peer adaptation is the currently popular approach.

Figure 4. Example of stream decomposition: a single stream of chunks with sequence numbers is decomposed into, and recombined from, sub-streams {S1, S2, S3, S4}.

Figure 5. Structure of buffers and the chunk combination process in a peer

III. PROPOSED SCHEME

A. Algorithm for Proposed Scheme

As mentioned above, a partner's departure can break a delivery path, and a temporary lack of data can lead to a reboot. Once a reboot happens, the user is compelled to execute peer selection again and to wait a long time before watching frames again; users inevitably complain about the long waiting time. The reboot can, however, be avoided if the peer monitors the variation of its data rate and adjusts the buffer size based on that rate. A decrease in the data rate is often a warning that a partner is leaving. When the peer's data rate decreases, the buffer is enlarged, so that it reserves some non-continuous data early. During this period of overlay recovery, collecting sequential data is not the top priority in the schedule of our proposed algorithm. A lack of data inevitably results in bad QoS during recovery: users watch non-continuous frames, but the reboot is avoided. In summary, we sacrifice some playback quality of the video to decrease the probability and frequency of reboots.

A decrease in the data rate can also indicate a sudden jitter or bottleneck in network bandwidth. This situation differs from peer leaving: peers still maintain the delivery paths or multicast streams and keep sharing data chunks with each other. In this case, peers do not need to recover the overlay; they can passively wait for quality to be restored or actively perform peer adaptation. The buffer is enlarged to get through the period of unstable network bandwidth. After peer churn, peer adaptation, overlay recovery, and so on, the buffer shrinks back to its original size once stable quality returns.

Figure 6 illustrates the final states of a peer's life cycle in the mesh scheme. Black lines denote the regular, successful processes, and grey dotted lines denote accidental failures with error exceptions. A peer communicates directly with the server when it newly joins. In the new-joining process, the server gives the new peer an identification and performs the authentication.
Then thesecond process is an initiation of preconditionedparameters, including the playback time synchronization,a set of candidate partners, the information of TVprograms, other arguments of player, etc. Next, after theinitiation, start-up process is responsible for selecting thepartners and initializing the buffer to start gathering thevideo data.New joiningInitiationStart-upRunPlayPeer adaptationBuffer managementWaitSkipPeer re-selectionBuffer checkStopRebootFigure 6. Final states of peers’ life cycleThe importance of our algorithm put emphasis on threethreads including video play, peer adaptation, and buffermanagement in the run process. The threads executenonstop in the background. Buffer management of ouralgorithm monitors and computes received data rate tojudge whether buffer must be enlarged or shortened.When received data rate decreases, buffer size is enlargedto gather and reserve non-continuous chunks beforehandto reduce the probability of reboot. However, QoS degradation is inevitable consequently because the gathered chunks are non-continuous.As Figure 6 illustrated, when a problem happens in playback, player is not able to show any image and then it enters into wait process. The player waits for available video data from buffer to play frames again. In wait process, the system can judge how to deal with the problem. In general, the controller of system can check buffer to decide that the video should be frozen, skipped, or rebooted as Figure 3 shown. The worst case is rebooting due to player system will return to the initiation state after rebooting; hence, the expected time of playing video again is very much long.B. Client Program FlowIn summary, we list these processes and theirresponsibilities in Table I:TABLE I T HE R ESPONSIBILITIES AND F UNCTIONS OF C LIENT ’S P ROCESS IN O URP ROPOSED S CHEMEProcess Responsibility Functions New join IdentificationBind with server Ensure identification and authenticationInitiation Configuration Setup network parameters Load and link media playerSynchronize playback informationGet candidate partners from serverStart-up Preparation Build local P2P overlayInitialize bufferOpen decode streamRunPlayGet data from buffer to playback Execute decode streamUpdate information to server Interact with user Peeradaptation Reselect partnersMaintain local P2P overlayBuffermanagement Monitor and compute incoming bit rate Enlarge or shorten size of buffer Schedule chunk selection WaitInterruptionStop to playback Buffer check Decide playback skip PeerreselectionReselect partnersReboot RenewalClose decode stream and clear buffer Close connections with peersPrepare for initiation and start-up again Stop ExitExit P2P network Close media playerDisconnect all connections1. New join: A peer connects to the server and logins for identification and authentication in the first process. And then the peer binds the server to keep alive.2. Initiation: A peer must configure the software to worksuccessfully in P2P network. This configuration includes two parts, one is about network, and another is about media application. Under the asymmetrical incoming/outgoing bandwidth, each peer sets up network parameters to adapt itself in P2P network. Peer gets the information of the playback time synchronization and a set of candidate partners from server. Media player is linked and loaded in this process.3. Start-up: After the processes of new-join and initiation, peer starts up the preparation for smoothness before playback. 
It selects partners from candidates to build local P2P overlay. It also initializes the buffer and opens decode stream. These actions are done for starting to gather the video data. The duration since start-up process to run process is called as start-up delay. Users are always intolerable for long start-up delay. The good P2P overlay, good data scheduling, good network capacity, and good buffer mechanism can shorten the start-up delay to improve QoE.4. Run: This process is the most important part of our proposed scheme. It includes video play, peer adaptation, and buffer management; buffermanagement is the most significant contribution among them. In traditional schemes, buffer management just gives assistance to data schedulingand sorts data to extract from buffer for playback. Inour proposed scheme, besides working above jobs, buffer management monitors and computes incomingdata bitrates. According to the incoming data bitrates, buffer size and data schedule can be adjusted into dynamic P2P environment. Peer adaptation always finds and evaluates suitable candidate partners to improve P2P efficiency of data delivery, i.e. to build a robust overlay for locality. In other words, peer adaptation can reselect partners to maintain local P2P overlay. Peer adaptation handles the assignments of lower network layer, and play handles the assignments of higher application layer to interact with users. Its important work is the media playback including decoding and screening, play must update and report relative information to server periodically. 5. Wait: When any error interrupts playback leading to that player shows incorrect images, the program flows into wait process. We focus the kind of P2P network error, such that wait triggers interruption, buffer check, and peer reselection sequentially. First, interruption stops playing to avoid decode error of play in run process (usually means the null error). Interruption notices the player waits for available video data from buffer to play again. Second, buffer is checked to decide that the video should be skipped, frozen, or rebooted. This decision influences peer reselection. Third, peer reselection is executed if it is necessary.Peer reselection is similar with peer adaptation, but reselection has two differences from adaptation. (1) Peer reselection works in play-off time, it builds new connections between other peers; peer adaptation works in play-on runtime, it maintains old connectionsfor local overlay. (2) Peer reselection can require server to fetch a new list of candidate partners for overlay recovery; peer adaptation knows new partners by itself via exchanging overlay information with each other.6. Reboot: This process deals with renewal and reset to prepare to initialize and start up again. Therefore, reboot needs to close video stream and transmission stream, and clear buffer.7. Stop: Users end up this application, and thus clients disconnect all transmissions and streams to exit P2P network.We can discover this client program flow influence the performance of P2P system. For examples, start-up process produces start-up delay; run process produces playback delay; wait process produces recovery time. However, these performance metrics are subjective. An objective discussion is in next section.C.Parameters of System PerformanceIn P2P applications for file sharing or VoD service, the utilization of download bandwidth is as high as possible. However, for live streaming, the incoming bit rate equals to the encoding rate of playback quality. 
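The buffer-management behavior listed in Table I and described above (monitor the incoming bit rate, enlarge the buffer when the rate drops, shrink it back when the rate recovers) can be sketched as below. The smoothing factor, the scaling factors, and the size bounds are our illustrative choices, not values taken from the paper.

```python
class RateMonitor:
    """Exponentially weighted estimate of the incoming bit rate (kbps)."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.estimate_kbps = 0.0

    def update(self, bytes_received, interval_s):
        sample_kbps = bytes_received * 8 / 1000 / interval_s
        self.estimate_kbps = (1 - self.alpha) * self.estimate_kbps + self.alpha * sample_kbps
        return self.estimate_kbps

def adjust_buffer_size(current_size, incoming_kbps, playback_kbps,
                       min_size=100, max_size=2000, grow=1.5, shrink=0.9):
    """Dynamic buffer sizing (in chunks) driven by the incoming/playback rate ratio."""
    r = incoming_kbps / playback_kbps     # the efficiency ratio R defined in the next subsection
    if r < 1.0:
        new_size = int(current_size * grow)      # source weakening: reserve more data early
    elif current_size > min_size:
        new_size = int(current_size * shrink)    # stable again: shrink back toward the original size
    else:
        new_size = current_size
    return max(min_size, min(max_size, new_size))
```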
In a live show the users (peers) cannot obtain future data and do not need past data, so if the incoming bit rate is stable and close to the playback quality rate, the user should enjoy smooth playback. We define the efficiency ratio R of the incoming bit rate to the playback quality rate q:

Efficiency ratio R = Incoming bit rate (kbps) / Playback quality rate q (kbps).    (1)

If R > 1, packet duplication or a temporary remedy for packet loss is taking place. If R = 1, the user can watch smoothly. If R < 1, the available data is insufficient for playback.

We also define R_t as the efficiency ratio at time t, so the average throughput of an incoming stream is ∫_t R_t q dt. A lower efficiency ratio usually indicates a more dynamic overlay with frequent peer churn (leaving). We further define the chunk available rate as the number of availably collected chunks divided by the total number of accessible chunks; the higher the chunk available rate, the higher the availability of the source for efficient delivery. The chunk available rate is always smaller than the efficiency ratio, but over a long time it converges approximately toward the efficiency ratio:

Chunk available rate = (# of available chunks) / (# of total chunks) ≤ 1.0.    (2)

A QoE criterion evaluated by the smoothness of playback is the playable rate, also called continuity [5]. The playable rate is defined as the playable time over the total playback duration; equivalently, it equals the number of video chunks that arrive before their playback deadlines over the total number of video chunks. Naturally, the playable rate is approximately equal to the chunk available rate:

Playable rate = (# of chunks in buffer before deadline) / (# of total chunks) = Playable time / Total playback duration.    (3)

We can deduce that the average throughput of an incoming stream should equal the total number of chunks, i.e., chunk available rate × t. In general, the efficiency ratio is often smaller than 1.0, and efficiency ratio ≥ chunk available rate ≥ playable rate, because peer competition, end-to-end delay, chunk unavailability, in-time deadlines, overlay reconstruction, and other challenges exist in a P2P network. When the playback duration is very large, R = efficiency ratio ≒ chunk available rate ≒ playable rate. In addition, if the playable rate decreases, the probability of reboot increases. If a reboot happens when there are n continuous nulls in the buffer, the probability of reboot is (1 − R)^n. We call n the reboot threshold; it represents how long a frozen period the streaming service can tolerate.
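To make the three metrics and the reboot probability concrete, here is a small worked example with made-up numbers (a 400 kbps playback quality rate, a 320 kbps incoming stream, and a reboot threshold of n = 5 nulls); all figures are illustrative.

```python
# Worked example of Eqs. (1)-(3) and the reboot probability (1 - R)^n.
playback_kbps = 400.0            # playback quality rate q
incoming_kbps = 320.0            # measured incoming bit rate
R = incoming_kbps / playback_kbps                         # Eq. (1): 0.8

total_chunks = 1000
available_chunks = 780
chunk_available_rate = available_chunks / total_chunks    # Eq. (2): 0.78 <= R

chunks_before_deadline = 760
playable_rate = chunks_before_deadline / total_chunks     # Eq. (3): 0.76 <= 0.78

n = 5                            # reboot threshold: tolerated run of nulls
p_reboot = (1 - R) ** n          # (1 - 0.8)^5 = 0.00032

print(R, chunk_available_rate, playable_rate, p_reboot)
```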
IV. SIMULATION

A. Simulation Environment

We implemented a media simulator, consisting of a server-program, a swarm-program, and a client-program, in the Java language. The server-program acts as the data source server of a swarm and provides the video data, while the swarm-program creates five threads to simulate five individual peers in the swarm. The peers sometimes cooperate to complement the media chunks and sometimes compete for scarce sources. When providing data to clients, the swarm-program simulates peer competition, which brings chunks into the clients' buffers out of order, and peer churn, which varies the incoming bit rate violently. Peer churn breaks the stable incoming stream and degrades quality. The client-program plays video for users and gathers packet-level statistics to estimate QoE. All simulation results, together with many dynamic network metrics, are observed from the client-program.

Figure 7 provides an overview of our simulation system. The server-program can open a video and multicast it to the client through swarming. While multicasting, the experiments can control the rate variation via the swarm-program. In addition, both the server and the client run a media player so that the three delays leading to the lag of IPTV (the start-up delay, the end-to-end delay, and the buffer preparation latency between server and client) can be observed. On the screen of the client-program, users can experience the quality with their own eyes, which describes the subjective QoE. On the other hand, in the background of the client-program, we can obtain packet-level statistics through the occupancy of the buffer.
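The three-part simulator described above (a server feeding the swarm, five peer threads that reorder chunks and occasionally churn, and a client that measures what arrives) could be mocked up roughly as below. The paper's simulator is written in Java; this Python sketch only mirrors its structure, and every constant in it is an assumption.

```python
import queue, random, threading, time

swarm_inbox = queue.Queue()     # chunks handed from the server to the swarm
chunk_queue = queue.Queue()     # chunks on their way into the client's buffer

def server(total_chunks=500):
    """Data source: hands chunk numbers to the swarm in order, at a fixed pace."""
    for seq in range(total_chunks):
        swarm_inbox.put(seq)
        time.sleep(0.01)

def peer(peer_id):
    """A swarm peer: forwards chunks with jitter and occasionally churns (goes quiet)."""
    while True:
        try:
            seq = swarm_inbox.get(timeout=1.0)
        except queue.Empty:
            return                                  # no more data: leave the swarm
        if random.random() < 0.05:
            time.sleep(0.5)                         # simulated churn
        time.sleep(random.uniform(0.0, 0.05))       # reordering jitter
        chunk_queue.put(seq)

def client(duration_s=10.0):
    """Client: drains chunks and reports the average incoming chunk rate."""
    received, start = 0, time.time()
    while time.time() - start < duration_s:
        try:
            chunk_queue.get(timeout=0.2)
            received += 1
        except queue.Empty:
            pass
    print("average incoming rate:", received / duration_s, "chunks/s")

threads = [threading.Thread(target=server)]
threads += [threading.Thread(target=peer, args=(i,)) for i in range(5)]
threads += [threading.Thread(target=client)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```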