Adaptive Scheduling Algorithms for Storm Clusters


Task Scheduling Algorithms in Distributed Systems


1. Round Robin: tasks are handed to the available compute nodes in turn; once every node has received a task, the rotation starts again from the first node.

This algorithm is simple to implement, but it cannot make sensible scheduling decisions based on node load.

2. Random: each task is assigned to a randomly chosen available node.

Simple and cheap, but it offers no guarantee that load stays balanced across nodes.

3. Weighted Round Robin: each node is given a weight, and tasks are distributed to nodes according to those weights.

Because the weights can reflect each node's performance and resources, the scheme can be tuned flexibly to achieve load balancing.

4. Least Task First: the next task goes to the node currently running the fewest tasks.

This implements a least-load-first policy, but it ignores performance differences between nodes.

5. Shortest Job First: based on processing times, the task is assigned to the node that will process it in the shortest time.

This minimizes task execution time, but it cannot adapt to fluctuations in node load.

6. Nearest Load First: the task is assigned to the node whose load is closest to the average load.

This achieves load balancing, but every scheduling decision requires measuring node loads and updating the average, which adds overhead.

7. Dynamic Weighted Round Robin: node weights are adjusted dynamically according to each node's load, balancing the cluster.

This scheme makes flexible decisions based on the actual system load and adapts to load fluctuations.

8. Adaptive Task Scheduling: the assignment policy itself is adjusted dynamically according to the execution state of tasks.

By reacting to task execution in real time, this improves system performance and throughput.

9. Genetic-algorithm-based scheduling: the scheduling problem is modeled as an optimization problem and solved with a genetic algorithm or a similar optimizer.
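The simplest strategies above can be sketched in a few lines. The following Python dispatchers are illustrative toys (node and task names are invented), not production schedulers:

```python
from itertools import cycle

def round_robin(tasks, nodes):
    """Round Robin: hand tasks to nodes in strict rotation."""
    assignment = {n: [] for n in nodes}
    node_cycle = cycle(nodes)
    for task in tasks:
        assignment[next(node_cycle)].append(task)
    return assignment

def least_task_first(tasks, nodes):
    """Least Task First: always give the next task to the node
    currently holding the fewest tasks."""
    assignment = {n: [] for n in nodes}
    for task in tasks:
        target = min(nodes, key=lambda n: len(assignment[n]))
        assignment[target].append(task)
    return assignment
```

Note the difference: Round Robin ignores current load entirely, while Least Task First inspects it on every decision, which is exactly the trade-off described above.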

How to Use StormProxies


How to Use StormProxies (three practical versions)

Outline (version 1): 1. Introduction; 2. What StormProxies is and where it comes from; 3. How to use StormProxies (3.1 create a StormProxies instance; 3.2 configure it; 3.3 start it; 3.4 shut it down); 4. Application scenarios; 5. Summary.

Version 1. Introduction: as the internet has grown, so has the demand for data mining and analysis.

In the big-data era, distributed computing frameworks emerged to meet that demand; Storm is one such real-time big-data processing system.

StormProxies was created to make Storm processing even faster.

This article describes how to use it.

2. Concept and background: StormProxies is described as a proxy application developed by Netflix to accelerate Storm computation.

It can stand in for Nimbus and the Supervisors in a Storm cluster, improving the performance of the whole cluster.

By proxying the communication between Nimbus and the Supervisors, StormProxies reduces network latency and load within the cluster, so Storm processes data faster.

3. Usage. 3.1 Create a StormProxies instance: to use StormProxies, first create an instance, for example with:

```
java -jar stormproxies.jar
```

3.2 Configure StormProxies: after creating the instance, configure it by editing the configuration file under the resources folder inside stormproxies.jar.

The configuration file is named stormproxies.conf. A sample configuration:

```
nimbus.host="localhost"
nimbus.port=6627
supervisor.host="localhost"
supervisor.port=10808
```

Here nimbus.host and nimbus.port are the IP address and port of Nimbus, while supervisor.host and supervisor.port are the IP address and port of the Supervisor.

Storm: Research and Introduction

Bolts emit tuples through an OutputCollector and must call its ack method for every tuple they process, to notify Storm that the tuple has been handled and, in turn, to notify the spout that emitted it. The usual flow is: a bolt processes an input tuple, emits zero or more tuples, and then calls ack to tell Storm it is done with the input. Storm also provides IBasicBolt, which calls ack automatically.

A topology definition is a Thrift structure, and Nimbus is a Thrift service, so you can submit topologies created in any language. The approach above is simply the easiest way to do it from a JVM-based language.

7. Spout:

A spout is the message producer of a Storm topology. In short, a spout reads data from a source and feeds it into the topology. Spouts come in reliable and unreliable flavors: a reliable spout re-emits a tuple (a named list of values) when Storm reports a failure, while an unreliable spout emits each tuple exactly once, regardless of whether it was received.

Running a topology is straightforward. First, package all your code and its dependencies into one jar. Then run a command like this:

storm jar all-my-code.jar backtype.storm.MyTopology arg1 arg2

This runs the main class backtype.storm.MyTopology with arguments arg1 and arg2. That class's main function defines the topology and submits it to Nimbus; storm jar takes care of connecting to Nimbus and uploading the jar.

Topology diagram:

1. The Nimbus master node: the master node usually runs a daemon, Nimbus, which responds to

How Storm Is Used


I. Understanding the Storm big-data processing framework: Storm is a distributed computing framework for real-time stream processing.

It was developed by Twitter and released in 2011.

As an open-source project, Storm is mainly used for real-time workloads such as real-time analytics, real-time computation, and streaming ETL.

II. Basic concepts and characteristics. 1. Topology: the topology is one of Storm's most important concepts.

It represents the structure and flow of an entire computation.

A topology is composed of components: data sources (spouts), processing nodes (bolts), and the connections between them.

2. Spout: a spout pulls data from an external source and sends it to bolts for processing.

A topology usually has one or more spouts feeding in data.

3. Bolt: a bolt is the module that actually processes the data.

Inside a bolt you can implement whatever operations the business requires: filtering, transformation, aggregation, and so on.

4. Stream grouping: the stream grouping determines how tuples are routed from one bolt to the next.

Storm provides several grouping strategies, including shuffle grouping, fields grouping, and global grouping.
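The routing rule behind fields grouping can be illustrated with a small hash-based sketch. The field names and task count below are hypothetical, and Storm's real implementation lives in its Java core; this only shows the idea that the target task is a pure function of the grouping fields:

```python
import zlib

def fields_grouping(tup, group_fields, num_tasks):
    """Route a tuple to a task index derived only from the grouping
    fields, so tuples with equal field values land on the same task."""
    key = "|".join(str(tup[f]) for f in group_fields)
    return zlib.crc32(key.encode("utf-8")) % num_tasks
```

Two tuples that share the same "user-id" always map to the same task index, which is what lets a stateful bolt keep per-user state locally.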

5. Reliability and fault tolerance: Storm is highly reliable and fault tolerant.

It tracks task state, retries on failure, and replicates data, keeping the whole computation stable.

6. Horizontal scaling: Storm scales out easily.

Adding compute nodes and adjusting the topology raises processing capacity seamlessly.

III. Application scenarios. 1. Real-time analytics and computation: Storm suits scenarios that need immediate analysis of large volumes of live data,

such as real-time transaction monitoring in finance or user-behavior analysis on e-commerce platforms.

2. Streaming ETL: Storm can run streaming Extract-Transform-Load pipelines, extracting data from sources, transforming it, loading it into target systems, and updating it in real time.

3. Real-time recommendation: combining Storm with machine-learning algorithms yields real-time recommender systems with fast response.

How Slurm Works


Slurm is an open-source job scheduling system for managing supercomputing clusters.

Its design goal is to allocate compute resources efficiently in a multi-user, multi-job environment, maximizing system utilization and job performance.

At its core, Slurm combines job scheduling with resource management.

A central controller node cooperates with many compute nodes to manage job submission, scheduling, and execution.

Users request resources by submitting a job description file to the controller, specifying the number of nodes, run time, memory, and so on.

The controller assigns jobs to compute nodes according to predefined scheduling policies and the current state of the system's resources.

The scheduling algorithm is the core of how Slurm works.

It uses advanced resource-allocation techniques, such as backfilling and load balancing, to minimize job wait times and load imbalance.

Backfilling lets shorter jobs jump ahead in the wait queue so that system resources are used better.

Load balancing dynamically assigns jobs to the most suitable nodes according to their load, balancing the whole cluster.
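The core backfilling test can be reduced to a single condition: a queued job may jump ahead only if it fits on the currently idle nodes and cannot delay the reservation held by the job at the head of the queue. A minimal sketch with simplified units (this is not Slurm's actual API):

```python
def can_backfill(free_nodes, reservation_start, now, job_nodes, job_runtime):
    """True if a waiting job fits on the currently idle nodes AND is
    guaranteed to finish before the head job's reserved start time."""
    fits = job_nodes <= free_nodes
    finishes_in_time = now + job_runtime <= reservation_start
    return fits and finishes_in_time
```

This is why backfilling depends on users supplying accurate wall-time limits: the shorter and more honest the estimate, the more often a job qualifies.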

Slurm is also highly available and fault tolerant.

It supports redundant controller nodes to avoid outages caused by a single point of failure.

If one controller fails, another takes over its duties, keeping the system running.

In addition, Slurm provides detailed logging and error handling so administrators can monitor and manage the system.

Beyond basic job scheduling and resource management, Slurm offers rich extension points and a plugin mechanism.

Users can extend Slurm with custom plugins, adding new scheduling policies, resource-limit rules, and more.

This lets Slurm adapt to different application scenarios and satisfy complex computational workloads.

As an efficient and flexible job scheduler, Slurm manages supercomputing clusters effectively through sound resource allocation and scheduling.

Its operation rests on job scheduling and resource management: the controller and the compute nodes cooperate to submit, schedule, and execute jobs.

It is also highly available and fault tolerant, and it supports plugin extensions, making it suitable for a wide range of demanding workloads.

Storm (Part 3): How Storm Works


I. Storm's stream distribution (grouping) strategies. 1. Shuffle grouping: tuples in the stream are distributed randomly, so every bolt task receives roughly the same number of tuples;

effectively a round-robin, even split. 2. Fields grouping: tuples are partitioned by field; for example, grouping on "user-id" sends all tuples with the same "user-id" to the same task of the bolt, while different "user-id" values may go to different tasks.

3. All grouping: broadcast; every bolt task receives a copy of every tuple. 4. Global grouping: the entire stream goes to the task with the lowest task id.

5. None grouping: this grouping says the stream does not care how it is partitioned.

It currently behaves exactly like shuffle grouping.

One difference is that Storm will push a bolt that uses none grouping into the same thread as that bolt's subscribers (a planned design, if it proves possible in future Storm versions).

6. Direct grouping: a special strategy in which the sender of a tuple decides which task of the receiver will process it.

Only streams declared as Direct Streams may use this grouping.

Such tuples must be emitted with the emitDirect method.

The receiver can learn the id of the task processing its messages via the TopologyContext (OutputCollector.emit also returns the target task id). 7. Local or shuffle grouping:

if one or more tasks of the target bolt live in the same worker process as the source task, tuples are shuffled only among those in-process tasks.

Choosing and Tuning Docker Swarm Scheduling Strategies


Overview: Docker Swarm is a tool for managing and orchestrating Docker containers.

As containerization has taken off, Swarm plays an important role in achieving high availability and load balancing.

This article looks at how to choose and tune Swarm's scheduling strategies to improve cluster performance and resource utilization.

I. Choosing a strategy. 1.1 Built-in strategies: Docker Swarm offers several basic scheduling strategies: `spread`, `binpack`, and `random`.

`spread` distributes containers evenly across the cluster's nodes, suited to workloads that must stay balanced.

`binpack` packs as many containers as possible onto one node, saving resources while improving deployment efficiency.

`random` picks a random node for each container.

Choosing the right basic strategy for the application scenario matters.

1.2 Custom strategies: beyond the built-ins, Docker Swarm also supports custom scheduling.

To meet specific business needs, you can write a custom scheduler that extends Swarm's scheduling capability.

A custom strategy can base its decisions on container characteristics, load, hardware resources, and other factors, achieving better load balancing and resource optimization.

1.3 Hybrid strategies: in practice you often need to mix strategies to match the business scenario.

For example, combining built-in and custom strategies lets you schedule flexibly by container attributes and requirements, adapting to complex, changing environments.

II. Tuning advice. 2.1 Node selection and labels: node selection and labels make scheduling more effective.

Label nodes according to containers' hardware and network requirements, then match on those labels in the scheduling policy.

Containers then land on nodes that meet their needs, improving performance and resource utilization.

2.2 Load balancing: load balancing is key to cluster performance.

A sound balancing policy keeps containers evenly spread across the cluster, avoiding both overloaded and idle nodes.

Common algorithms such as round robin, weighted round robin, and least connections can be used to distribute containers evenly across nodes.
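The difference between `spread` and `binpack` boils down to a min versus a max over node occupancy. A toy sketch (node names and the capacity value are invented):

```python
def spread(containers_per_node):
    """spread: the next container goes to the least-occupied node."""
    return min(containers_per_node, key=containers_per_node.get)

def binpack(containers_per_node, capacity):
    """binpack: the next container goes to the most-occupied node
    that still has a free slot."""
    with_room = {n: c for n, c in containers_per_node.items() if c < capacity}
    return max(with_room, key=with_room.get)
```

`spread` maximizes resilience (losing one node loses few containers), while `binpack` minimizes the number of nodes in use, which is exactly the trade-off described above.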

Improved Ant Colony Optimization for Storm Task Scheduling


Improved Ant Colony Optimization for Storm Task Scheduling. Wang Lin; Wang Jing. Abstract: Apache Storm's default scheduler assigns tasks evenly across nodes in Round-Robin fashion. Because the default scheduler has no view of the cluster's overall runtime state, resources end up allocated unevenly across nodes. To address this, the authors exploit the strength of ant colony optimization (ACO) on NP-hard problems, combined with the structure of Storm topologies, and propose an improved ACO scheme for Storm task scheduling. Extensive experiments identify the best values for the heuristic factors α and β and the best iteration count for the improved algorithm in Storm scheduling. A Sigmoid function is introduced to make the evaporation factor ρ adjust itself as the program runs. The result lowers per-node CPU load, improves load balance across nodes, and speeds up task scheduling. Experiments show the improved ACO reduces average CPU load by 26% and the standard deviation of CPU usage by 3.5% compared with Storm's default round-robin scheduler, and improves algorithmic efficiency by 21.6%. Journal: Computer Measurement & Control, 2019, 27(8), pp. 236-240. Keywords: Storm; task scheduling; ant colony optimization; load balancing. Authors' affiliation: School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048. Language: Chinese. CLC: TP306.2.

0 Introduction: With the arrival of the big-data era and the rapid growth of the internet, everything in our lives is recorded as data, in quantities beyond imagination.

To analyze and process these data, researchers have proposed many kinds of distributed frameworks, such as Hadoop's MapReduce, but most of them are batch frameworks.

Good as these frameworks are at batch work, they struggle to keep processing latency under five seconds.

Such batch frameworks cannot satisfy the many projects that need results fed back in real time, and this is where the Storm stream-computing framework comes in.

Storm is a free, open-source, distributed, highly fault-tolerant real-time computation system that makes continuous stream computation easy, covering the real-time requirements that Hadoop's batch processing cannot. Storm is widely used for real-time analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. It is simple to deploy and manage, and its advantages for stream processing are unmatched by other frameworks.
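The abstract above mentions a Sigmoid-shaped, self-adjusting evaporation factor ρ but does not give the formula. One plausible form, entirely an assumption for illustration, keeps ρ low early in the run (favoring exploration) and raises it smoothly as iterations progress (favoring convergence):

```python
import math

def adaptive_rho(iteration, total_iters, rho_min=0.1, rho_max=0.9, steepness=10.0):
    """Hypothetical sigmoid schedule for the ACO evaporation factor:
    rho grows smoothly from rho_min toward rho_max over the run."""
    x = steepness * (iteration / total_iters - 0.5)
    return rho_min + (rho_max - rho_min) / (1.0 + math.exp(-x))
```

The constants (0.1, 0.9, steepness 10) are invented; the paper's actual parameterization would have to be taken from the full text.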

Storm Architecture and Principles


1 Storm in brief. 1.1 What Storm is: if one sentence must describe it, it is a distributed, real-time computation system.

In the words of its author, Storm is to real-time computation what Hadoop is to batch processing.

Hadoop (the undisputed king of big-data analytics) focuses on batch processing. That model suffices for many cases, such as building a web index, but other use cases need real-time information from highly dynamic sources. To solve that problem, Nathan Marz created Storm (since incubated by Apache). Storm does not process static data; it processes continuous streams of data.

1.2 Storm versus traditional big data: what sets Storm apart from other big-data solutions is its processing model.

Hadoop is essentially a batch system: data is loaded into the Hadoop file system (HDFS) and distributed across nodes for processing, and when processing finishes the results are written back to HDFS for the originator to use.

Storm instead supports building topologies that transform unbounded streams of data.

Unlike Hadoop jobs, these transformations never stop; they keep processing data as it arrives.

Hadoop's core is written in the Java™ language, but it supports data-analysis applications written in many languages.

Twitter Storm, by contrast, is implemented in Clojure.

Clojure is a VM-based language that runs on the Java virtual machine.

Even though Storm is developed in Clojure, you can write Storm applications in almost any language; all that is needed is an adapter to Storm's architecture.

Adapters already exist for Scala, JRuby, Perl, and PHP, and there is also a structured-query-language adapter for streaming into Storm topologies.

2 The bottleneck of the Hadoop architecture: Hadoop is an excellent architecture for offline big-data processing, built on the divide-and-conquer idea: a large computation is decomposed, handed to many compute nodes to finish separately, and the results are aggregated.

Big Data CDA Exam (Question Set 3)


Notes: answers and explanations appear at the end of the paper. Part 1: single-choice questions, 47 in total; each has exactly one correct answer, and selecting more or fewer options scores zero.

1. [Single choice] A Q-Q plot can be used to test ( ): A) normality B) collinearity C) homoscedasticity D) overfitting

2. [Single choice] In which of the following stages does Flink perform its data transformations? A) channel B) Transformation C) sink D) source

3. [Single choice] Which of the following commands is malformed? A) vim /etc/profile B) source/etc/profile C) hadoop namenode-format D) bin/hadoop fs- cat/hadoopdata/y/txt

4. [Single choice] In MapReduce, which component has no default and exists only if the user specifies it? A) Combiner B) OutputFormat C) Partitioner D) InputFormat

5. [Single choice] Which statement about ZooKeeper's atomicity is correct? A) updates sent by a client are applied in the order they were sent B) an update either completes entirely or fails, never partially C) a message received by one server will be received by all servers D) every server in the ensemble presents the same external view

6. [Single choice] Spark is implemented in which programming language? A) C B) C++ C) JAVA D) Scala

7. [Single choice] The standard deviation of annual salaries for graduates of a program is about $2000. To estimate a 95% confidence interval for the salary with a $100 margin of error (z α/2 = 1.96), how large a sample is needed? ( ) A) 182 B) 98 C) 1537 D) 634

8. [Single choice] A client batch-writes 10 rows to HBase. One HRegionServer hosts 2 regions of the table, A and B; 6 of the rows belong to A and 4 to B. How many RPC requests does the client send to that HRegionServer? A) 10 B) 6 C) 2 D) 1

9. [Single choice] Which keyword can be used to acquire a mutual-exclusion lock on an object? A) transient B) static C) serialize D) synchronized

10. [Single choice] Which of the following statements about Hive operations is incorrect? ( )

Storm: Usage and Integrations


Storm is an open-source, distributed real-time computation system that is fault tolerant, scalable, and low latency.

It is widely used for real-time computation over large data streams.

This article covers how Storm is used and what it integrates with: its basic concepts, its components and their relationships, and how Storm combines with related technologies.

I. Introduction and basic concepts. 1.1 Overview: Storm is an open-source distributed real-time big-data processing framework, developed by Twitter and open-sourced in 2011.

It is a highly reliable and scalable real-time stream processing system that can run complex computations over large data sets.

1.2 Components. Storm consists mainly of the following components: - Nimbus: the master node of the whole Storm cluster, responsible for resource allocation and task scheduling.

- Supervisor: runs on the cluster's worker nodes, starts and monitors worker processes (Workers), and reports status back to Nimbus.

- ZooKeeper: provides coordination services, managing communication between the Nimbus and Supervisor nodes.

- Topology: the model of a Storm computation, a directed acyclic graph built from multiple Spout and Bolt components.

II. Using Storm. 2.1 Creating a Topology requires these steps: - Define the Spouts: a Spout is the data source; it pulls a real-time stream from message queues, log files, and the like, and sends it to downstream Bolts.

You usually implement a custom Spout class to fit your application's needs.

- Define the Bolts: a Bolt is the component that processes and transforms the incoming stream.

Inside a Bolt you can compute, filter, aggregate, and so on, then send the results to other Bolts or to external storage systems.

- Wire Spouts to Bolts: specifying the connections between Spouts and Bolts forms the directed acyclic graph.

2.2 Configuring a Topology: creating a Topology also involves setting the relevant configuration.

Several Common Intelligent Scheduling Algorithms


Intelligent scheduling is a widely applied technique that solves scheduling problems using intelligent methods.

In computer science, a scheduling problem asks how, with limited resources, to order task execution and allocate resources so as to maximize system efficiency and performance.

Intelligent schedulers attack the problem by building mathematical models and optimization algorithms, raising overall system efficiency.

Many such algorithms have been developed and applied across domains.

Several common ones follow.

1. Priority scheduling: a simple and widely used algorithm.

Tasks run in order of priority; higher-priority tasks run first.

Mostly used in real-time systems, it ensures that high-priority tasks respond and complete promptly, improving real-time behavior and reliability.

2. Round-robin scheduling: a cyclic algorithm that gives each task a fixed time slice in turn; when the slice expires, the task moves to the back of the queue and the next task is scheduled.

Suited to multitasking systems, it shares resources fairly and prevents one task from monopolizing the system and starving the others.

3. SJF scheduling: Shortest Job First schedules tasks by execution length.

Assuming execution times are known, the shortest task runs first, reducing average waiting and turnaround time.

It suits workloads whose execution times vary widely, improving responsiveness and efficiency.

4. Fair scheduling: aims to share resources fairly among all tasks, preventing some tasks from grabbing resources first at others' expense.

It weighs priority, execution time, ordering, and other factors to schedule fairly, so every task gets a reasonable chance to run.

5. Genetic-algorithm scheduling: a heuristic inspired by the theory of biological evolution.

It simulates natural evolution, applying genetic operators (crossover, mutation) to candidate solutions and gradually refining the schedule toward the best one found.

With good global search and adaptivity, it suits complex scheduling problems.
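To make the genetic-algorithm idea concrete, here is a toy GA that assigns tasks to nodes while minimizing the makespan (the finish time of the busiest node). All parameters (population size, mutation rate, and so on) are arbitrary, and a real scheduler would add constraints such as communication cost:

```python
import random

def makespan(assignment, times, num_nodes):
    """Completion time of the busiest node under a task->node assignment."""
    loads = [0.0] * num_nodes
    for task, node in enumerate(assignment):
        loads[node] += times[task]
    return max(loads)

def ga_schedule(times, num_nodes, pop_size=30, generations=60, seed=1):
    """Toy GA: a chromosome maps each task index to a node index;
    fitness is the makespan, which we minimize."""
    rnd = random.Random(seed)
    n = len(times)
    population = [[rnd.randrange(num_nodes) for _ in range(n)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda a: makespan(a, times, num_nodes))
        survivors = population[:pop_size // 2]        # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rnd.sample(survivors, 2)
            cut = rnd.randrange(1, n)
            child = p1[:cut] + p2[cut:]               # one-point crossover
            if rnd.random() < 0.2:                    # mutation
                child[rnd.randrange(n)] = rnd.randrange(num_nodes)
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda a: makespan(a, times, num_nodes))
    return best, makespan(best, times, num_nodes)
```

Selection, crossover, and mutation are exactly the operators named above; the fitness function is the only scheduling-specific part.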

Designing Task Scheduling Algorithms for High-Performance Computing


Task scheduling plays an important role in high-performance computing: it determines the execution order of tasks on the cluster, how resources are allocated, and how far overall performance can be optimized.

Designing good task schedulers for HPC therefore matters.

This article discusses design principles, common algorithms, and optimization strategies.

I. Design principles. 1. Balance the load: spread tasks evenly across the cluster so no node is overloaded while resources sit idle elsewhere, improving overall performance.

2. Minimize latency: account for communication and data-transfer times, preferring nearby, low-latency nodes to cut total computation time.

3. Respect resource limits: honor the cluster's limits on processors, memory, bandwidth, and so on, avoiding resource contention and bottlenecks.

4. Adapt dynamically: adjust at run time to the live state of the system, for example node load, task priorities, and resource demands.

II. Common algorithms. 1. Fairness-First: based on the fairness principle, it divides time into slices and schedules tasks by priority and remaining execution time.

It prevents some tasks from occupying resources for long stretches, achieving load balance.

2. Resource-Aware: matches tasks' resource demands against available resources to raise utilization.

It schedules based on each task's needs and each node's availability, avoiding resource bottlenecks.

3. Cost-Aware: optimizes overall performance using cost metrics such as execution time and resource consumption.

It prioritizes tasks that are cheaper to run (shorter, lighter), reducing total computation time and resource use.

4. Predictive Scheduling: analyzes historical task and resource data to assign tasks to the most suitable nodes ahead of time.

It reduces task waiting and resource contention, improving overall computing performance.

Storm: Key Points


Storm is an open-source distributed real-time computation system that can process massive volumes of real-time data, efficiently and reliably, at large scale.

This article walks through Storm's basic concepts, architecture, use cases, and examples.

1. Introduction: Storm, developed and open-sourced by Twitter, provides high-performance stream processing.

It is designed for real-time data streams, with guarantees of low latency and high reliability.

2. Architecture. Storm's architecture contains the following core components. 2.1 Nimbus: the master node of a Storm cluster; it coordinates the cluster's components and assigns and schedules tasks.

Nimbus also monitors cluster state and handles operations such as failure recovery.

2.2 Supervisor: a worker node of the Storm cluster; it runs the actual computation tasks, processing and forwarding data as directed by Nimbus.

Each Supervisor node can run several Worker processes, each responsible for one concrete computation task.

2.3 Topology: a Storm concept representing an actual data-processing pipeline.

A Topology contains two kinds of components, Spouts and Bolts: Spouts handle data input, Bolts process and transform the input data.

2.4 ZooKeeper: a distributed coordination service that Storm uses to manage the cluster's components.

ZooKeeper maintains cluster state and provides primitives such as distributed locks, underpinning Storm's reliability and fault tolerance.

3. Use cases. Storm is widely used in real-time data processing; common scenarios include: 3.1 Real-time analytics: Storm analyzes and processes live data, helping businesses understand and react to change quickly.

For example, Storm can run real-time user-behavior analysis, surfacing preferences and trends as they emerge so the business can adjust accordingly.

Flink vs. Storm: Principles


1. Introduction: Flink and Storm are two of the most popular open-source frameworks in stream computing.

Both provide efficient, scalable stream processing, but they differ in implementation and design philosophy.

This article compares the two in detail to clarify their principles and characteristics.

2. Flink: a distributed stream processing framework aimed at high-throughput, low-latency real-time data processing.

Its core principle is a stream processing model based on event time.

2.1 Processing model: Flink's model is based on a directed acyclic graph (DAG); data is divided into unbounded event streams, and operators are applied to those streams.

A stream processing job is a chain of operators; each operator consumes an input event stream and, after processing, produces an output event stream.

This model supports end-to-end consistency: every operator processes every event of its stream, preserving the completeness and consistency of the data.

2.2 Event-time processing: an important Flink feature is its support for event time.

Event time is when an event actually occurred, independent of when the data was produced or processed.

Flink uses event time to cope with out-of-order and delayed data, and it provides a dedicated mechanism for handling disordered event streams.

Flink handles out-of-order events with watermarks.

A watermark is a logical clock that measures the progress of event time.

Flink uses watermarks to decide when window operations may fire and when window results can be emitted.
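A common watermark generator, in the spirit of Flink's bounded-out-of-orderness strategy, tracks the largest timestamp seen so far and subtracts an allowed lateness. A small Python sketch of the idea (not Flink's actual API):

```python
def watermarks(event_timestamps, max_out_of_orderness):
    """After each event, emit a watermark: the maximum timestamp seen
    so far minus the allowed out-of-orderness bound. Events with
    timestamps at or below the watermark are considered late."""
    max_ts = float("-inf")
    marks = []
    for ts in event_timestamps:
        max_ts = max(max_ts, ts)
        marks.append(max_ts - max_out_of_orderness)
    return marks
```

Note that an out-of-order event never moves the watermark backwards; the watermark only advances, which is what lets windows eventually fire.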

2.3 State management: Flink uses state to hold intermediate results during processing.

State can take the form of key-value pairs, lists, counters, and more.

Flink offers several state-management backends, including in-memory state, filesystem state, and RocksDB state.

Its state management keeps results consistent across failure recovery.

When a task fails or restarts, Flink restores state from a checkpoint and resumes processing the data.

3. Storm: a distributed real-time computation system for processing large-scale real-time data streams.

Its core principle is a stream processing model based on tuples.

3.1 Processing model: a Storm computation is a directed acyclic graph composed of a series of Spouts and Bolts.

Task Scheduling Algorithms in Parallel Computing


In parallel computing, task scheduling plays a crucial role.

The scheduling algorithm decides the order and placement of tasks on multiple processors or compute nodes, directly affecting system performance and efficiency.

This article discusses common scheduling algorithms in parallel computing: static, dynamic, and hybrid scheduling.

1. Static scheduling: the order and placement of tasks are fixed in advance and do not change once execution begins.

The most common static schemes are load-balancing scheduling, minimum-communication-cost scheduling, and minimum-execution-time scheduling.

Load-balancing scheduling keeps the processors roughly evenly loaded, preventing one overloaded processor from dragging down system performance.

Commonly used algorithms include Earliest Available Processor (EAP) and Minimum Task Number (MTN).

Minimum-communication-cost scheduling minimizes the communication overhead between tasks.

It derives order and placement from the tasks' communication relationships, reducing communication latency and bandwidth consumption.

Topology-aware variants such as Shortest Path (SP) and Minimum Degree (MD) scheduling have practical advantages.

Minimum-execution-time scheduling aims to finish the entire parallel computation as quickly as possible.

It typically optimizes on task execution times, preferring tasks with shorter execution times.

Max-Min (MM) and Best-Speedup (BS) are fairly classic algorithms of this family.

2. Dynamic scheduling: tasks are scheduled during execution using live information, with assignments made according to the current system state.

Main representatives include best-task scheduling, minimum-weighted-latency scheduling, and adaptive scheduling.

Best-task scheduling picks the best task to schedule next from live load and execution state, balancing load and maximizing system performance.

Typical examples include Best Static Assignment (BSA), Best Dynamic Assignment (BDA), and the hybrid Best Static-Dynamic Assignment (BSDA).
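As a concrete example from the minimum-execution-time family, the classic Min-Min heuristic (a close relative of the Max-Min algorithm mentioned above) can be sketched for the simplified case of uniform machines, where a task's best machine is always the least-loaded one:

```python
def min_min(task_times, num_machines):
    """Min-Min: repeatedly pick the unscheduled task with the smallest
    earliest completion time and schedule it there. With uniform
    machines this means: shortest remaining task, least-loaded machine."""
    ready = [0.0] * num_machines          # when each machine becomes free
    schedule = {}
    remaining = dict(task_times)          # task name -> execution time
    while remaining:
        m = min(range(num_machines), key=lambda i: ready[i])
        task = min(remaining, key=lambda t: ready[m] + remaining[t])
        schedule[task] = m
        ready[m] += remaining.pop(task)
    return schedule, max(ready)
```

Max-Min differs only in picking the task with the *largest* minimum completion time first, which tends to place long tasks early and fill in around them.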

Adaptive Dynamic Real-Time Scheduling Algorithms in a Framework Setting


I. Introduction: as computing technology advances, machines grow ever more powerful, yet some domains still need better scheduling algorithms to improve efficiency and performance.

In studying adaptive dynamic real-time scheduling within a framework, we analyze the algorithms' core principles and examine where they apply and their strengths and weaknesses, to inform both research and practice.

II. Definition: a framework-based adaptive dynamic real-time scheduling algorithm dynamically assigns and coordinates scheduling tasks within a given computing framework while meeting real-time requirements.

The framework typically comprises front-end compute nodes, intermediate scheduling nodes, and back-end storage nodes, which communicate and coordinate to keep the whole computing system complete and stable.

III. Principle: the core idea of dynamic real-time scheduling is to allocate compute resources dynamically according to task type and load, maximizing resource utilization while preserving real-time guarantees.

Implementation usually involves these steps: 1. Task classification: distinguish tasks by number and load and classify them sensibly, exploiting the scheduler's flexibility over run times and load to achieve the best allocation.

2. Resource assessment: evaluate each compute node's available resources and idle state to dynamically adjust allocation and balance load, making optimal use of system resources.

3. Task assignment: combine the classification and the resource assessment to place tasks, achieving dynamic allocation and load balance while meeting the tasks' real-time and quality requirements.

IV. Applicability, strengths, and weaknesses: this kind of scheduling suits data processing, analytics, and similar applications, with good prospects in large cluster environments.

Its advantages show mainly in these respects: 1. Scheduling flexibility: dynamically adjusting resource use and allocation markedly improves computing efficiency and reliability, and thus overall system throughput.

2. Load balance: partitioning and adjusting the load on compute nodes drives their balance ever higher, so the whole computing system is fully utilized and optimized.

3. Real-time behavior: dynamic scheduling and live monitoring of resources and tasks allow immediate response and adjustment, giving the system higher real-time performance.

How DSTORM Works


DSTORM (Distributed Storm System) is a distributed real-time computation system that lets users run many parallel tasks across multiple compute nodes and handle large volumes of streaming data efficiently.

Its principles cover the following aspects:

1. Distributed architecture: DSTORM distributes computation tasks across many compute nodes, raising computing capacity and scalability.

It uses Apache ZooKeeper or a similar tool to coordinate and manage node state and task assignment.

2. Stream processing: DSTORM is a stream processing platform that handles large data streams in real time.

Unlike traditional batch systems, it processes data directly as it enters the system, without first storing it as batch datasets in local or remote storage.

3. Fault tolerance and recovery: DSTORM provides strong fault-tolerance and recovery features to keep the system highly available and reliable.

When a node fails, DSTORM automatically reassigns its tasks to other healthy nodes, keeping the system running normally.

It also offers snapshot-based recovery, so that after a failure the system can quickly return to its previous state.

4. Modular design: DSTORM splits its functionality into modules that users can select and combine as needed.

This design keeps DSTORM flexible and customizable, able to adapt to different application scenarios and requirements.

In short, DSTORM rests on a distributed architecture, stream processing, fault tolerance and recovery, and modular design, aiming to provide an efficient, reliable, and scalable real-time computing platform for large-scale stream processing applications.

Adaptive Scheduling Algorithms


I. Definition
An adaptive scheduling algorithm is an optimizing scheduler that, based on the system's current load and resource utilization, dynamically adjusts the task schedule to achieve the best use of resources.

II. Steps
An adaptive scheduler typically includes the following steps:
1. Monitor system load and resource utilization.

The algorithm must observe load and utilization in real time so it can base scheduling decisions on the current situation.

2. Adjust the schedule based on the monitoring results.

The algorithm changes the task schedule dynamically according to what it observes.

For example, when system load is high, some tasks can be deferred to avoid overload; when resource utilization is low, some tasks can be brought forward to make full use of idle resources.

3. Evaluate the scheduling result.

The algorithm must assess the effect of its scheduling to decide whether further adjustment is needed.

Evaluation can use live system performance metrics, such as response time and throughput, or periodic performance tests and assessments.
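The adjust-by-load rule in step 2 can be written as a tiny decision function; the threshold values here are invented purely for illustration:

```python
def adaptive_dispatch(load, high_water=0.8, low_water=0.3):
    """Map current system load (0.0-1.0) to a scheduling decision:
    defer work under overload, pull work forward when mostly idle."""
    if load >= high_water:
        return "defer"
    if load <= low_water:
        return "run_eagerly"
    return "run_normally"
```

A real adaptive scheduler would feed the evaluation results from step 3 back into the thresholds themselves, closing the loop.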

III. Common adaptive scheduling algorithms include:
1. Feedback-based scheduling.

This algorithm adjusts the schedule in real time from feedback on system performance metrics, seeking the best resource utilization.

2. Genetic-algorithm-based scheduling.

This algorithm treats scheduling as an optimization problem and solves it with a genetic algorithm to obtain the best schedule.

3. Fuzzy-logic-based scheduling.

This algorithm treats task scheduling as a fuzzy control problem, using fuzzy logic for modeling and solving, and achieves good scheduling results.

Adaptive scheduling applies to many task-scheduling settings, from computer-system task scheduling to production scheduling.

How Slurm Works


Slurm is an open-source cluster manager used to schedule and manage jobs on large compute clusters.

It is widely used in scientific computing, engineering simulation, data analysis, and other fields.

Slurm's design goal is to provide efficient, scalable, and reliable job scheduling that maximizes cluster utilization and performance.

Its operation has four main parts: job submission, job scheduling, resource management, and job execution.

Job submission is the process by which users put jobs into the Slurm cluster.

Users can submit jobs via command-line tools or scripts, describing the job's requirements by specifying its resource needs and the command to execute.

On submission, Slurm assigns the job a unique job ID and places it in the job queue to await scheduling.

Job scheduling is Slurm's core function.

It decides which jobs are assigned to which compute nodes, based on the jobs' resource requirements and the cluster's available resources.

Slurm uses efficient scheduling algorithms to balance load, maximize throughput, and optimize job completion times.

It weighs job priority, resource needs, node state, and other scheduling policies, allocating resources as fairly as possible while satisfying users' needs.
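Slurm's multifactor priority plugin combines such factors as a weighted sum. A simplified sketch follows; the weights and the assumption that each factor is pre-normalized to [0, 1] are invented for illustration, not Slurm's defaults:

```python
def job_priority(age, fair_share, job_size,
                 w_age=1000, w_fair_share=2000, w_size=500):
    """Weighted-sum job priority in the spirit of Slurm's multifactor
    plugin; each factor is assumed pre-normalized to [0.0, 1.0]."""
    return w_age * age + w_fair_share * fair_share + w_size * job_size
```

Raising a weight makes that factor dominate: with a large fair-share weight, a long-queued job from an under-served account outranks a fresh job from a heavy user.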

Resource management is another important part of Slurm.

It monitors and manages the cluster's compute nodes, memory, storage, and other resources.

Slurm periodically checks node state and resource usage and stores this information in a central database.

That information helps the job scheduler make more accurate decisions and ensures that jobs get the resources they need when they run.

Job execution is Slurm's final stage.

Once a job is scheduled onto compute nodes, Slurm automatically launches the job's processes and passes in its input data, environment variables, and so on.

It also monitors the job's running state, along with the resources it uses and its execution time.

When the job completes, Slurm returns the results to the user and releases the resources it occupied.

Beyond these four areas, Slurm provides features such as job priority control, job arrays, resource limits, and fault-tolerance mechanisms.

It also supports multiple scheduling policies and plugins, which users can customize and extend to their needs.

Slurm is a powerful, flexible, and extensible cluster manager; by scheduling and managing jobs sensibly, it makes full use of cluster resources and improves computing efficiency and performance.


Adaptive Online Scheduling in Storm

Leonardo Aniello (aniello@dis.uniroma1.it), Roberto Baldoni (baldoni@dis.uniroma1.it), Leonardo Querzoni (querzoni@dis.uniroma1.it)
Research Center on Cyber Intelligence and Information Security and Department of Computer, Control, and Management Engineering Antonio Ruberti, Sapienza University of Rome

ABSTRACT
Today we are witnessing a dramatic shift toward a data-driven economy, where the ability to efficiently and timely analyze huge amounts of data marks the difference between industrial success stories and catastrophic failures. In this scenario Storm, an open source distributed realtime computation system, represents a disruptive technology that is quickly gaining the favor of big players like Twitter and Groupon. A Storm application is modeled as a topology, i.e. a graph where nodes are operators and edges represent data flows among such operators. A key aspect in tuning Storm performance lies in the strategy used to deploy a topology, i.e. how Storm schedules the execution of each topology component on the available computing infrastructure. In this paper we propose two advanced generic schedulers for Storm that provide improved performance for a wide range of application topologies. The first scheduler works offline by analyzing the topology structure and adapting the deployment to it; the second scheduler enhances the previous approach by continuously monitoring system performance and rescheduling the deployment at run-time to improve overall performance. Experimental results show that these algorithms can produce schedules that achieve significantly better performances compared to those produced by Storm's default scheduler.

Categories and Subject Descriptors
D.4.7 [Organization and Design]: Distributed systems

Keywords
distributed event processing, CEP, scheduling, Storm

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. DEBS'13, June 29 - July 3, 2013, Arlington, Texas, USA. Copyright 2013 ACM 978-1-4503-1758-0/13/06 ...$15.00.

1. INTRODUCTION
In the last few years we are witnessing a huge growth in information production. IBM claims that "every day, we create 2.5 quintillion bytes of data - so much that 90% of the data in the world today has been created in the last two years alone" [15]. Domo, a business intelligence company, has recently reported some figures [4] that give a perspective on the sheer amount of data that is injected on the internet every minute, and its heterogeneity as well: 3125 photos are added on Flickr, 34722 likes are expressed on Facebook, more than 100000 tweets are done on Twitter, etc. This apparently unrelenting growth is a consequence of several factors including the pervasiveness of social networks, the smartphone market success, the shift toward an "Internet of things" and the consequent widespread deployment of sensor networks. This phenomenon, known with the popular name of Big Data, is expected to bring a strong growth in economy with a direct impact on available job positions; Gartner says that the business behind Big Data will globally create 4.4 million IT jobs by 2015 [1]. Big Data applications are typically characterized by the three Vs: large volumes (up to petabytes) at a high velocity (intense data streams that must be analyzed in quasi real-time) with extreme variety (mix of structured and unstructured data). Classic data mining and analysis solutions quickly showed their limits when faced with such loads. Big Data applications, therefore, imposed a paradigm shift in the area of data management that brought us several novel approaches to the problem represented mostly by NoSQL databases, batch data analysis tools based on Map-Reduce, and complex event processing engines. This latter
approach focussed on representing data as a real-timeflow of events proved to be particularly advantageous for all those appli-cations where data is continuously produced and must be analyzed on theflplex event processing engines are used to apply complex detection and aggregation rules on intense data streams and output,as a result,new events.A crucial performance index in this case is represented by the average time needed for an event to be fully analyzed,as this represents a goodfigure of how much the application is quick to react to incoming events.Storm[2]is a complex event processing engine that,thanks to its distributed architecture,is able to perform analytics on high throughput data streams.Thanks to these character-istics,Storm is rapidly conquering reputation among large companies like Twitter,Groupon or The Weather Chan-nel.A Storm cluster can run topologies(Storm’s jargon for an application)made up of several processing components. Components of a topology can be either spouts,that act as event producers,or bolts that implement the processing logic.Events emitted by a spout constitute a stream that can be transformed by passing through one or multiple bolts where its events are processed.Therefore,a topology repre-sents a graph of stream transformations.When a topology is submitted to Storm it schedules its execution in the clus-ter,i.e.,it assigns the execution of each spout and each bolt to one of the nodes forming the cluster.Similarly to batch data analysis tools like Hadoop,Storm performance are not generally limited by computing power or available memory as new nodes can be always added to a cluster.In order to leverage the available resources Storm is equip-ped with a default scheduler that evenly distributes the exe-cution of topology components on the available nodes using a round-robin strategy.This simple approach is effective in avoiding the appearance of computing bottlenecks due to resource overusage caused by skews in the load 
distribution. However,it does not take into account the cost of moving events through network links to let them traverse the correct sequence of bolts defined in the topology.This latter aspect heavily impacts the average event processing latency,i.e., how much time is needed for an event injected by a spout to traverse the topology and be thus fully processed,a funda-mental metric used to evaluate the responsiveness of event processing applications to incoming stimuli.In this paper we target the design and implementation of two general purpose Storm schedulers that,like the default one,could be leveraged by applications to improve their per-formance.Differently from the default Storm scheduler,the ones introduced in this paper aim at reducing the average event processing latency by adapting the schedule to specific application characteristics1.The rationale behind both schedulers is the following:(i) identify potential hot edges of the topology,i.e.,edges tra-versed by a large number of events,and(ii)map an hot edge to a fast inter-process channel and not to a slow network link,for example by scheduling the execution of the bolts connected by the hot edge on a same cluster node.This rationale must take into account that processing resources have limited capabilities that must not be exceeded to avoid an undesired explosion of the processing time experienced at each topology component.Such pragmatic strategy has the advantage of being practically workable and to provide better performances.The two general purpose schedulers introduced in this pa-per differ on the way they identify hot edges in the topol-ogy.Thefirst scheduler,named offline,simply analyzes the topology graph and identifies possible sets of bolts to be scheduled on a same node by looking at how they are con-nected.This approach is simple and has no overhead on the application with respect to the default Storm scheduler(but for negligible increased processing times when the schedule is calculated),but 
it is oblivious with respect to the appli-cation workload:it could decide to schedule two bolts on a same node even if the number of events that will traverse the edge connecting them will be very small.The second sched-uler,named online,takes this approach one step further by monitoring the effectiveness of the schedule at runtime and re-adapting it for a performance improvement when it seesfit.Monitoring is performed at runtime on the sched-uled topology by measuring the amount of traffic among its components.Whenever there is the possibility for a new schedule to reduce the inter-node network traffic,the sched-uled is calculated and transparently applied on the cluster 1The source code of the two schedulers can be found at http://www.dis.uniroma1.it/~midlab/software/ storm-adaptive-schedulers.zip preserving the application correctness(i.e.no events are discarded during this operation).The online scheduler thus provides adaptation to the workload at the cost of a more complex architecture.We have tested the performance of our general purpose schedulers by implementing them on Storm and by comparing the schedules they produce with those produced by the default Storm scheduler.The tests have been conducted both on a synthetic workload and on a real workload publicly released for the DEBS2013Grand Challenge.The results show how the proposed schedulers consistently delivers better performance with respect to the default one promoting them as a viable alternative to more expensive ad-hoc schedulers.In particular tests performed on the real workload show a20%to30%performance im-provement on event processing latency for the online sched-uler with respect to the default one proving the effectiveness of the proposed topology scheduling approach.The rest of this paper is organized as follows:Section2 introduces the reader to Storm and its default scheduler; Section3describes the offline and online schedulers;Sec-tion4reports the experiments done on the two schedulers. 
Finally,Section5discusses the related work and Section6 concludes the paper.2.STORMStorm is an open source distributed realtime computa-tion system[2].It provides an abstraction for implementing event-based elaborations over a cluster of physical nodes. The elaborations consist in queries that are continuously evaluated on the events that are supplied as input.A compu-tation in Storm is represented by a topology,that is a graph where nodes are operators that encapsulate processing logic and edges model dataflows among operators.In the Storm’s jargon,such a node is called a component.The unit of in-formation that is exchanged among components is referred to as a tuple,that is a named list of values.There are two types of components:(i)spouts,that model event sources and usually wrap the actual generators of input events so as to provide a common mechanism to feed data into a topol-ogy,and(ii)bolts,that encapsulate the specific processing logic such asfiltering,transforming and correlating tuples. The communication patterns among components are rep-resented by streams,unbounded sequences of tuples that are emitted by spouts or bolts and consumed by bolts.Each bolt can subscribe to many distinct streams in order to re-ceive and consume their tuples.Both bolts and spouts can emit tuples on different streams as needed.Spouts cannot subscribe to any stream,since they are meant to produce tu-ples ers can implement the queries to be computed by leveraging the topology abstraction.They put into the spouts the logic to wrap external event sources,then compile the computation in a network of interconnected bolts tak-ing care of properly handling the output of the computation. Such a computation is then submitted to Storm which is in charge of deploying and running it on a cluster of machines. 
An important feature of Storm consists in its capability to scale out a topology to meet the requirements on the load to sustain and on fault tolerance.There can be several in-stances of a component,called tasks.The number of tasks for a certain component isfixed by the user when it config-ures the topology.If two components communicate through one or more streams,also their tasks do.The way such a communication takes place is driven by the grouping chosenby the user.Let A be a bolt that emits tuples on a stream that is consumed by another bolt B.When a task of A emits a new tuple,the destination task of B is determined on the basis of a specific grouping strategy.Storm provides several kinds of groupings•shuffle grouping:the target task is chosen randomly, ensuring that each task of the destination bolt receives an equal number of tuples•fields grouping:the target task is decided on the basis of the content of the tuple to emit;for example,if the target bolt is a stateful operator that analyzes events regarding customers,the grouping can be based on a customer-idfield of the emitted tuple so that all the events about a specific customer are always sent to the same task,which is consequently enabled to properly handle the state of such customer•all grouping:each tuple is sent to all the tasks of the target bolt;this grouping can be useful for implement-ing fault tolerance mechanisms•global grouping:all the tuples are sent to a designated task of the target bolt•direct grouping:the source task is in charge of decid-ing the target task;this grouping is different fromfields grouping because in the latter such decision is trans-parently made by Storm on the basis of a specific set offields of the tuple,while in the former such decision is completely up to the developer2.1Worker Nodes and WorkersFrom an architectural point of view,a Storm cluster con-sists of a set of physicalmachines called worker nodes whose structure is depicted in Figure1.Once deployed,a topol-ogy 
consists of a set of threads running inside a set of Java processes that are distributed over the worker nodes.A Java process running the threads of a topology is called a worker(not to be confused with a worker node:a worker is a Java process,a worker node is a machine of the Storm cluster).Each worker running on a worker node is launched and monitored by a supervisor executing on such worker node.Monitoring is needed to handle a worker failure. Each worker node is configured with a limited number of slots,that is the maximum number of workers in execution on that worker node.A thread of a topology is called execu-tor.All the executors executed by a worker belong to the same topology.All the executors of a topology are run by a specific number of workers,fixed by the developer of the topology itself.An executor carries out the logic of a set of tasks of the same component,tasks of distinct components live inside distinct executors.Also the number of executors for each component is decided when the topology is devel-oped,with the constraint that the number of executors has to be lower than or equal to the number of tasks,otherwise there would be executors without tasks to execute.Figure1 details what a worker node hosts.Requiring two distinct levels,one for tasks and one for ex-ecutors,is dictated by a requirement on dynamic rebalanc-ing that consists in giving the possibility at runtime to scale out a topology on a larger number of processes(workers) and threads(executors).Changing at runtime the numberG(V,T),wT opologyFigure1:A Storm cluster with the Nimbus pro-cess controlling several Worker nodes.Each worker node hosts a supervisor and a number of workers (one per slot),each running a set of executors.The dashed red line shows components of the proposed solution:the offline scheduler only adds the plugin to the Nimbus process,while the online scheduler also adds the performance log and monitoring pro-cesses.of tasks for a given component would complicate the 
reconfiguration of the communication patterns among tasks, in particular in the case of fields grouping, where each task would have to repartition its input stream, and possibly its state, according to the new configuration. Introducing the level of executors allows the number of tasks to be kept fixed. The limitation of this design choice is that the topology developer has to overestimate the number of tasks in order to account for possible future rebalances. In this paper we don't focus on this kind of rebalance and, to keep things simple, we always consider topologies where the number of tasks for any component is equal to the number of executors, that is, each executor includes exactly one task.

2.2 Nimbus

Nimbus is a single Java process that is in charge of accepting a new topology, deploying it over worker nodes and monitoring its execution over time in order to properly handle any failure. Thus, Nimbus plays the role of master with respect to the supervisors of workers, receiving from them notifications of worker failures. Nimbus can run on any of the worker nodes, or on a distinct machine. The coordination between Nimbus and the supervisors is carried out through a ZooKeeper cluster [17]. The states of Nimbus and the supervisors are stored into ZooKeeper; thus, in case of failure, they can be restarted without any data loss.
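The reason for keeping the number of tasks fixed can be made concrete with a small sketch. This is a hypothetical Python model, not Storm code: it shows that a fields grouping keyed over a fixed task count survives a rebalance that only remaps tasks to executors.

```python
# Hypothetical sketch (not Storm code) of the two-level parallelism:
# a fields grouping hashes a tuple's key over a FIXED number of tasks,
# while tasks are mapped onto a variable number of executors.

def target_task(key, num_tasks):
    # Fields grouping: same key -> same task, as long as num_tasks is fixed.
    return hash(key) % num_tasks

def assign_tasks_to_executors(num_tasks, num_executors):
    # Illustrative round-robin mapping of tasks onto executors.
    return {t: t % num_executors for t in range(num_tasks)}

num_tasks = 8                                     # fixed at topology submission
before = assign_tasks_to_executors(num_tasks, 2)  # 2 executors
after = assign_tasks_to_executors(num_tasks, 4)   # rebalanced to 4 executors

# The task chosen for a given key is unaffected by the rebalance, so the
# per-task input streams (and any per-task state) need no repartitioning:
# only the task -> executor mapping changed.
t = target_task("customer-42", num_tasks)
```

Had the rebalance changed `num_tasks` instead, every key could hash to a different task, forcing the stream and state repartitioning the paper wants to avoid.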
The software component of Nimbus in charge of deciding how to deploy a topology is called the scheduler. On the basis of the topology configuration, the scheduler has to perform the deployment in two consecutive phases: (1) assign executors to workers, (2) assign workers to slots.

2.3 Default and Custom Scheduler

The Storm default scheduler is called EvenScheduler. It enforces a simple round-robin strategy with the aim of producing an even allocation. In the first phase it iterates through the topology executors, grouped by component, and allocates them to the configured number of workers in a round-robin fashion. In the second phase the workers are evenly assigned to worker nodes, according to the slot availability of each worker node. This scheduling policy produces workers that are assigned an almost equal number of executors, and distributes such workers over the worker nodes at disposal so that each node runs an almost equal number of workers.

Storm allows the implementation of custom schedulers in order to accommodate users' specific needs. In the general case, as shown in Figure 1, the custom scheduler takes as input the structure of the topology (provided by Nimbus), represented as a weighted graph ⟨G(V,T), w⟩, and a set of user-defined additional parameters (α, β, ...). The custom scheduler computes a deployment plan which defines both the assignment of executors to workers and the allocation of workers to slots. The Storm API provides the IScheduler interface to plug in a custom scheduler, which has a single method schedule that requires two parameters. The first is an object containing the definitions of all the topologies currently running, including topology-specific parameters provided by whoever submitted the topology, which makes it possible to pass the previously mentioned user-defined parameters. The second parameter is an object representing the physical cluster, with all the required information about worker nodes, slots and current allocations. A Storm installation can have a single
scheduler, which is executed periodically or when a new topology is submitted. Currently, Storm doesn't provide any means to manage the movement of stateful components; it's up to the developer to implement application-specific mechanisms to save any state to storage and properly reload it once a rescheduling is completed. The next section introduces the Storm custom scheduler we designed and implemented.

3. ADAPTIVE SCHEDULING

The key idea of the scheduling algorithms we propose is to take into account the communication patterns among executors, trying to place in the same slot executors that communicate with each other at high frequency. In topologies where the computation latency is dominated by tuple transfer time, limiting the number of tuples that have to be sent and received through the network can contribute to improving performance. Indeed, while sending a tuple to an executor located in the same slot simply consists in passing a pointer, delivering a tuple to an executor running inside another slot or deployed on a different worker node involves much larger overheads. We developed two distinct algorithms based on this idea.
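Both algorithms pursue the same objective. As a hypothetical helper (the names and data structures are invented for illustration, not part of Storm), the quantity they try to reduce, the total tuple rate crossing node boundaries, can be computed from an allocation as:

```python
def inter_node_traffic(rates, executor_to_worker, worker_to_node):
    """Total tuple rate (tuples/s) exchanged between executors placed on
    distinct worker nodes; illustrative helper, not part of Storm."""
    total = 0.0
    for (src, dst), rate in rates.items():
        # A tuple crosses the network only if source and destination
        # executors run inside workers hosted on different nodes.
        if worker_to_node[executor_to_worker[src]] != worker_to_node[executor_to_worker[dst]]:
            total += rate
    return total

# Two executors of a bolt B consume from A1: B1 shares a node with A1, B2 does not.
rates = {("A1", "B1"): 1000.0, ("A1", "B2"): 10.0}
executor_to_worker = {"A1": 0, "B1": 0, "B2": 1}
worker_to_node = {0: "node-A", 1: "node-B"}
traffic = inter_node_traffic(rates, executor_to_worker, worker_to_node)  # 10.0
```

Only the low-rate A1→B2 stream crosses nodes here; a scheduler that co-located A1 and B1 elsewhere would instead pay 1000 tuples/s over the network.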
One looks at how components are interconnected within the topology to determine which executors should be assigned to the same slot. The other relies on monitoring at runtime the traffic of tuples exchanged among executors. The former is less demanding in terms of required infrastructure and in general produces lower quality schedules, while the latter needs to monitor the cluster at runtime in order to provide more precise and effective solutions, so it entails more overhead at runtime for gathering performance data and carrying out re-schedulings.

In this work we consider a topology structured as a directed acyclic graph [8] where an upper bound can be set on the length of the path that any input tuple follows from the emitting spout to the bolt that concludes its processing. This means that we don't take into account topologies containing cycles, for example back propagation streams in online machine learning algorithms [10].

A Storm cluster includes a set N = {n_i} of worker nodes (i = 1...N), each one configured with S_i available slots (i = 1...N). In a Storm cluster, a set T = {t_i} of topologies is deployed (i = 1...T), each one configured to run on at most W_i workers (i = 1...T). A topology t_i consists of a set C_i of interconnected components (i = 1...T). Each component c_j (j = 1...C_i) is configured with a certain level of parallelism by specifying two parameters: (i) the number of executors, and (ii) the number of tasks. A component is replicated on many tasks that are executed by a certain number of executors. A topology t_i consists of E_i executors e_{i,j} (i = 1...T, j = 1...E_i). The actual number of workers required for a topology t_i is min(W_i, E_i). The total number of workers required to run all the topologies is ∑_{i=1}^{T} min(W_i, E_i). A schedule is possible if enough slots are available, that is ∑_{i=1}^{N} S_i ≥ ∑_{i=1}^{T} min(W_i, E_i).

Both algorithms can be tuned using a parameter α that controls the balancing of the number of executors assigned per slot. In particular, α affects the maximum number M of executors that can be placed in a single slot. The minimum value of M for a topology t_i is ⌈E_i/W_i⌉, which means that each slot roughly contains the same number of executors. The maximum value of M corresponds to the assignment where all the slots contain one executor, except for one slot that contains all the other executors, so its value is E_i − W_i + 1. Allowed values for α are in the [0, 1] range and set the value of M within its minimum and maximum: M(α) = ⌈E_i/W_i⌉ + α(E_i − W_i + 1 − ⌈E_i/W_i⌉).

3.1 Topology-based Scheduling

The offline scheduler examines the structure of the topology in order to determine the most convenient slots where to place executors. Such a scheduling is executed before the topology is started, so neither the load nor the traffic are taken into account, and consequently no constraint about memory or CPU is considered. Not even the stream groupings configured for inter-component communications are inspected, because the way they impact inter-node and inter-slot traffic can only be observed at runtime. Not taking into account all these points obviously limits the effectiveness of the offline scheduler, but on the other hand this enables a very simple implementation that still provides good performance, as will be shown in Section 4.

A partial order among the components of a topology can be derived on the basis of the stream configuration. If a component c_i emits tuples on a stream that is consumed by another component c_j, then we have c_i < c_j. If c_i < c_j and c_j < c_k hold, then c_i < c_k holds by transitivity. Such order is partial because there can be pairs of components c_i and c_j such that neither c_i < c_j nor c_i > c_j holds. Since we deal with acyclic topologies, we can always determine a linearization φ of the components according to such partial order. If c_i < c_j holds, then c_i appears in φ before c_j. If neither c_i < c_j nor c_i > c_j holds, then they can appear in φ in any order. The first element of φ is a spout of the topology. The heuristic employed by the offline
scheduler entails iterating over φ and, for each component c_i, placing its executors in the slots that already contain executors of the components that directly emit tuples towards c_i. Finally, the slots are assigned to worker nodes in a round-robin fashion.

A possible problem of this approach is that not all the required workers may get used because, at each step of the algorithm, the slots that are empty get ignored since they don't contain any executor. The solution employed by the offline scheduler consists in forcing the use of empty slots at a certain point during the iteration of the components in φ. When to start considering empty slots is controlled by a tuning parameter β, whose value lies in the [0, 1] range: during the assignment of executors for the i-th component, the scheduler is forced to use empty slots if i > ⌈β · C_i⌉. For example, if traffic is likely to be more intense among upstream components, then β should be set large enough that empty slots only get used once the upstream components are already assigned.

3.2 Traffic-based Scheduling

The online scheduler produces assignments that reduce inter-node and inter-slot traffic on the basis of the communication patterns among executors observed at runtime. The goal of the online scheduler is to allocate executors to nodes so as to satisfy the constraints on (i) the number of workers each topology has to run on (min(W_i, E_i)), (ii) the number of slots available on each worker node (S_i) and (iii) the computational power available on each node (see Section 3.2.1), and to minimize the inter-node traffic (see Section 3.2.1).
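The offline heuristic above can be sketched in Python. This is an illustrative reconstruction under simplifying assumptions: slot-selection ties are broken by slot load, and the α cap on executors per slot is omitted.

```python
from graphlib import TopologicalSorter  # Python 3.9+

def offline_schedule(upstream, executors, num_slots, beta=0.5):
    """Sketch of the topology-based (offline) heuristic (illustrative only).

    upstream:  {component: set of components that emit tuples to it}
    executors: {component: number of executors}
    Returns a list of slots, each a list of (component, executor_index).
    """
    # Linearization φ of the partial order: upstream components appear first.
    phi = list(TopologicalSorter(upstream).static_order())
    slots = [[] for _ in range(num_slots)]
    for idx, comp in enumerate(phi):
        for e in range(executors[comp]):
            # Prefer slots already hosting executors of comp's direct upstreams.
            candidates = [s for s in slots
                          if any(c in upstream[comp] for c, _ in s)]
            # β controls when empty slots are forced into use.
            if idx + 1 > beta * len(phi):
                candidates.extend(s for s in slots if not s)
            target = min(candidates or slots, key=len)  # keep slots balanced
            target.append((comp, e))
    return slots

plan = offline_schedule(
    upstream={"spout": set(), "boltA": {"spout"}, "boltB": {"boltA"}},
    executors={"spout": 2, "boltA": 2, "boltB": 2},
    num_slots=2,
    beta=0.0,
)
```

With β = 0 the empty slots are usable from the start, so each slot ends up with one full spout→boltA→boltB chain, the co-location the heuristic aims for.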
Such scheduling has to be performed at runtime so as to adapt the allocation to the evolution of the load in the cluster. Figure 1 shows the integration of our online scheduler within the Storm architecture. Notice that the performance log depicted in the picture is just a stable buffer space where data produced by the monitoring components running at each slot can be placed before it gets consumed by the custom scheduler on Nimbus. The custom scheduler can be run periodically to retrieve this data, emptying the log and checking whether a new, more efficient schedule can be deployed.

3.2.1 Measurements

When scheduling the executors, taking into account the computational power of the nodes is needed to avoid any overload, which in turn requires some measurements. We use the CPU utilization to measure both the load a node is subjected to (due to worker processes) and the load generated by an executor. Using the same metric allows us to make predictions on the load generated by a set of executors on a particular node. We also want to deal with clusters comprising heterogeneous nodes (different computational power), so we need to take into account the speed of the CPU of a node in order to make proper predictions. For example, if an executor is taking 10% CPU utilization on a 1 GHz CPU, then migrating such executor to a node with a 2 GHz CPU would generate about 5% CPU utilization. For this reason, we measure the load in Hz. In the previous example, the executor generates a load of 100 MHz (10% of 1 GHz). We use L_i to denote the load the node n_i is subjected to due to the executors. We use L_{i,j} to denote the load generated by executor e_{i,j}. We use CPU_i to denote the speed of the CPU of node n_i (number of cores multiplied by single core speed). CPU measurements have been implemented by leveraging the standard Java API for retrieving at runtime the CPU time of a specific thread (the getThreadCpuTime(threadID) method of the ThreadMXBean class). With these measures we can monitor the status of the cluster and detect any imbalance due to
node CPU overloads. We can state that if a node n_i exhibits a CPU utilization trend such that L_i ≥ B_i for more than X_i seconds, then we trigger a rescheduling. We refer to B_i as the capacity (measured in Hz) and to X_i as the time window (measured in seconds) of node n_i. One of the goals of a scheduling is the satisfaction of such constraints on node load. We don't consider the load due to IO-bound operations such as reads/writes to disk or network communications with external systems like DBMSs. Event-based systems usually work with data in memory in order to avoid any possible bottleneck, so as to allow events to flow along the operator network as fast as possible. This doesn't mean that IO operations are forbidden, but they are better dealt with by employing techniques like executing them on a batch of events instead of on a single event, and caching data in main memory to speed them up.

In order to minimize the inter-node traffic, the volumes of tuples exchanged among executors have to be measured.
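The Hz-based load metric and the overload trigger can be sketched as follows. The helpers are hypothetical; B_i (capacity) and X_i (time window) would be per-node configuration values, and a fixed sampling period is assumed.

```python
def load_hz(cpu_utilization, cpu_speed_hz):
    # An executor using 10% of a 1 GHz CPU generates a load of 100 MHz.
    return cpu_utilization * cpu_speed_hz

def predicted_utilization(load_in_hz, target_cpu_speed_hz):
    # The same 100 MHz load would cost about 5% on a 2 GHz node.
    return load_in_hz / target_cpu_speed_hz

def should_reschedule(load_samples_hz, capacity_hz, window_s, sample_period_s=1.0):
    """Trigger a rescheduling if L_i >= B_i held for more than X_i seconds
    (assuming one load sample every sample_period_s seconds)."""
    needed = int(window_s / sample_period_s)
    recent = load_samples_hz[-needed:]
    return len(recent) >= needed and all(s >= capacity_hz for s in recent)

l = load_hz(0.10, 1e9)             # the 100 MHz example from the text
u = predicted_utilization(l, 2e9)  # ~5% after migrating to a 2 GHz node
```

Measuring load in Hz rather than in percent is what makes the migration prediction a single division, even across heterogeneous nodes.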
We use R_{i,j,k} to denote the rate of the tuples sent by executor e_{i,j} to executor e_{i,k}, expressed in tuples per second (i = 1...T; j,k = 1...E_i; j ≠ k). Summing up the traffic of events exchanged among executors deployed on distinct nodes, we can measure the total inter-node traffic. Once every P seconds, we can compute a new scheduling, compare the inter-node traffic such scheduling would generate with the current one and, in case a reduction of more than R% is found, trigger a rescheduling.

3.2.2 Formulation

Given the set of nodes N = {n_i} (i = 1...N), the set of workers W = {w_{i,j}} (i = 1...T, j = 1...min(E_i, W_i)) and the set of executors E = {e_{i,j}} (i = 1...T, j = 1...E_i), the goal of load balancing is to assign each executor to a slot of a node. The scheduling is aimed at computing (i) an allocation A1: E → W, which maps executors to workers, and (ii) an allocation A2: W → N, which maps workers to nodes.

The allocation has to satisfy the constraints on node capacity

∀k = 1...N: ∑_{i=1...T; j=1...E_i : A2(A1(e_{i,j})) = n_k} L_{i,j} ≤ B_k   (1)

as well as the constraints on the maximum number of workers each topology can run on

∀i = 1...T: |{w ∈ W : A1(e_{i,j}) = w, j = 1...E_i}| = min(E_i, W_i)   (2)

The objective of the allocation is to minimize the inter-node traffic

min ∑_{i=1...T; j,k=1...E_i : A2(A1(e_{i,j})) ≠ A2(A1(e_{i,k}))} R_{i,j,k}   (3)

3.2.3 Algorithm

The problem formulated in Section 3.2.2 is known to be NP-complete [9, 18]. The requirement of carrying out the rebalance at runtime implies the usage of a quick mechanism to find a new allocation, which in turn means that some heuristic has to be employed. The following algorithm is based on a simple greedy heuristic that places executors on nodes so as to minimize inter-node traffic and avoid load imbalances among all the nodes. It consists of two consecutive phases. In the first phase, the executors of each topology are partitioned among the number of workers the topology has been
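The excerpt cuts off here, so the following Python sketch is only a plausible reconstruction of such a greedy first phase, not the paper's exact algorithm: it co-locates the executor pairs exchanging tuples at the highest rates, subject to a per-worker cap as in M(α).

```python
def greedy_partition(executors, rates, num_workers, max_per_worker):
    """Greedy sketch (hypothetical reconstruction): assign the most chatty
    executor pairs to the same worker to cut inter-worker traffic.

    rates:          {(src, dst): tuples per second}
    max_per_worker: cap on executors per worker, cf. M(alpha)
    """
    worker_of = {}
    workers = [[] for _ in range(num_workers)]

    def place(e, w):
        workers[w].append(e)
        worker_of[e] = w

    def least_loaded():
        return min(range(num_workers), key=lambda i: len(workers[i]))

    # Consider executor pairs in decreasing order of exchanged tuple rate.
    for (src, dst), _ in sorted(rates.items(), key=lambda kv: -kv[1]):
        if src not in worker_of and dst not in worker_of:
            w = least_loaded()
            place(src, w)
            if len(workers[w]) < max_per_worker:
                place(dst, w)  # co-locate the pair when capacity allows
        else:
            for a, b in ((src, dst), (dst, src)):
                if a in worker_of and b not in worker_of:
                    w = worker_of[a]
                    if len(workers[w]) < max_per_worker:
                        place(b, w)
    # Executors with no measured traffic fall back to the least loaded worker.
    for e in executors:
        if e not in worker_of:
            place(e, least_loaded())
    return workers

parts = greedy_partition(
    ["a", "b", "c", "d"],
    {("a", "b"): 100.0, ("c", "d"): 90.0, ("b", "c"): 1.0},
    num_workers=2, max_per_worker=2)
```

Here the high-rate pairs (a,b) and (c,d) each share a worker, so only the 1 tuple/s b→c stream crosses worker boundaries.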
