Flink Study Notes (7): Multi-Stream Transformation Operators for Splitting and Merging Streams
1. Split and Select (a stream that has already been split with split cannot be split a second time)
DataStream --> SplitStream: splits one DataStream into two or more DataStreams according to some characteristic of the data.
SplitStream --> DataStream: retrieves one or more DataStreams from a SplitStream.
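For reference, here is a minimal, self-contained sketch of split/select; the object name and sample data are invented for illustration, and note that split/select has been deprecated in recent Flink releases in favor of side outputs:

import org.apache.flink.streaming.api.scala._

object SplitSelectSketch {
  case class Reading(id: String, temperature: Double)

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val readings = env.fromElements(Reading("s1", 35.8), Reading("s2", 6.7), Reading("s3", 15.4))

    // split tags every element; the tags drive the later select() calls
    val tagged: SplitStream[Reading] = readings.split(r =>
      if (r.temperature > 30) Seq("high") else Seq("low")
    )

    tagged.select("high").printToErr("high")        // only elements tagged "high"
    tagged.select("high", "low").printToErr("all")  // several tags at once
    // note: a stream produced by select() cannot be split() again

    env.execute("split-select sketch")
  }
}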
2. Connect and CoMap / CoFlatMap
DataStream, DataStream --> ConnectedStreams: connects two data streams while preserving their types. After the two streams are connected they are only placed into one shared stream; internally each keeps its own data and form, nothing changes, and the two streams remain independent of each other.
ConnectedStreams --> DataStream: operates on a ConnectedStreams; works like map and flatMap, applying a separate map / flatMap function to each of the two streams inside the ConnectedStreams.
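A minimal sketch of connect plus CoMap (object name and sample data invented for illustration); the two inputs deliberately have different element types and are unified into a common output type by the two map functions:

import org.apache.flink.streaming.api.scala._

object ConnectCoMapSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // two streams with different element types
    val warnings: DataStream[(String, Double)] = env.fromElements(("sensor_1", 35.8), ("sensor_10", 38.1))
    val normals: DataStream[String] = env.fromElements("sensor_6", "sensor_7")

    // connect keeps both types; the two inner streams stay independent
    val connected: ConnectedStreams[(String, Double), String] = warnings.connect(normals)

    // CoMap: one map function per inner stream, unified into a common output type
    val unified: DataStream[(String, String)] = connected.map(
      warn => (warn._1, "high temperature"),
      id   => (id, "normal")
    )

    unified.printToErr("connect sketch")
    env.execute("connect-comap sketch")
  }
}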
3. Union
DataStream --> DataStream: unions two or more DataStreams, producing a new DataStream that contains all elements of the input DataStreams.
Note the differences between Connect and Union:
1. For Union the input streams must have the same type; Connect allows different types, which can then be unified in the subsequent coMap.
2. Connect can only operate on two streams, while Union can operate on more than two (see the sketch after this list).
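A minimal sketch of union (object name and sample data invented for illustration), merging three streams of identical type in a single call:

import org.apache.flink.streaming.api.scala._

object UnionSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // union requires all inputs to have exactly the same type
    val highs  = env.fromElements(("sensor_1", 35.8), ("sensor_10", 38.1))
    val lows   = env.fromElements(("sensor_7", 6.7))
    val others = env.fromElements(("sensor_6", 15.4))

    // union can merge more than two streams at once
    val all: DataStream[(String, Double)] = highs.union(lows, others)

    all.printToErr("union sketch")
    env.execute("union sketch")
  }
}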
Complete example code (runnable as-is; the sample input data is shown in the comments):
package com.wyh.streamingApi.Transform
import org.apache.flink.api.common.functions.ReduceFunction
import org.apache.flink.streaming.api.scala._
// Case class for temperature sensor readings
case class SensorReading(id: String, timestamp: Long, temperature: Double)
object TransformTest {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)
/**
* sensor_1,1547718199,35.80018327300259
* sensor_6,1547718201,15.402984393403084
* sensor_7,1547718202,6.720945201171228
* sensor_10,1547718205,38.1010676048934444
* sensor_1,1547718199,35.1
* sensor_1,1547718199,31.0
* sensor_1,1547718199,39
*/
val streamFromFile = env.readTextFile("F:\\flink-study\\wyhFlinkSD\\data\\sensor.txt")
// Basic transformation and rolling aggregation operators =======================================================
/**
* map keyBy sum
*/
val dataStream: DataStream[SensorReading] = streamFromFile.map(data => {
val dataArray = data.split(",")
SensorReading(dataArray(0).trim, dataArray(1).trim.toLong, dataArray(2).trim.toDouble)
})
// dataStream.keyBy(0).sum(2).printToErr("keyBy test")
// Scala is strongly typed: only keyBy(_.id) lets the key type of the returned KeyedStream be fixed (String here)
val aggStream: KeyedStream[SensorReading, String] = dataStream.keyBy(_.id)
val stream1: DataStream[SensorReading] = aggStream.sum("temperature")
// stream1.printToErr("strongly typed keyBy")
/**
* reduce
*
* Requirement: output the sensor's latest temperature plus 10, with a timestamp equal to the previous record's timestamp plus 1
*/
aggStream.reduce(new ReduceFunction[SensorReading] {
override def reduce(t: SensorReading, t1: SensorReading): SensorReading = {
SensorReading(t.id, t.timestamp + 1, t1.temperature + 10)
}
}) //.printToErr("reduce test")
// Multi-stream transformation operators ========================================================================
/**
* Splitting a stream
* split / select
* DataStream --> SplitStream --> DataStream
*
* Requirement: split the sensor data by temperature into high / low / health streams (30 and 10 degrees as boundaries)
*/
val splitStream = dataStream.split(data => {
//Tag each element here; the tags are used to sort the elements out later
if (data.temperature > 30) {
Seq("high")
} else if (data.temperature < 10) {
Seq("low")
} else {
Seq("health")
}
})
//Sort the elements out by tag
val highStream = splitStream.select("high")
val lowStream = splitStream.select("low")
val healthStream = splitStream.select("health")
//Several tags can be passed to select the corresponding sub-streams together
val allStream = splitStream.select("high", "low")
// highStream.printToErr("high")
// lowStream.printToErr("low")
// allStream.printToErr("all")
// healthStream.printToErr("healthStream")
/**
* Merging note: Connect can only combine two streams, but it is flexible, as the data structures of the two streams may differ
* Connect CoMap/CoFlatMap
*
* DataStream --> ConnectedStreams --> DataStream
*/
val warningStream = highStream.map(data => (data.id, data.temperature))
val connectedStream = warningStream.connect(lowStream)
val coMapDataStream = connectedStream.map(
warningData => (warningData._1, warningData._2, "high temperature alert!!"),
lowData => (lowData.id, lowData.temperature, "low temperature alert===")
)
// coMapDataStream.printToErr("connected streams")
/**
* Note on merging multiple streams: all of them must have exactly the same data type
*
* Union: DataStream --> DataStream, with no intermediate conversion type
*
*/
val highS = highStream.map(h => (h.id, h.timestamp, h.temperature, "high temperature alert!!"))
val lowS = lowStream.map(l => (l.id, l.timestamp, l.temperature, "low temperature alert==="))
val healthS = healthStream.map(l => (l.id, l.timestamp, l.temperature, "healthy"))
val unionStream = highS.union(lowS).union(healthS)
unionStream.printToErr("union merge")
env.execute("transform test")
}
}