Merging and Splitting Columns of a Spark DataFrame

Version: Spark 2.3.0
When processing data with Spark SQL, you may need to split a single column into multiple columns, or merge multiple columns into one.

This post records a few methods I have come up with so far for merging and splitting DataFrame columns.

1 Merging DataFrame columns
For example, given the data below, we want to merge the three columns into one, separated by ",":
+----+---+-----------+
|name|age| phone|
+----+---+-----------+
|Ming| 20|155********|
|hong| 19|132********|
| zhi| 21|155********|
+----+---+-----------+
1.1 Rewriting with map
With this approach, you map over the DataFrame, convert each Row to a Seq with toSeq, then concatenate the values with Seq's foldLeft and strip the leading separator with substring, as shown below:
// Method 1: rewrite with map
val separator = ","
df.map(_.toSeq.foldLeft("")(_ + separator + _).substring(1)).show()
/**
* +-------------------+
* | value|
* +-------------------+
* |Ming,20,155********|
* |hong,19,132********|
* | zhi,21,155********|
* +-------------------+
*/
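As an aside, the foldLeft-plus-substring combination exists only to trim the leading separator; Seq's mkString produces the same result in one call. A minimal sketch of that variant (it assumes spark.implicits._ is in scope, as in the complete code below):
// Equivalent variant: mkString joins the values with the separator,
// so there is no leading separator to strip afterwards.
df.map(_.toSeq.mkString(separator)).show()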
1.2 Using the built-in function concat_ws
Multiple columns can also be merged with Spark SQL's built-in function concat_ws().
// Method 2: use the built-in function concat_ws
import org.apache.spark.sql.functions._
df.select(concat_ws(separator, $"name", $"age", $"phone").cast(StringType).as("value")).show()
/**
* +-------------------+
* | value|
* +-------------------+
* |Ming,20,155********|
* |hong,19,132********|
* | zhi,21,155********|
* +-------------------+
*/
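One behavior worth knowing: concat_ws skips null inputs instead of nulling out the whole result. A quick sketch with made-up data (again assuming spark.implicits._ is in scope):
// concat_ws drops null values rather than returning null for the row,
// so the merged string simply omits the missing field.
val withNull = Seq((null.asInstanceOf[String], "20")).toDF("a", "b")
withNull.select(concat_ws(separator, $"a", $"b").as("value")).show() // "20", not ",20"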
1.3 Using a custom UDF
Write a custom UDF to merge multiple columns:
// Method 3: use a custom UDF
// Define the UDF
def mergeCols(row: Row): String = {
row.toSeq.foldLeft("")(_ + separator + _).substring(1)
}
val mergeColsUDF = udf(mergeCols _)
df.select(mergeColsUDF(struct($"name", $"age", $"phone")).as("value")).show()
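If you do not want to list the columns by hand, the same UDF can be applied to the whole row; a sketch, assuming df is the DataFrame above:
// Pack every column into the struct instead of naming them one by one
// (col comes from org.apache.spark.sql.functions).
df.select(mergeColsUDF(struct(df.columns.map(col): _*)).as("value")).show()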
Complete code:
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.StringType
/**
* Created by shirukai on 2018/9/12
* Merge DataFrame columns
*/
object MergeColsTest {
def main(args: Array[String]): Unit = {
val spark = SparkSession
.builder()
.appName(this.getClass.getSimpleName)
.master("local")
.getOrCreate()
// Create a DataFrame from in-memory data
import spark.implicits._
val df = Seq(("Ming", 20, 155********L), ("hong", 19, 132********L), ("zhi", 21, 155********L)).toDF("name", "age", "phone")
df.show()
/**
* +----+---+-----------+
* |name|age| phone|
* +----+---+-----------+
* |Ming| 20|155********|
* |hong| 19|132********|
* | zhi| 21|155********|
* +----+---+-----------+
*/
// Method 1: rewrite with map
val separator = ","
df.map(_.toSeq.foldLeft("")(_ + separator + _).substring(1)).show()
/**
* +-------------------+
* | value|
* +-------------------+
* |Ming,20,155********|
* |hong,19,132********|
* | zhi,21,155********|
* +-------------------+
*/
// Method 2: use the built-in function concat_ws
import org.apache.spark.sql.functions._
df.select(concat_ws(separator, $"name", $"age", $"phone").cast(StringType).as("value")).show()
/**
* +-------------------+
* | value|
* +-------------------+
* |Ming,20,155********|
* |hong,19,132********|
* | zhi,21,155********|
* +-------------------+
*/
// Method 3: use a custom UDF
// Define the UDF
def mergeCols(row: Row): String = {
row.toSeq.foldLeft("")(_ + separator + _).substring(1)
}
val mergeColsUDF = udf(mergeCols _)
df.select(mergeColsUDF(struct($"name", $"age", $"phone")).as("value")).show()
/**
* +-------------------+
* | value|
* +-------------------+
* |Ming,20,155********|
* |hong,19,132********|
* | zhi,21,155********|
* +-------------------+
*/
}
}
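The concat_ws approach also works in plain SQL, if that fits your pipeline better; a sketch (the view name here is arbitrary):
// Same merge expressed as a SQL statement over a temp view.
df.createOrReplaceTempView("people")
spark.sql("SELECT concat_ws(',', name, age, phone) AS value FROM people").show()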
2 Splitting DataFrame columns
Above we merged multiple DataFrame columns into a single column, as shown below. Sometimes we also need to go the other way: split a single column into multiple columns according to some rule.

Below are a few ways to split one column into several.

+-------------------+
| value|
+-------------------+
|Ming,20,155********|
|hong,19,132********|
| zhi,21,155********|
+-------------------+
2.1 Using the built-in function split, then adding columns in a loop
This method first splits the single column with the built-in split function, then iterates over the results, fetching each element with getItem(index) and appending it as a new column with withColumn, as shown below:
// Method 1: use the built-in split function, then add columns in a loop
val separator = ","
lazy val first = df.first()
val numAttrs = first.toString().split(separator).length
val attrs = Array.tabulate(numAttrs)(n => "col_" + n)
// Split the value column by the separator into a new splitCols column
var newDF = df.withColumn("splitCols", split($"value", separator))
attrs.zipWithIndex.foreach(x => {
newDF = newDF.withColumn(x._1, $"splitCols".getItem(x._2))
})
newDF.show()
/**
* +-------------------+--------------------+-----+-----+-----------+
* | value| splitCols|col_0|col_1| col_2|
* +-------------------+--------------------+-----+-----+-----------+
* |Ming,20,155********|[Ming, 20, 155522...| Ming| 20|155********|
* |hong,19,132********|[hong, 19, 132879...| hong| 19|132********|
* | zhi,21,155********|[zhi, 21, 1555221...| zhi| 21|155********|
* +-------------------+--------------------+-----+-----+-----------+
*/
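The intermediate splitCols column is only scaffolding; if you do not need it, it can be dropped once the new columns exist. A one-line sketch:
// Drop the helper array column after the split columns are in place.
newDF.drop("splitCols").show()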
2.2 Using a UDF to create the columns, then combining them
// Method 2: use a UDF to create the columns, then combine them
val attributes: Array[Attribute] = {
val numAttrs = first.toString().split(separator).length
// Generate the attributes
Array.tabulate(numAttrs)(i => NumericAttribute.defaultAttr.withName("value" + "_" + i))
}
// Build the new columns
val fieldCols = attributes.zipWithIndex.map(x => {
val assembleFunc = udf {
str: String =>
str.split(separator)(x._2)
}
assembleFunc(df("value").cast(StringType)).as(x._1.name.get, x._1.toMetadata())
})
// Combine with the original data
df.select(col("*") +: fieldCols: _*).show()
/**
* +-------------------+-------+-------+-----------+
* | value|value_0|value_1| value_2|
* +-------------------+-------+-------+-----------+
* |Ming,20,155********| Ming| 20|155********|
* |hong,19,132********| hong| 19|132********|
* | zhi,21,155********| zhi| 21|155********|
* +-------------------+-------+-------+-----------+
*/
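For completeness, method 1 can also be written as a single select instead of repeated withColumn calls, which keeps the query plan flatter; a sketch, reusing separator and attrs from above:
// Build all split columns in one select; getItem(i) pulls the i-th
// element out of the array produced by split.
val splitCol = split($"value", separator)
df.select(($"value" +: attrs.zipWithIndex.map { case (name, i) => splitCol.getItem(i).as(name) }): _*).show()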
Complete code:
import org.apache.spark.ml.attribute.{Attribute, NumericAttribute}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.StringType
/**
* Created by shirukai on 2018/9/12
* Split columns
*/
object SplitColTest {
def main(args: Array[String]): Unit = {
val spark = SparkSession
.builder()
.appName(this.getClass.getSimpleName)
.master("local")
.getOrCreate()
// Create a DataFrame from in-memory data
import spark.implicits._
val df = Seq("Ming,20,155********", "hong,19,132********", "zhi,21,155********") .toDF("value")
df.show()
/**
* +-------------------+
* | value|
* +-------------------+
* |Ming,20,155********|
* |hong,19,132********|
* | zhi,21,155********|
* +-------------------+
*/
import org.apache.spark.sql.functions._
// Method 1: use the built-in split function, then add columns in a loop
val separator = ","
lazy val first = df.first()
val numAttrs = first.toString().split(separator).length
val attrs = Array.tabulate(numAttrs)(n => "col_" + n)
// Split the value column by the separator into a new splitCols column
var newDF = df.withColumn("splitCols", split($"value", separator))
attrs.zipWithIndex.foreach(x => {
newDF = newDF.withColumn(x._1, $"splitCols".getItem(x._2))
})
newDF.show()
/**
* +-------------------+--------------------+-----+-----+-----------+
* | value| splitCols|col_0|col_1| col_2|
* +-------------------+--------------------+-----+-----+-----------+
* |Ming,20,155********|[Ming, 20, 155522...| Ming| 20|155********|
* |hong,19,132********|[hong, 19, 132879...| hong| 19|132********|
* | zhi,21,155********|[zhi, 21, 1555221...| zhi| 21|155********|
* +-------------------+--------------------+-----+-----+-----------+
*/
// Method 2: use a UDF to create the columns, then combine them
val attributes: Array[Attribute] = {
val numAttrs = first.toString().split(separator).length
// Generate the attributes
Array.tabulate(numAttrs)(i => NumericAttribute.defaultAttr.withName("value" + "_" + i))
}
// Build the new columns
val fieldCols = attributes.zipWithIndex.map(x => {
val assembleFunc = udf {
str: String =>
str.split(separator)(x._2)
}
assembleFunc(df("value").cast(StringType)).as(x._1.name.get, x._1.toMetadata())
})
// Combine with the original data
df.select(col("*") +: fieldCols: _*).show()
/**
* +-------------------+-------+-------+-----------+
* | value|value_0|value_1| value_2|
* +-------------------+-------+-------+-----------+
* |Ming,20,155********| Ming| 20|155********|
* |hong,19,132********| hong| 19|132********|
* | zhi,21,155********| zhi| 21|155********|
* +-------------------+-------+-------+-----------+
*/
}
}
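As a closing note, the NumericAttribute metadata built in method 2 is not cosmetic: it travels with each generated column and can be read back from the schema, which is what Spark ML pipelines consume. A quick sketch, assuming the fieldCols from the example above are still in scope:
// Inspect the metadata attached to each generated column.
val result = df.select(col("*") +: fieldCols: _*)
result.schema.fields.foreach(f => println(f.name + " -> " + f.metadata))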
