Salting technique
Join salt technique
Salting is a technique for distributing data more evenly across partitions, mitigating data skew and improving the efficiency of operations such as joins in distributed computing frameworks like Apache Spark.
Example
- Reference: https://medium.com/@nikaljeajay36/understanding-salting-in-spark-a-practical-guide-bf30f4525f64
import org.apache.spark.sql.functions.{expr, col, lit, concat, floor, rand}
import spark.implicits._ // needed for toDF (spark is the SparkSession in spark-shell)
// Generate data for cities
val cityData = Seq.fill(1000)("Aurangabad") ++ Seq.fill(100)("Mumbai") ++ Seq.fill(10)("Chennai") ++ Seq("Hyderabad")
// Create DataFrame for cities
val cityDF = cityData.toDF("city")
Here Aurangabad is a skewed key: it accounts for 1000 of the 1111 rows.
// Salt the skewed side: append a random integer in [0, 18] to each city
val saltedCityDF = cityDF.
withColumn("Left_city_salt_key", concat(col("city"), lit("_"), floor(rand(123456) * 19)))
// Generate data for states
val stateData = Seq(("Aurangabad", "Maharashtra"),
("Mumbai", "Maharashtra"),
("Chennai", "TamilNadu"),
("Hyderabad", "Telangana"))
// Create DataFrame for states
val stateDF = stateData.toDF("cityname", "state")
For the dimension table, replicate each row once per salt value with explode; this range (0-19) must cover every salt the left side can produce (0-18 here):
val saltedStateDF = stateDF.
withColumn("salt_key", expr("explode(array(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19))")).
withColumn("right_city_salt_key", concat(col("cityname"), lit("_"), col("salt_key")))
// Inner join the salted city DataFrame with the salted state DataFrame on the salted keys
val joinedDF = saltedCityDF.join(saltedStateDF, saltedCityDF("Left_city_salt_key") === saltedStateDF("right_city_salt_key"), "inner")
scala> joinedDF.count
res32: Long = 1111
Verify: the salted join returns the same row count as the direct, unsalted join.
scala> val originDF = cityDF.join(stateDF, cityDF("city") === stateDF("cityname"), "inner")
originDF: org.apache.spark.sql.DataFrame = [city: string, cityname: string ... 1 more field]
scala> originDF.count
res33: Long = 1111
Salting is a powerful technique for improving the performance of distributed operations such as joins in Apache Spark. By adding randomness to the join key, it spreads data more evenly across partitions, reducing skew and improving resource utilization, which yields better performance and scalability when dealing with skewed datasets.
Basic salting principle
A random number drawn from a fixed range is appended to the keys of the big table (the one with skewed data), and each row of the small, non-skewed table is duplicated once for every value in that same range, so every salted key on the big side still finds its match.
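This principle can be sketched with plain Scala collections (no Spark; the data, salt count, and seed below are made up for illustration):

```scala
import scala.util.Random

val numSalts = 20
val rng = new Random(42)

// Skewed "big table": one key dominates
val big = Seq.fill(1000)("Aurangabad") ++ Seq.fill(10)("Chennai")

// Small "dimension table" with no skew
val small = Seq("Aurangabad" -> "Maharashtra", "Chennai" -> "TamilNadu")

// Big side: append one random salt per row
val saltedBig = big.map(city => s"${city}_${rng.nextInt(numSalts)}")

// Small side: duplicate each row once per salt value in the same range
val saltedSmall = for {
  (city, state) <- small
  salt          <- 0 until numSalts
} yield s"${city}_$salt" -> state

// Join on the salted key: every big-side row still finds its match
val lookup = saltedSmall.toMap
val joined = saltedBig.flatMap(k => lookup.get(k).map(k -> _))

assert(joined.size == big.size) // same row count as the unsalted join
```

The salted keys spread the 1000 Aurangabad rows over up to 20 distinct keys, which is exactly what lets Spark spread them over multiple partitions.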
Agg salt technique
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.IntegerType

// n = number of salt buckets; "key"/"value" stand in for the real group-by/aggregate columns
df.withColumn("salt", (rand() * n).cast(IntegerType))
.groupBy("salt", "key") // stage 1: partial aggregation per salt bucket
.agg(sum("value").as("partial_sum"))
.groupBy("key") // stage 2: merge the partial results per key
.agg(sum("partial_sum").as("total"))
// Only valid for re-aggregatable functions (sum, count, min, max); avg must be decomposed into sum and count.
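The two-stage aggregation can likewise be illustrated with plain Scala collections (no Spark; the sample data, salt count, and seed are made up): stage 1 sums within each (salt, key) bucket, stage 2 merges the partial sums per key.

```scala
import scala.util.Random

val n    = 4
val rng  = new Random(7)
val data = Seq(("a", 1), ("a", 2), ("a", 3), ("b", 10))

// Tag each row with a random salt bucket, mirroring withColumn("salt", ...)
val salted = data.map { case (k, v) => ((rng.nextInt(n), k), v) }

// Stage 1: partial sums per (salt, key) bucket -- no single bucket has to
// hold all rows of a hot key
val stage1 = salted.groupBy(_._1).toSeq.map { case ((salt, k), rows) =>
  (k, rows.map(_._2).sum)
}

// Stage 2: merge the partial sums per key
val stage2 = stage1.groupBy(_._1).map { case (k, rows) =>
  (k, rows.map(_._2).sum)
}

assert(stage2 == Map("a" -> 6, "b" -> 10)) // same result as a direct groupBy + sum
```

Whatever salts the rows happen to receive, the merged totals equal the single-stage aggregation, which is why the trick is safe for sum-like aggregates.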