pyspark.RDD.randomSplit

RDD.randomSplit(weights, seed=None)

Randomly splits this RDD with the provided weights.

Parameters
  • weights (list) – weights for the splits; they will be normalized if they don’t sum to 1

  • seed (int, optional) – random seed

Returns

a list of the split RDDs, one per weight

>>> rdd = sc.parallelize(range(500), 1)
>>> rdd1, rdd2 = rdd.randomSplit([2, 3], 17)
>>> len(rdd1.collect() + rdd2.collect())
500
>>> 150 < rdd1.count() < 250
True
>>> 250 < rdd2.count() < 350
True