pyspark.RDD.mapPartitions
RDD.mapPartitions(f: Callable[[Iterable[T]], Iterable[U]], preservesPartitioning: bool = False) → pyspark.rdd.RDD[U]
Return a new RDD by applying a function to each partition of this RDD.

New in version 0.7.0.

Parameters
f : function
    a function to run on each partition of the RDD
preservesPartitioning : bool, optional, default False
    indicates whether the input function preserves the partitioner, which should be False unless this is a pair RDD and the input function doesn't modify the keys
Returns
RDD
    a new RDD by applying a function to each partition of this RDD

See also
RDD.map
RDD.mapPartitionsWithIndex

Examples

>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> def f(iterator): yield sum(iterator)
...
>>> rdd.mapPartitions(f).collect()
[3, 7]
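The doctest above assumes an active SparkContext bound to `sc`. To make the per-partition semantics concrete without a Spark cluster, here is a minimal pure-Python sketch: each partition is an iterator, and `f` is called once per partition rather than once per element. The helper name `map_partitions` is hypothetical and for illustration only; it is not part of the PySpark API.

```python
from typing import Callable, Iterable, Iterator, List, TypeVar

T = TypeVar("T")
U = TypeVar("U")

def map_partitions(
    partitions: List[List[T]],
    f: Callable[[Iterable[T]], Iterable[U]],
) -> List[U]:
    """Hypothetical local analogue of RDD.mapPartitions."""
    out: List[U] = []
    for part in partitions:
        # f is invoked once per partition and may yield any number of values;
        # the per-partition outputs are concatenated, as Spark does across tasks.
        out.extend(f(iter(part)))
    return out

# [1, 2, 3, 4] split into 2 partitions, as in sc.parallelize([1, 2, 3, 4], 2)
partitions = [[1, 2], [3, 4]]

def f(iterator: Iterator[int]) -> Iterator[int]:
    yield sum(iterator)  # one output value per partition

print(map_partitions(partitions, f))  # → [3, 7]
```

Because `f` receives a whole partition, this pattern is useful for amortizing per-partition setup (opening a connection, loading a model) that would be wasteful to repeat per element with `map`.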