Correct Answer: filter()
Explanation: filter() is a transformation operation in PySpark. It creates a new RDD by selecting the elements of an existing RDD that satisfy a given predicate. Like all transformations, it is lazily evaluated: the filtering is not computed until an action (such as collect() or count()) is called on the resulting RDD. Other transformation operations in PySpark include map(), flatMap(), union(), distinct(), and groupByKey().