I'm running Spark on EKS with Fargate, and the memory limit of the Spark executor pods never matches what I expect. When I set:
spark.executor.memory = 6g
spark.executor.memoryOverhead = 0.10
spark.memory.offHeap.size = 0
the memory limit of the Spark executor pod actually comes out to 8601Mi. I don't understand why. I expected (6144 * 0.10) + 6144 = 6,758.4 MiB. What else is occupying the remaining 1,842.6 MiB?
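For context, this is how I arrive at my expected figure; a minimal sketch, assuming the pod limit should just be spark.executor.memory plus the overhead (with the 384 MiB floor Spark documents for the overhead):

```python
# Hypothetical check of the expected executor pod memory, in MiB.
# Assumes pod limit = executor memory + max(overhead factor * executor memory, 384 MiB),
# which is my reading of how Spark sizes the executor container.
EXECUTOR_MEMORY_MIB = 6 * 1024   # spark.executor.memory = 6g
OVERHEAD_FACTOR = 0.10           # spark.executor.memoryOverhead = 0.10
MIN_OVERHEAD_MIB = 384           # documented minimum overhead

overhead = max(EXECUTOR_MEMORY_MIB * OVERHEAD_FACTOR, MIN_OVERHEAD_MIB)
expected = EXECUTOR_MEMORY_MIB + overhead
observed = 8601

print(f"expected    ~{expected:.1f} MiB")            # 6758.4
print(f"observed     {observed} MiB")
print(f"unexplained  {observed - expected:.1f} MiB")  # 1842.6
```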
If I change the spark-submit parameters to:
spark.executor.pyspark.memory = 1g
spark.executor.memory = 6g
spark.executor.memoryOverhead = 0.10
spark.memory.offHeap.size = 0
then the memory limit of the Spark executor pod is 9625Mi, and again 9625 - ((6144 * 0.10) + 6144 + 1024) = 1,842.6 MiB is unaccounted for.
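The same sanity check with the PySpark memory included; I'm assuming here that spark.executor.pyspark.memory is simply stacked on top of heap plus overhead:

```python
# Same hypothetical calculation with spark.executor.pyspark.memory = 1g added,
# assuming PySpark memory is added on top of executor memory + overhead.
EXECUTOR_MEMORY_MIB = 6 * 1024   # spark.executor.memory = 6g
PYSPARK_MEMORY_MIB = 1 * 1024    # spark.executor.pyspark.memory = 1g
OVERHEAD_FACTOR = 0.10           # spark.executor.memoryOverhead = 0.10

expected = EXECUTOR_MEMORY_MIB * (1 + OVERHEAD_FACTOR) + PYSPARK_MEMORY_MIB
observed = 9625

print(f"expected    ~{expected:.1f} MiB")             # 7782.4
print(f"unexplained  {observed - expected:.1f} MiB")  # again 1842.6
```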
The larger I set the executor memory, the bigger this gap becomes.