More than 30 child processes for Spark worker

htop on a Spark worker node shows more than 30 child processes for org.apache.spark.deploy.worker.Worker. Should there be that many, or should they be limited?

Hey @kibri,

Are those processes or threads? Could you post a screenshot of what you see? If I remember your config correctly, you set 1 worker instance per node.

Try changing the SPARK_WORKER_CORES value and watch what happens to that count. Start with a conservative value, like half the number of CPU cores.
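If you're on a standalone deployment, that goes in conf/spark-env.sh on each worker node. A minimal sketch (the 3 here is just a placeholder; use half your actual core count):

# conf/spark-env.sh on each worker node (standalone mode)
# Caps the cores this worker advertises to the master, which in turn
# limits how many task slots executors on this node get.
export SPARK_WORKER_CORES=3

Restart the worker daemon afterwards so the new value takes effect.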

You’re right, that’s 30 threads, not processes. I didn’t realize that htop shows threads by default.
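For anyone else who hits this: pressing H in htop (or Setup > Display options > "Hide userland process threads") collapses threads into their parent process. You can also count them from the command line, substituting the Worker JVM's PID for the placeholder:

# NLWP = number of lightweight processes (threads) in the process
ps -o pid,nlwp,cmd -p <worker-pid>

# Or list every thread individually
ps -Lf -p <worker-pid>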

I started looking into the worker's performance after gathering statistics with collectl when a worker died. collectl shows 23K minor page faults per second at the time the RPC timeout happened, and the resident set (VmRSS) is at 25370M, about 25 GB.

Date Time       PID  User     S VmSize  VmLck  VmRSS VmData  VmStk  VmExe  VmLib  VmSwp MajF MinF Command
02/09 22:34:46  6381  root     S 33679M      0 25370M 33600M   136K     4K 17596K      0    0  23K /usr/lib/jvm/java-8-oracle/jre/bin/java

MinF: minor page faults per second
VmRSS: resident set size, i.e. the part of the process that is actually in physical RAM
VmStk: virtual memory used for the stack
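For the record, I gathered those columns with roughly this collectl invocation (flags from memory, so check them against the collectl docs):

# -sZ = detailed per-process data; -i 1:1 samples everything, including
# process data, every second; --procfilt p6381 restricts output to that PID
collectl -sZ -i 1:1 --procfilt p6381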

A minor page fault is one the kernel can satisfy without going to disk, e.g. by mapping in a page that is already resident (in the page cache, or shared with another process) or by backing a freshly touched allocation. Even so, it seems strange that the worker JVM would be generating that many of them.
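To double-check the fault rate independently of collectl, the kernel's own counters can be watched directly (assuming sysstat is installed; 6381 is the PID from the capture above):

# Per-second minflt/s and majflt/s for the worker JVM
pidstat -r -p 6381 1

# Cumulative minor/major fault counts since process start
ps -o pid,min_flt,maj_flt -p 6381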

I'm re-running with SPARK_WORKER_CORES=3 on 6-core machines. It's running longer than last time, which is a good sign.

OK, the workflow gets further with SPARK_WORKER_CORES=3: 22 projects went through JOIN, but then it failed in DOCUMENT, which dies after 15 minutes with an RPC disassociation. In one of the failed runs I thought I saw all 6 CPUs at 100% on the worker that failed, so I'm wondering whether Spark and HDFS together put more CPU demand on the machine than it can serve within the timeout. I'll try a run of DOCUMENT with collectl gathering stats on the workers.
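Since a disassociation smells like a timeout, I'm also going to try giving Spark more slack so a brief CPU stall doesn't kill the worker. This is a guess on my part, not a confirmed fix; a sketch for conf/spark-defaults.conf:

# Default spark.network.timeout is 120s; raise it to ride out short stalls.
# The heartbeat interval must stay well below the network timeout.
spark.network.timeout            600s
spark.executor.heartbeatInterval 60s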

Found an error in the HDFS log. Maybe this is an HDFS problem now.

2017-02-13 23:05:09,269 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: dcc-spark-worker-4:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.60.60.228:59922 dst: /10.60.60.228:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
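From what I've read (unconfirmed for our case), "Premature EOF from inputStream" during WRITE_BLOCK can show up when the DataNode runs out of transfer threads or a peer times out mid-write, so I'll check these hdfs-site.xml settings on the DataNodes:

<!-- hdfs-site.xml on each DataNode -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <!-- default 4096; raise if DataXceiver threads are being exhausted -->
  <value>8192</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <!-- in milliseconds; 0 disables the write timeout entirely -->
  <value>600000</value>
</property>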
