I already have a long-running, resource-heavy MapReduce job on my cluster. When I submit another job, it gets stuck at the point shown below, which suggests it is waiting for the currently running job to complete:
hive> select distinct(circle) from vf_final_table_orc_format1;
Query ID = hduser_20181022153503_335ffd89-1528-49be-b091-21213d702a03
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 10
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1539782606189_0033, Tracking URL = http://secondary:8088/proxy/application_1539782606189_0033/
Kill Command = /home/hduser/hadoop/bin/hadoop job -kill job_1539782606189_0033
I am currently running a MapReduce job over 166 GB of data. My setup consists of 7 nodes: 5 are DataNodes with 32 GB RAM and 8.7 TB HDD each, while the NameNode and the Secondary NameNode each have 32 GB RAM and 1.1 TB HDD.

What settings do I need to tweak in order to execute jobs in parallel? I am currently using Hadoop version 2.5.2.
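For reference, I understand that concurrency is governed by the YARN scheduler. These are the `capacity-scheduler.xml` properties I believe to be relevant; the values shown are the stock Hadoop defaults for illustration, not my actual configuration:

```xml
<!-- capacity-scheduler.xml: properties that limit how many applications run
     concurrently. Values below are the stock defaults, shown for illustration. -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <!-- Fraction of cluster resources that may be used by ApplicationMasters.
       A low value can leave later jobs stuck in the ACCEPTED state. -->
  <value>0.1</value>
</property>
<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <!-- Hard cap on running + pending applications across the cluster. -->
  <value>10000</value>
</property>
```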
EDIT: Right now my cluster is consuming only 8-10 GB of RAM out of the 32 GB available per node. Other Hive queries and MR jobs are stuck, waiting for a single job to finish. How do I increase memory consumption so that more jobs can execute in parallel? Here is the current output of the ps command:
[hduser@secondary ~]$ ps -ef | grep -i runjar | grep -v grep
hduser 110398 1 0 Nov11 ? 00:07:15 /opt/jdk1.8.0_77//bin/java -Dproc_jar -Xmx1000m
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log -Dyarn.home.dir=
-Dyarn.id.str= -Dhadoop.root.logger=INFO,console -Dyarn.root.logger=INFO,console -Dyarn.policy.file=hadoop-policy.xml
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log
-Dyarn.home.dir=/home/hduser/hadoop -Dhadoop.home.dir=/home/hduser/hadoop
-Dhadoop.root.logger=INFO,console
-Dyarn.root.logger=INFO,console
-classpath /home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/share/hadoop/common/lib/*:/home/hduser/hadoop/share/hadoop/common/*:/home/hduser/hadoop/share/hadoop/hdfs:/home/hduser/hadoop/share/hadoop/hdfs/lib/*:/home/hduser/hadoop/share/hadoop/hdfs/*:/home/hduser/hadoop/share/hadoop/yarn/lib/*:/home/hduser/hadoop/share/hadoop/yarn/*:/home/hduser/hadoop/share/hadoop/mapreduce/lib/*:/home/hduser/hadoop/share/hadoop/mapreduce/*:/home/hduser/hadoop/contrib/capacity-scheduler/*.jar:/home/hduser/hadoop/share/hadoop/yarn/*:/home/hduser/hadoop/share/hadoop/yarn/lib/*
org.apache.hadoop.util.RunJar abc.jar def.mydriver2 /raw_data /mr_output/
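To reason about why only 8-10 GB per node is in use: roughly, the number of concurrent containers is the per-node memory allotted to YARN divided by the per-container memory. A quick back-of-envelope sketch (the property names are standard YARN/MapReduce settings, but the values are the stock defaults used as placeholders, not read from my actual config):

```python
# Rough estimate of concurrent container capacity.
# Values are the stock Hadoop defaults, used here as illustrative placeholders.
nodemanager_memory_mb = 8192   # yarn.nodemanager.resource.memory-mb (default 8192)
map_container_mb = 1024        # mapreduce.map.memory.mb (default 1024)
datanodes = 5                  # DataNodes in my cluster

containers_per_node = nodemanager_memory_mb // map_container_mb
cluster_containers = containers_per_node * datanodes
print(containers_per_node)   # 8 containers per node
print(cluster_containers)    # 40 containers cluster-wide
```

If the NodeManagers are still at the 8192 MB default, YARN would only ever schedule ~8 GB of containers per node regardless of the 32 GB physically installed, which would match the usage I am seeing.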
Copyright Notice: Content author: 「Rishabh Dixit」. Reproduced under the CC BY-SA 4.0 license with a link to the original source and this disclaimer.
Link to original article: https://stackoverflow.com/questions/52926999/hadoop-how-to-run-another-mapreduce-job-while-one-is-running