2015-05-14 19 views
2


I am using the Cloudera distribution on a cluster, with Hive version 0.13.

Hive job makes no progress after "Number of reduce tasks is set to 0 since there's no reduce operator"

I ran into an issue where a job stops making progress right after writing the log line "Number of reduce tasks is set to 0 since there's no reduce operator".

Below is the relevant log. Can you help me figure out what the problem is? It does not appear to be a code issue, because when I rerun the same job it completes successfully.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/jars/hive-common-0.13.1-cdh5.2.1.jar!/hive-log4j.properties 
Total jobs = 5 
Launching Job 1 out of 5 
Launching Job 2 out of 5 
Number of reduce tasks not specified. Defaulting to jobconf value of: 10 
In order to change the average load for a reducer (in bytes): 
    set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
    set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
Number of reduce tasks not specified. Defaulting to jobconf value of: 10 
    set mapreduce.job.reduces=<number> 
In order to change the average load for a reducer (in bytes): 
    set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
    set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
    set mapreduce.job.reduces=<number> 
Starting Job = job_1431159077692_1399, Tracking URL = xyz.com:8088/proxy/application_1431159077692_1399/ 
Starting Job = job_1431159077692_1398, Tracking URL = hxyz.com:8088/proxy/application_1431159077692_1398/ 
Kill Command = /opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/lib/hadoop/bin/hadoop job -kill job_1431159077692_1399 
Kill Command = /opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/lib/hadoop/bin/hadoop job -kill job_1431159077692_1398 
Hadoop job information for Stage-12: number of mappers: 5; number of reducers: 10 
Hadoop job information for Stage-1: number of mappers: 5; number of reducers: 10 
2015-05-12 19:59:12,298 Stage-1 map = 0%, reduce = 0% 
2015-05-12 19:59:12,298 Stage-12 map = 0%, reduce = 0% 
2015-05-12 19:59:20,832 Stage-1 map = 20%, reduce = 0%, Cumulative CPU 2.5 sec 
2015-05-12 19:59:20,832 Stage-12 map = 80%, reduce = 0%, Cumulative CPU 8.63 sec 
2015-05-12 19:59:21,905 Stage-1 map = 60%, reduce = 0%, Cumulative CPU 7.06 sec 
2015-05-12 19:59:22,968 Stage-1 map = 80%, reduce = 0%, Cumulative CPU 9.34 sec 
2015-05-12 19:59:24,031 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 11.46 sec 
2015-05-12 19:59:26,265 Stage-12 map = 100%, reduce = 0%, Cumulative CPU 10.92 sec 
2015-05-12 19:59:32,665 Stage-12 map = 100%, reduce = 30%, Cumulative CPU 24.51 sec 
2015-05-12 19:59:33,726 Stage-12 map = 100%, reduce = 100%, Cumulative CPU 57.61 sec 
2015-05-12 19:59:35,021 Stage-1 map = 100%, reduce = 30%, Cumulative CPU 20.99 sec 
MapReduce Total cumulative CPU time: 57 seconds 610 msec 
Ended Job = job_1431159077692_1399 
2015-05-12 19:59:36,084 Stage-1 map = 100%, reduce = 80%, Cumulative CPU 39.24 sec 
2015-05-12 19:59:37,146 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 42.37 sec 
2015-05-12 19:59:38,203 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 45.97 sec 
MapReduce Total cumulative CPU time: 45 seconds 970 msec 
Ended Job = job_1431159077692_1398 
2015-05-12 19:59:45,180 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring. 
2015-05-12 19:59:45,193 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 
2015-05-12 19:59:45,196 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring. 
2015-05-12 19:59:45,201 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring. 
2015-05-12 19:59:45,210 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring. 
2015-05-12 19:59:45,258 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 
2015-05-12 19:59:45,792 WARN [main] conf.HiveConf (HiveConf.java:initialize(1491)) - DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore. 
Execution log at: /tmp/srv-hdp-mkt-d/srv-hdp-mkt-d_20150512195858_1b598453-78a8-4867-9402-d972e3c067f2.log 
2015-05-12 07:59:46 Starting to launch local task to process map join; maximum memory = 257949696 
2015-05-12 07:59:47 Dump the side-table into file: file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10007/HashTable-Stage-4/MapJoin-mapfile10--.hashtable 
2015-05-12 07:59:47 Uploaded 1 File to: file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10007/HashTable-Stage-4/MapJoin-mapfile10--.hashtable (475 bytes) 
2015-05-12 07:59:47 Dump the side-table into file: file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10007/HashTable-Stage-4/MapJoin-mapfile01--.hashtable 
2015-05-12 07:59:47 Uploaded 1 File to: file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10007/HashTable-Stage-4/MapJoin-mapfile01--.hashtable (388 bytes) 
2015-05-12 07:59:47 End of local task; Time Taken: 1.209 sec. 
Execution completed successfully 
MapredLocal task succeeded 
Launching Job 3 out of 5 
Number of reduce tasks is set to 0 since there's no reduce operator 
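(For reference: the hints printed in the log above refer to Hive's standard reducer-sizing settings. They can be set explicitly in the session as sketched below; the values are placeholders, not recommendations.)

```sql
-- Placeholder values; tune to your data volume.
set hive.exec.reducers.bytes.per.reducer=268435456;  -- average input per reducer, in bytes
set hive.exec.reducers.max=99;                       -- upper bound on the reducer count
set mapreduce.job.reduces=10;                        -- pin an exact count (overrides the above)
```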
+1

This problem is blocking my progress; any input would be appreciated. – Bector

+0

What is the output of the explain plan? – Venkat

+0

Sorry, I didn't get your question. Do you want me to add some keyword when running this job? – Bector

Answer

1

Did you set the number of tasks in your script? If so, remove that setting and rerun it. I don't think you need to force multiple tasks if the job doesn't require them.
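For example, if the script pins the reducer count with a line like the one below (a hypothetical excerpt, not taken from the asker's script), removing it or resetting it to -1 lets Hive size each stage itself:

```sql
-- Hypothetical hard-coded setting to look for and remove:
-- set mapred.reduce.tasks=10;
-- Hive's default of -1 means "determine the count automatically":
set mapred.reduce.tasks=-1;
```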

+0

Helen, I don't think so, since setting more tasks shouldn't cause a problem. Still trying. – Bector

0

I have run into this problem too. To resolve it, I checked whether all of my HDFS services were up. When I ran 'jps', I found that my ResourceManager was not running, so I went ahead and started it with start-yarn.sh.

After doing the above, the job took a long time the first time. Subsequent runs were faster.
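As a rough sketch of that check (assuming a standard Hadoop install with `jps` and `start-yarn.sh` on the PATH):

```shell
# Check whether the YARN ResourceManager is running before rerunning the Hive job.
if jps | grep -q ResourceManager; then
  echo "ResourceManager is running"
else
  echo "ResourceManager not found; starting YARN"
  start-yarn.sh
fi
```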

2

Specify the MR job queue:

hive> set mapred.job.queue.name=long_running; 
hive> SELECT * FROM table_name LIMIT 10; 

This worked for me.
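On Hadoop 2 (which CDH 5 ships), the same queue setting is also available under the newer property name; an equivalent sketch, reusing the queue name from the answer above:

```sql
hive> set mapreduce.job.queuename=long_running;
hive> SELECT * FROM table_name LIMIT 10;
```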