
I want to use my own custom SerDe with HiveQL (it works fine with plain Hive). I followed these instructions: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started (Hive on Spark > YARN mode > Spark configuration), but I don't know what value to give spark.master.

But I am very confused by this part of the guide: "Start Spark cluster (both standalone and Spark on YARN are supported)." My understanding is that starting a Spark cluster is only necessary if Spark runs in standalone mode; since I intend to run Spark on YARN, do I still need to start a Spark cluster? What I did was simply start Hadoop YARN, and because I really didn't know how to set the property spark.master, I just left it unset.
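For reference, what I ran in my Hive session was roughly the following (a sketch; hive.execution.engine=spark is the switch described in the Getting Started guide, and spark.master is deliberately left unset, which is exactly my question):

    -- Hive CLI session (sketch)
    set hive.execution.engine=spark;   -- run Hive queries on Spark instead of MapReduce
    -- spark.master is not set anywhere, which may be the problem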

Possibly because of this, I run into the following error message when executing a Hive query that uses my own SerDe:

2015-10-05 20:42:07,184 INFO [main]: status.SparkJobMonitor (RemoteSparkJobMonitor.java:startMonitor(67)) - Job hasn't been submitted after 61s. Aborting it.

2015-10-05 20:42:07,184 ERROR [main]: status.SparkJobMonitor (SessionState.java:printError(960)) - Status: SENT 
2015-10-05 20:42:07,184 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=SparkRunJob start=1444066866174 end=1444066927184 duration=61010 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor> 
2015-10-05 20:42:07,300 ERROR [main]: ql.Driver (SessionState.java:printError(960)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask 
2015-10-05 20:42:07,300 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=Driver.execute start=1444066848958 end=1444066927300 duration=78342 from=org.apache.hadoop.hive.ql.Driver> 

...

And at the end there is also the following:

2015-10-05 20:42:16,658 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/10/05 20:42:16 INFO yarn.Client: Application report for application_1444066615793_0001 (state: ACCEPTED) 
2015-10-05 20:42:17,337 WARN [main]: client.SparkClientImpl (SparkClientImpl.java:stop(154)) - Timed out shutting down remote driver, interrupting... 
2015-10-05 20:42:17,337 WARN [Driver]: client.SparkClientImpl (SparkClientImpl.java:run(430)) - Waiting thread interrupted, killing child process. 
2015-10-05 20:42:17,345 WARN [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(572)) - Error in redirector thread. 
java.io.IOException: Stream closed 
    at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:162) 
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:272) 
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) 
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) 
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) 
    at java.io.InputStreamReader.read(InputStreamReader.java:184) 
    at java.io.BufferedReader.fill(BufferedReader.java:154) 
    at java.io.BufferedReader.readLine(BufferedReader.java:317) 
    at java.io.BufferedReader.readLine(BufferedReader.java:382) 
    at org.apache.hive.spark.client.SparkClientImpl$Redirector.run(SparkClientImpl.java:568) 
    at java.lang.Thread.run(Thread.java:745) 

2015-10-05 20:42:17,371 INFO [Thread-15]: session.SparkSessionManagerImpl (SparkSessionManagerImpl.java:shutdown(146)) - Closing the session manager.

I sincerely hope someone can offer some advice. Many thanks in advance.

Answers


Please try set spark.master=yarn-client;
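If you want this setting to persist across sessions, Spark properties can also go in hive-site.xml; a minimal sketch (assuming the file lives in your Hive configuration directory):

    <!-- hive-site.xml (sketch) -->
    <property>
      <name>spark.master</name>
      <value>yarn-client</value>
    </property>

With yarn-client, the Hive client starts the Spark driver locally and YARN only hosts the executors, so no separate Spark cluster needs to be started.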


As stated in the official Spark on YARN documentation, the master will basically be one of (see the sketch after this list):

  • yarn-cluster: if you want to submit the job so that the driver is launched on the cluster, or
  • yarn-client: if you want to instantiate the SparkContext locally
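For a plain Spark job, the difference looks like this (a sketch; the class and jar names are placeholders):

    # yarn-cluster: the driver runs inside a YARN ApplicationMaster on the cluster
    spark-submit --master yarn-cluster --class com.example.MyApp my-app.jar

    # yarn-client: the driver (and its SparkContext) runs locally in the client process
    spark-submit --master yarn-client --class com.example.MyApp my-app.jar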

Don't forget to have the configuration files (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, hive-site.xml, etc.) available through HADOOP_CONF_DIR and YARN_CONF_DIR. You can set these variables in <spark_home>/conf/spark-env.sh.
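A minimal sketch of those exports (/etc/hadoop/conf is only an example path; point them at wherever your *-site.xml files actually live):

    # <spark_home>/conf/spark-env.sh (sketch)
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export YARN_CONF_DIR=/etc/hadoop/conf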