2016-07-13 36 views

When I issue a command such as count or distinct at the Hive command line, launched from the console, I get an Apache Hive exception: NoClassDefFoundError: scala/collection/Iterable. I am not sure where to check the full log, but here is an excerpt:

Exception in thread "main" java.lang.NoClassDefFoundError: scala/collection/Iterable 
     at org.apache.hadoop.hive.ql.optimizer.spark.SetSparkReducerParallelism.process(SetSparkReducerParallelism.java:117) 
     at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90) 
     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105) 
     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89) 
     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:158) 
     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120) 
     at org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.runJoinOptimizations(SparkCompiler.java:178) 
     at org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeOperatorPlan(SparkCompiler.java:116) 
     at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:134) 
     at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10857) 
     at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:239) 
     at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250) 
     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437) 
     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329) 
     at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1158) 
     at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1253) 
     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084) 
     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072) 
     at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) 
     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) 
     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
     at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) 
     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) 
     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:498) 
     at org.apache.hadoop.util.RunJar.run(RunJar.java:221) 
     at org.apache.hadoop.util.RunJar.main(RunJar.java:136) 
Caused by: java.lang.ClassNotFoundException: scala.collection.Iterable 
     at java.net.URLClassLoader.findClass(URLClassLoader.java:381) 
     at java.lang.ClassLoader.loadClass(ClassLoader.java:424) 
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) 
     at java.lang.ClassLoader.loadClass(ClassLoader.java:357) 
     ... 30 more 

Answer

This problem is usually caused by a classpath configuration issue. The following may help:

Step 1: find this code in $HIVE_HOME/bin/hive (backing up the file first is probably a good idea):

CLASSPATH=${CLASSPATH}:${HIVE_LIB}/*.jar 

for f in ${HIVE_LIB}/*.jar; do 
    CLASSPATH=${CLASSPATH}:$f; 
done 
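To see what that loop actually produces, here is a minimal self-contained sketch; the temporary directory and the dummy jar names (`a.jar`, `b.jar`) are placeholders standing in for `${HIVE_LIB}` and its contents, not part of a real Hive install:

```shell
#!/bin/sh
# Sketch of the classpath-building loop above, using a throwaway
# directory instead of ${HIVE_LIB}. Each jar found is appended to
# CLASSPATH, colon-separated.
LIB=$(mktemp -d)
touch "$LIB/a.jar" "$LIB/b.jar"

CLASSPATH=/etc/hive/conf
for f in "$LIB"/*.jar; do
    CLASSPATH=${CLASSPATH}:$f
done

# Prints something like /etc/hive/conf:/tmp/xxx/a.jar:/tmp/xxx/b.jar
echo "$CLASSPATH"
```

The same pattern is what the answer asks you to repeat for the Spark jars in the next step.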

Step 2: below it, add the Spark libs in the same way as the HIVE_LIB loop:

for f in ${SPARK_HOME}/jars/*.jar; do 
    CLASSPATH=${CLASSPATH}:$f; 
done 
After this change, the hive executable script looks like this:

for f in ${HIVE_LIB}/*.jar; do 
    CLASSPATH=${CLASSPATH}:$f; 
done 


for f in ${SPARK_HOME}/jars/*.jar; do 
    CLASSPATH=${CLASSPATH}:$f; 
done 

SPARK_HOME points to your Spark installation location. I use Spark 2.0.0, whose libs are under ${SPARK_HOME}/jars. I found some answers via Google, but they did not really solve my problem, so I modified the hive executable shell script and added the Spark libs myself. That worked for me; I hope it helps you too.
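Before restarting Hive, it can be worth confirming that the jars directory actually contains a Scala runtime jar, since `scala.collection.Iterable` ships in `scala-library-*.jar`. A minimal check, assuming `SPARK_HOME` is set (the `/opt/spark` fallback below is just an illustrative default):

```shell
#!/bin/sh
# Hedged check: does ${SPARK_HOME}/jars contain the scala-library jar
# that provides scala/collection/Iterable? /opt/spark is only an
# assumed fallback path for illustration.
SPARK_HOME=${SPARK_HOME:-/opt/spark}
ls "${SPARK_HOME}"/jars/scala-library-*.jar 2>/dev/null \
  || echo "scala-library jar not found under ${SPARK_HOME}/jars"
```

If the jar is missing there, adding the directory to CLASSPATH will not fix the NoClassDefFoundError, and the Spark install itself needs a look.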


I added these two loops at the end of the hive script. The table gets created fine, but if I run select * from tablename I get the same error.