2017-06-30

How do I resolve this error? org.apache.spark.sql.AnalysisException when calling saveAsTable

The code below works in Zeppelin, but fails when compiled into an assembly jar and submitted with spark-submit.

The error is:

org.apache.spark.sql.AnalysisException: Specifying database name or other qualifiers are not allowed for temporary tables. If the table name has dots (.) in it, please quote the table name with backticks (`).

Code:

    import org.apache.spark._
    import org.apache.spark.rdd.NewHadoopRDD
    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.hive.HiveContext
    import java.text.SimpleDateFormat
    import java.util.Calendar

    case class Benchmark(date: String, time: String, start_end: String,
        server: String, timestamp: Long, interface: String,
        cid: String, raw: String)

    object job {

      def main(args: Array[String]) {

        // Yesterday's date, formatted two ways
        val sdf = new java.text.SimpleDateFormat("yyyyMMdd")
        val sdf1 = new java.text.SimpleDateFormat("yyyy-MM-dd")
        val calendar = Calendar.getInstance()
        calendar.set(Calendar.DAY_OF_YEAR,
            calendar.get(Calendar.DAY_OF_YEAR) - 1)
        val date = sdf.format(calendar.getTime())
        val dt = sdf1.format(calendar.getTime())

        val conf = new SparkConf().setAppName("Interface_HtoH_Job")
        val sc = new SparkContext(conf)
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._
        val hiveContext = new HiveContext(sc)

        val benchmarkText = sc.textFile(s"hdfs:/rawlogs/prod/log/${date}/*.gz")

        val pattern = "([0-9-]{10}) ([0-9:]{8}),[0-9]{1,3} Benchmark..* - (Start|End)<ID=([0-9a-zA-Z_]+)-([0-9]+)><([0-9a-zA-Z.,:[email protected]() =_-]*)><cid=TaskId_([0-9A-Z#_a-z]+),.*><[,0-9:a-zA-Z ]+>".r

        benchmarkText.filter { ln => ln.startsWith("2017-") }
          .filter { l => l.endsWith(">") }
          .filter { k => k.contains("<cid=TaskId") }
          .map { line =>
            try {
              val pattern(date, time, startEnd, server, ts, interface, cid) = line
              Benchmark(date, time, startEnd, server, ts.toLong, interface, cid, line)
            } catch {
              case e: Exception =>
                Benchmark(dt, "00:00:00", "bad", e.toString, 0L, "bad", "bad", line)
            }
          }.toDF()
          .write
          .mode("overwrite")
          .saveAsTable("prod_ol_bm.interface_benchmark_tmp") // error here
      }
    }

Run with spark-submit on:

HDP : 2.5.3.0-37 
Spark : 1.6.2.2.5.3.0-37 built for Hadoop 2.7.3.2.5.3.0-37 

It looks like you've forgotten to actually ask a question. Please take more time to compose your question, then review it to make sure you've included a clear problem statement. – zzzzBov

Answer


Change the line

val sqlContext = new SQLContext(sc) 

to

val sqlContext = new HiveContext(sc) 

The shell and Zeppelin both create a HiveContext named sqlContext, which is a bit confusing. You need a HiveContext to connect to Hive.
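A minimal sketch of the corrected setup, using the Spark 1.6 APIs from the question (the sample DataFrame and column names here are illustrative, not from the original job). The key point is that `toDF()` and the subsequent `write.saveAsTable` must go through a HiveContext, so a database-qualified name like `prod_ol_bm.interface_benchmark_tmp` resolves against the Hive metastore instead of being treated as a temporary table:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HiveWriteSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Interface_HtoH_Job")
    val sc = new SparkContext(conf)

    // A HiveContext (not a plain SQLContext) reads hive-site.xml and talks
    // to the Hive metastore, so "db.table" names are allowed in saveAsTable.
    val sqlContext = new HiveContext(sc)
    import sqlContext.implicits._ // toDF() now builds DataFrames on the HiveContext

    // Illustrative data; in the original job this would be the parsed Benchmark RDD.
    val df = sc.parallelize(Seq(("a", 1L), ("b", 2L))).toDF("cid", "timestamp")

    df.write
      .mode("overwrite")
      .saveAsTable("prod_ol_bm.interface_benchmark_tmp") // no AnalysisException now
  }
}
```

Zeppelin and spark-shell pre-create their `sqlContext` as a HiveContext, which is why the same code works there but not in a standalone jar that constructs a bare SQLContext.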


Thanks, I had figured that out –
