
spark psv file to data frame conversion error

The Spark version I am using is 2.0+. All I want to do is read a pipe (|) delimited values file into a DataFrame and then run SQL-like queries against it. I have also tried a comma-separated file. I am interacting with Spark through spark-shell. I downloaded the spark-csv jar and started spark-shell with the --packages option to pull it into my session; it imported successfully.

import spark.implicits._ 
import org.apache.spark.sql.SQLContext 
import org.apache.spark.sql._ 
val session = SparkSession.builder().appName("test").master("local").getOrCreate()
val df = session.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .option("mode", "DROPMALFORMED")
  .load("testdata.txt")

WARN Hive: Failed to access metastore. This class should not accessed in runtime. 
apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hi 
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1236) 
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174) 
at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166) 
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503) 
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:171) 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source) 
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) 
at java.lang.reflect.Constructor.newInstance(Unknown Source) 
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258) 
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359) 
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263) 
at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39) 
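For reference, the kind of direct read I am after looks roughly like the sketch below. This is a minimal sketch assuming Spark 2.0+, where csv is a built-in data source and the field separator can be set with an option, so the external spark-csv package should not even be needed; the file name and option values are placeholders from my description, not a verified setup.

// Minimal sketch (Spark 2.0+ built-in csv source); file name and options are assumptions.
val df = session.read
  .option("header", "true")          // first line contains column names
  .option("sep", "|")                // pipe-delimited values
  .option("mode", "DROPMALFORMED")   // skip rows that do not match the inferred schema
  .csv("testdata.txt")

df.printSchema()
df.show(5)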

Answer


You can load the psv file directly into an RDD, split it however you need, and then apply a schema to it. Here is an example in Java.

import org.apache.spark.sql.SparkSession; 
import org.apache.spark.sql.types.DataTypes; 
import org.apache.spark.sql.types.StructField; 
import org.apache.spark.sql.types.StructType; 
import org.apache.spark.api.java.JavaRDD; 
import org.apache.spark.sql.Dataset; 
import org.apache.spark.sql.Row; 
import org.apache.spark.sql.RowFactory; 

public class RDDtoDF_Update { 
    public static void main(final String[] args) throws Exception { 

     SparkSession spark = SparkSession 
       .builder() 
       .appName("RDDtoDF_Updated") 
       .master("local[2]") 
       .config("spark.some.config.option", "some-value") 
       .getOrCreate(); 

     StructType schema = DataTypes 
       .createStructType(new StructField[] { 
         DataTypes.createStructField("eid", DataTypes.IntegerType, false), 
         DataTypes.createStructField("eName", DataTypes.StringType, false), 
         DataTypes.createStructField("eAge", DataTypes.IntegerType, true), 
         DataTypes.createStructField("eDept", DataTypes.IntegerType, true), 
         DataTypes.createStructField("eSal", DataTypes.IntegerType, true), 
         DataTypes.createStructField("eGen", DataTypes.StringType,true)}); 


     // EMPData.txt is assumed to be tab-delimited here; for a pipe-separated file,
     // split on "\\|" instead of "\t".
     String filepath = "F:/Hadoop/Data/EMPData.txt";
     JavaRDD<Row> empRDD = spark.read()
       .textFile(filepath)
       .javaRDD()
       .map(line -> line.split("\t"))
       .map(r -> RowFactory.create(Integer.parseInt(r[0]), r[1].trim(), Integer.parseInt(r[2]),
         Integer.parseInt(r[3]), Integer.parseInt(r[4]), r[5].trim()));

     // Apply the schema to the RDD of Rows and run a sample aggregation.
     Dataset<Row> empDF = spark.createDataFrame(empRDD, schema);
     empDF.groupBy("eDept").max("eSal").show();
    }
}

Thanks.


The whole point of loading the psv file directly into a DataFrame is so that I can run SQL against it, as in my query above. I understand that I can load it as an RDD, parse it, and then convert it to a DataFrame, but I want to read it straight into a DataFrame. And why not? No preprocessing or restructuring of the data should be needed if it is pipe-separated. – jane
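A minimal end-to-end sketch of that direct route, assuming Spark 2.0+ and a pipe-delimited testdata.txt with a header row; the view name and the query are illustrative placeholders.

// Sketch only: read the pipe-separated file straight into a DataFrame, then query it with SQL.
val df = spark.read
  .option("header", "true")
  .option("sep", "|")
  .csv("testdata.txt")

df.createOrReplaceTempView("testdata")   // expose the DataFrame to the SQL engine
spark.sql("SELECT * FROM testdata LIMIT 10").show()

In spark-shell the prebuilt `spark` session is used here, so no extra SparkSession needs to be constructed.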