Creating an external table from S3 in Spark Beeline. On each node of the 4-node cluster, I made the following changes to /etc/dse/spark/hive-site.xml:
<property>
<name>fs.s3.awsAccessKeyId</name>
<value>****</value>
</property>
<property>
<name>fs.s3.awsSecretAccessKey</name>
<value>****</value>
</property>
<property>
<name>fs.s3n.awsAccessKeyId</name>
<value>****</value>
</property>
<property>
<name>fs.s3n.awsSecretAccessKey</name>
<value>****</value>
</property>
<property>
<name>fs.s3a.awsAccessKeyId</name>
<value>****</value>
</property>
<property>
<name>fs.s3a.awsSecretAccessKey</name>
<value>****</value>
</property>
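Worth noting (a sketch, not verified against this DSE build): the s3a connector does not read `fs.s3a.awsAccessKeyId` / `fs.s3a.awsSecretAccessKey` at all; it expects its own property names, which may explain a 403 even with valid keys. The equivalent s3a entries would look like this (values masked as above):

```xml
<!-- Assumption: Hadoop's s3a connector ignores the awsAccessKeyId-style names
     used by s3/s3n and instead reads these properties: -->
<property>
  <name>fs.s3a.access.key</name>
  <value>****</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>****</value>
</property>
```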
Set the environment variables below on the nodes where the Spark Thrift Server and the Spark Beeline client run:

export AWS_SECRET_ACCESS_KEY=****
export AWS_ACCESS_KEY_ID=*****

Started the Spark Thrift Server as below:
dse -u cassandra -p ***** spark-sql-thriftserver start --conf spark.cores.max=2 --conf spark.executor.memory=2G --conf
spark.driver.maxResultSize=1G --conf spark.kryoserializer.buffer.max=512M --conf spark.sql.thriftServer.incrementalCollect=true
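As an alternative sketch (I have not verified this on DSE specifically), Spark forwards any `spark.hadoop.*` option into the Hadoop Configuration, which would let the credentials be supplied at start-up instead of editing hive-site.xml on every node:

```shell
# Hypothetical alternative: spark.hadoop.* entries are copied into the
# Hadoop Configuration seen by the Thrift Server (values masked).
dse -u cassandra -p ***** spark-sql-thriftserver start \
  --conf spark.hadoop.fs.s3a.access.key=**** \
  --conf spark.hadoop.fs.s3a.secret.key=**** \
  --conf spark.cores.max=2 --conf spark.executor.memory=2G
```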
Creating the table from Spark Beeline with an S3 bucket as the source:
dse -u cassandra -p ***** spark-beeline --total-executor-cores 2 --executor-memory 2G
The log file is at /home/ubuntu/.spark-beeline.log
Beeline version 1.2.1.2_dse_spark by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000 cassandra
Connecting to jdbc:hive2://localhost:10000
Enter password for jdbc:hive2://localhost:10000: ****************
Connected to: Spark SQL (version 1.6.3)
Driver: Hive JDBC (version 1.2.1.2_dse_spark)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000> CREATE EXTERNAL TABLE test_table (name string,phone string) PARTITIONED BY(day date)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION 's3a://hive-getsimpl/test';
I get the error below:
Error: org.apache.spark.sql.execution.QueryExecutionException: FAILED:
Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.MetaException (message:com.amazonaws.services.s3.model.AmazonS3Exception:
Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 29991E2338CC6B49, AWS Error Code: null,
AWS Error Message: Forbidden, S3 Extended Request ID: kidxZNQI73PBsluGoLQlB4+VEdIx0t82Y/J/q69NA18k8MnSILEyo5riCuj3QcEiGiFRqB4rAbc=) (state=,code=0)
Note: the AWS keys are valid and have already been used with s3a from other Python scripts.
Steve, I am able to create the table now, but I get the error below when selecting from it: '0: jdbc:hive2://localhost:10000> select * from test_table; Error: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table test_table. Bucket 172.31.26.109 does not exist (state=,code=0)' –
Sorry, that is beyond my knowledge –