
Unable to start Hive CLI on Hadoop (MapR)

I am trying to use the Hive CLI, but it fails to start with the AccessControl error below. Strangely enough, I can query Hive data from Hue without any AccessControl issue; only the Hive CLI fails. I am on a MapR cluster.

Any help is much appreciated.

[<user_name>@<edge_node> ~]$ hive 
SLF4J: Class path contains multiple SLF4J bindings. 
SLF4J: Found binding in [jar:file:/opt/mapr/hive/hive-2.1/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] 
Logging initialized using configuration in file:/opt/mapr/hive/hive-2.1/conf/hive-log4j2.properties Async: true 
2017-09-23 23:52:08,988 WARN [main] DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/mapr/spark/spark-2.1.0/jars/datanucleus-api-jdo-4.2.4.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/mapr/hive/hive-2.1/lib/datanucleus-api-jdo-4.2.1.jar." 
2017-09-23 23:52:08,993 WARN [main] DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/mapr/spark/spark-2.1.0/jars/datanucleus-core-4.1.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/mapr/hive/hive-2.1/lib/datanucleus-core-4.1.6.jar." 
2017-09-23 23:52:09,004 WARN [main] DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/mapr/spark/spark-2.1.0/jars/datanucleus-rdbms-4.1.19.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/mapr/hive/hive-2.1/lib/datanucleus-rdbms-4.1.7.jar." 
2017-09-23 23:52:09,038 INFO [main] DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored 
2017-09-23 23:52:09,039 INFO [main] DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored 
2017-09-23 23:52:14,2251 ERROR JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:2172 Thread: 20235 mkdirs failed for /user/<user_name>, error 13 
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: User <user_name>(user id 50005586) has been denied access to create <user_name> 
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:617) 
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531) 
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) 
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:646) 
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:606) 
at org.apache.hadoop.util.RunJar.run(RunJar.java:221) 
at org.apache.hadoop.util.RunJar.main(RunJar.java:136) 
Caused by: org.apache.hadoop.security.AccessControlException: User <user_name>(user id 50005586) has been denied access to create <user_name> 
at com.mapr.fs.MapRFileSystem.makeDir(MapRFileSystem.java:1256) 
at com.mapr.fs.MapRFileSystem.mkdirs(MapRFileSystem.java:1276) 
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1913) 
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getDefaultDestDir(DagUtils.java:823) 
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getHiveJarDirectory(DagUtils.java:917) 
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createJarLocalResource(TezSessionState.java:616) 
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:256) 
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.beginOpen(TezSessionState.java:220) 
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:614) 
... 10 more 

Answer


The error is telling you that you do not have permission to create a directory in the file system. That directory is most likely /user/<user_name>, and it needs to be created by the HDFS / MapR FS superuser.
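As a sketch of what the administrator would run (the group and permission mode below are assumptions, not values from the question; <user_name> and <group> are placeholders), a MapR FS / HDFS superuser could create and hand over the home directory like this:

# run as the MapR FS / HDFS superuser; <user_name> and <group> are placeholders 
hadoop fs -mkdir -p /user/<user_name> 
hadoop fs -chown <user_name>:<group> /user/<user_name> 
hadoop fs -chmod 750 /user/<user_name> 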

"I am able to query the Hive data from Hue without the AccessControl issue."

Hue communicates with HiveServer2 over Thrift.

The Hive CLI bypasses HiveServer2 and is deprecated.

You should use Beeline instead.

beeline -n $(whoami) -u jdbc:hive2://hiveserver:10000/default 

If you are on a Kerberized cluster, you will need some additional options.
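For example (the realm and hostnames below are placeholders, not values from this cluster), Beeline on a Kerberized cluster typically needs a valid Kerberos ticket plus the HiveServer2 service principal appended to the JDBC URL:

# obtain a Kerberos ticket first; EXAMPLE.COM is a placeholder realm 
kinit <user_name>@EXAMPLE.COM 
# pass the HiveServer2 Kerberos principal in the connection string 
beeline -u "jdbc:hive2://hiveserver:10000/default;principal=hive/_HOST@EXAMPLE.COM" 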