Error exporting data from Google Cloud Bigtable

While going through the Google docs, I'm getting the stack trace below on the final export command (executed from the master instance with the appropriate env variables set):

${HADOOP_HOME}/bin/hadoop jar ${HADOOP_BIGTABLE_JAR} export-table -libjars ${HADOOP_BIGTABLE_JAR} <table-name> <gs://bucket>
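
(For clarity, with hypothetical values substituted for the placeholders the invocation looks like the following; the table name and bucket path here are made up, and the destination is a folder inside the bucket rather than the bucket root:)

${HADOOP_HOME}/bin/hadoop jar ${HADOOP_BIGTABLE_JAR} export-table -libjars ${HADOOP_BIGTABLE_JAR} my-table gs://my-export-bucket/my-table-export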

SLF4J: Class path contains multiple SLF4J bindings. 
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-install/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-install/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 
2016-02-08 23:39:39,068 INFO [main] mapreduce.Export: versions=1, starttime=0, endtime=9223372036854775807, keepDeletedCells=false 
2016-02-08 23:39:39,213 INFO [main] gcs.GoogleHadoopFileSystemBase: GHFS version: 1.4.4-hadoop2 
java.lang.IllegalAccessError: tried to access field sun.security.ssl.Handshaker.localSupportedSignAlgs from class sun.security.ssl.ClientHandshaker 
    at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:278) 
    at sun.security.ssl.Handshaker.processLoop(Handshaker.java:913) 
    at sun.security.ssl.Handshaker.process_record(Handshaker.java:849) 
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1035) 
    at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1344) 
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1371) 
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1355) 
    at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559) 
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185) 
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153) 
    at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93) 
    at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972) 
    at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419) 
    at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352) 
    at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469) 
    at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getBucket(GoogleCloudStorageImpl.java:1599) 
    at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getItemInfo(GoogleCloudStorageImpl.java:1554) 
    at com.google.cloud.hadoop.gcsio.CacheSupplementedGoogleCloudStorage.getItemInfo(CacheSupplementedGoogleCloudStorage.java:547) 
    at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1042) 
    at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.exists(GoogleCloudStorageFileSystem.java:383) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configureBuckets(GoogleHadoopFileSystemBase.java:1650) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.configureBuckets(GoogleHadoopFileSystem.java:71) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1598) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:783) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:746) 
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591) 
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89) 
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625) 
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607) 
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368) 
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167) 
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352) 
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) 
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104) 
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241) 
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:509) 
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:207) 
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:168) 
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:291) 
    at org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:92) 
    at org.apache.hadoop.hbase.mapreduce.IdentityTableMapper.initJob(IdentityTableMapper.java:51) 
    at org.apache.hadoop.hbase.mapreduce.Export.createSubmittableJob(Export.java:75) 
    at org.apache.hadoop.hbase.mapreduce.Export.main(Export.java:187) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72) 
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145) 
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:153) 
    at com.google.cloud.bigtable.mapreduce.Driver.main(Driver.java:35) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212) 

Here is how my environment variables are set, in case it helps:

export HBASE_HOME=/home/hadoop/hbase-install 
export HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` 
export HADOOP_HOME=/home/hadoop/hadoop-install 

export HADOOP_CLIENT_OPTS="-Xbootclasspath/p:${HBASE_HOME}/lib/bigtable/alpn-boot-7.1.3.v20150130.jar" 
export HADOOP_BIGTABLE_JAR=${HBASE_HOME}/lib/bigtable/bigtable-hbase-mapreduce-0.2.2-shaded.jar 
export HADOOP_HBASE_JAR=${HBASE_HOME}/lib/hbase-server-1.1.2.jar 

Also, when I try to run the hbase shell and then list tables, it just hangs and never fetches the list of tables. This is what happens:

~$ hbase shell 
SLF4J: Class path contains multiple SLF4J bindings. 
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-install/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-install/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] 
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 
2016-02-09 00:02:01,334 INFO [main] grpc.BigtableSession: Opening connection for projectId mystical-height-89421, zoneId us-central1-b, clusterId twitter-data, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com. 
2016-02-09 00:02:01,358 INFO [BigtableSession-startup-0] grpc.BigtableSession: gRPC is using the JDK provider (alpn-boot jar) 
2016-02-09 00:02:01,648 INFO [bigtable-connection-shared-executor-pool1-t2] io.RefreshingOAuth2CredentialsInterceptor: Refreshing the OAuth token 
HBase Shell; enter 'help<RETURN>' for list of supported commands. 
Type "exit<RETURN>" to leave the HBase Shell 
Version 1.1.2, rcc2b70cf03e3378800661ec5cab11eb43fafe0fc, Wed Aug 26 20:11:27 PDT 2015 

hbase(main):001:0> list 
TABLE 

What I've already tried:

  • Double-checked that ALPN and the env variables are set appropriately
  • Double-checked hbase-site.xml and hbase-env.sh to make sure nothing looks wrong.

I also tried connecting to my cluster from another gcloud instance (as I was previously able to do following these directions), but I can't seem to get that to work now either... (it also hangs)

[email protected]:hbase-1.1.2$ bin/hbase shell 
2016-02-09 00:07:03,506 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
2016-02-09 00:07:03,913 INFO [main] grpc.BigtableSession: Opening connection for projectId <project>, zoneId us-central1-b, clusterId <cluster>, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com. 
2016-02-09 00:07:04,039 INFO [BigtableSession-startup-0] grpc.BigtableSession: gRPC is using the JDK provider (alpn-boot jar) 
2016-02-09 00:07:05,138 INFO [Credentials-Refresh-0] io.RefreshingOAuth2CredentialsInterceptor: Refreshing the OAuth token 
HBase Shell; enter 'help<RETURN>' for list of supported commands. 
Type "exit<RETURN>" to leave the HBase Shell 
Version 1.1.2, rcc2b70cf03e3378800661ec5cab11eb43fafe0fc, Wed Aug 26 20:11:27 PDT 2015 

hbase(main):001:0> list 
TABLE 
Feb 09, 2016 12:07:08 AM com.google.bigtable.repackaged.io.grpc.internal.TransportSet$1 run 
INFO: Created transport co[email protected]7b480442(bigtabletableadmin.googleapis.com/64.233.183.219:443) for bigtabletableadmin.googleapis.com/64.233.183.219:443 

Any ideas what I'm doing wrong? It looks like an access issue; how do I fix it?

Thanks!

We will be updating the page to use Dataproc. In the meantime, you can try adding the netty-tcnative jar, io.netty netty-tcnative 1.1.33.Fork7, to your classpath. –

Thanks for the reply @LesVogel-GoogleDevRel, but I'm still unable to access my cluster from the hbase shell. I downloaded the jar you specified and set it as part of my CLASSPATH (export CLASSPATH=/path/to/jar/netty-tcnative-1.1.33.Fork7.jar) and re-ran the shell. It still seems to hang. Am I doing something wrong? Also, and more importantly, any pointers on how to fix my first issue (exporting data from the table, as opposed to accessing it from the hbase shell)? Thanks in advance! – Kamran
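
(For reference, a sketch of the conventional way to put that jar on the HBase shell's classpath, assuming the same download path as in the comment above: the bin/hbase launcher reads HBASE_CLASSPATH, whereas a bare CLASSPATH export is not necessarily picked up.)

# Hypothetical path, matching the comment above
export HBASE_CLASSPATH=/path/to/jar/netty-tcnative-1.1.33.Fork7.jar
hbase shell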

A colleague mentioned that this appears to be not a Bigtable issue but a GCS one: com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getBucket(GoogleCloudStorageImpl.java:1599). We've asked someone on that team if they can advise. –

Answer

  1. You can start a Dataproc cluster w/ Bigtable enabled following these instructions

  2. SSH to the master via ./cluster.sh ssh

  3. hbase shell to verify all is in order.

  4. hadoop jar ${HADOOP_BIGTABLE_JAR} export-table -libjars ${HADOOP_BIGTABLE_JAR} <table-name> gs://<bucket>/some-folder

  5. gsutil ls gs://<bucket>/some-folder/** to see whether _SUCCESS exists. If so, the remaining files are your data (an illustrative listing appears after this list).

  6. exit from the cluster master

  7. ./cluster.sh delete to get rid of the cluster if you no longer need it.
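
As mentioned in step 5, a successful export would be expected to produce a listing along these lines (the part-file names are illustrative; the HBase Export MapReduce job writes one SequenceFile per map task plus a _SUCCESS marker):

$ gsutil ls gs://<bucket>/some-folder/**
gs://<bucket>/some-folder/_SUCCESS
gs://<bucket>/some-folder/part-m-00000
gs://<bucket>/some-folder/part-m-00001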

You were running into a problem with the weekly Java runtime update, which has since been corrected.
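
For context, the IllegalAccessError from sun.security.ssl in the trace above is the typical signature of an alpn-boot jar that no longer matches the running JRE: -Xbootclasspath/p: prepends the jar's patched SSL classes, and once a JRE update changes its own sun.security.ssl internals, the mixed classes disagree about field access. A minimal sanity check, assuming the alpn-boot setup from the question:

# alpn-boot releases are pinned to specific JRE versions, so first confirm the exact runtime.
java -version

# Then confirm which alpn-boot jar is being prepended; after a JRE update it may need to be
# replaced with the release matching the new runtime (see the Jetty ALPN version table).
echo ${HADOOP_CLIENT_OPTS}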
