2014-01-22

For two weeks I have been trying to run DistCp between two Kerberos-enabled Hadoop clusters (version hadoop-2.0.0-cdh4.3.0), and DistCp keeps failing with errors.

When I run the command `hadoop distcp hdfs://cluster1:8020/user/test.txt hdfs://cluster2:8020/user` on the destination cluster, it works fine. But when I execute the same command on the source cluster, I get the following error:

Copy failed: java.io.IOException: Failed on local exception: java.io.IOException: Response is null.; Host Details : local host is: "cluster1/10.96.82.149"; destination host is: "cluster2":8020; 
     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1229) 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) 
     at $Proxy9.getDelegationToken(Unknown Source) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
     at java.lang.reflect.Method.invoke(Method.java:597) 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164) 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83) 
     at $Proxy9.getDelegationToken(Unknown Source) 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:783) 
     at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:783) 
     at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:868) 
     at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:509) 
     at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:487) 
     at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:130) 
     at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:111) 
     at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:85) 
     at org.apache.hadoop.tools.DistCp.setup(DistCp.java:1046) 
     at org.apache.hadoop.tools.DistCp.copy(DistCp.java:666) 
     at org.apache.hadoop.tools.DistCp.run(DistCp.java:881) 
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) 
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) 
     at org.apache.hadoop.tools.DistCp.main(DistCp.java:908) 
Caused by: java.io.IOException: Response is null. 
     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:941) 
     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:836) 

When I try `hadoop distcp hftp://cluster1:50070/user/test.txt hdfs://cluster2:8020/user` on either the source or the destination cluster, I get the following error:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): Security enabled but user not authenticated by filter 
     at org.apache.hadoop.ipc.RemoteException.valueOf(RemoteException.java:97) 
     at org.apache.hadoop.hdfs.HftpFileSystem$LsParser.startElement(HftpFileSystem.java:425) 
     at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) 
     at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) 
     at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:377) 
     at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl$NSContentDriver.scanRootElementHook(XMLNSDocumentScannerImpl.java:626) 
     at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3104) 
     at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:922) 
     at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) 
     at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140) 
     at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511) 
     at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808) 
     at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) 
     at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119) 
     at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) 
     at org.apache.hadoop.hdfs.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:464) 
     at org.apache.hadoop.hdfs.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:475) 
     at org.apache.hadoop.hdfs.HftpFileSystem.getFileStatus(HftpFileSystem.java:504) 
     at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1378) 
     at org.apache.hadoop.tools.DistCp.checkSrcPath(DistCp.java:636) 
     at org.apache.hadoop.tools.DistCp.copy(DistCp.java:656) 
     at org.apache.hadoop.tools.DistCp.run(DistCp.java:881) 
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) 
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) 
     at org.apache.hadoop.tools.DistCp.main(DistCp.java:908) 

Please help me resolve this. I want to run the copy from the source cluster.

Answer


Are you using High Availability NameNodes?

I have run into problems using distcp with High Availability. To get around it, I specified the hostname of the active NameNode instead of the cluster's logical nameservice name.
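A sketch of that workaround as command fragments (the nameservice ID `cluster2-ha` and the hostname `nn1.cluster2.example.com` are placeholders for illustration, not names from the original post):

```shell
# May fail when the target is an HA cluster: the logical nameservice
# name is not resolvable as a single host from the remote client.
hadoop distcp hdfs://cluster1:8020/user/test.txt hdfs://cluster2-ha/user

# Workaround: point directly at the currently active NameNode's
# host and RPC port instead of the logical nameservice name.
hadoop distcp hdfs://cluster1:8020/user/test.txt hdfs://nn1.cluster2.example.com:8020/user
```

Note that the active NameNode can change after a failover, so a hard-coded hostname may need updating; `hdfs haadmin -getServiceState <serviceId>` reports which NameNode is currently active.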