2016-04-28

I want to put a 70 GB file into HDFS, so I used the 'put' command to do it. However, I got the exception below. I tried the same command with small files and it worked. Does anyone know what the problem is? Thanks!

HDFS: putting a local file into HDFS fails with UnresolvedAddressException

WARN [DataStreamer for file /user/qzhao/data/sorted/WGC033800D_sorted.bam._COPYING_] hdfs.DFSClient (DFSOutputStream.java:run(628)) - DataStreamer Exception java.nio.channels.UnresolvedAddressException 
    at sun.nio.ch.Net.checkAddress(Net.java:127) 
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) 
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526) 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328) 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281) 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526) 
put: java.nio.channels.ClosedChannelException 
    at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1538) 
    at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:98) 
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) 
    at java.io.DataOutputStream.write(DataOutputStream.java:107) 
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80) 
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:52) 
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112) 
    at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:395) 
    at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:327) 
    at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:303) 
    at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:243) 
    at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:228) 
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306) 
    at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278) 
    at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:223) 
    at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260) 
    at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244) 
    at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:200) 
    at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:259) 
    at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190) 
    at org.apache.hadoop.fs.shell.Command.run(Command.java:154) 
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:287) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) 
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) 
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) 

Any issue with the NameNode or the slave-node entries? You could check the configuration in hdfs-site.xml and core-site.xml. –
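For context, the setting this comment points at usually looks roughly like the snippet below; the hostname and port are placeholders, not values from the original post, and the client that runs the put must be able to resolve and reach that address:

    <!-- core-site.xml: fs.defaultFS must use a hostname resolvable from the client -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://namenode-host:8020</value>
    </property>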


Is it a DNS or firewall problem? Check the addresses of all the nodes. – magooup
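One quick way to act on this comment, run from the machine issuing the put (the hostname datanode1 is a placeholder, and 50010 is only the conventional Hadoop 2.x DataNode data-transfer port; substitute your own nodes and configured port):

    # Does the client resolve the DataNode's hostname?
    getent hosts datanode1

    # Can the client reach the DataNode's data-transfer port through the firewall?
    nc -vz datanode1 50010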

Answers


I ran into a similar problem (small files worked, large files did not). The issue was that my client could connect to the master but not to the slaves.

In my case, I fixed it simply by adding the slaves' host entries to /etc/hosts.
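For illustration only, the entries would look roughly like this; the IP addresses and hostnames below are placeholders, not taken from the answer, and the same entries should exist on the client and on every node:

    # /etc/hosts
    192.168.1.10  master
    192.168.1.11  slave1
    192.168.1.12  slave2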


Since the same command works for small files, there may be a space problem on HDFS. You can check the output of the hdfs dfsadmin -report command to see whether it is full.

I am assuming you are using a command similar to hdfs dfs -put /path/to/local.file /user/USERNAME/path/to/dir-or-file. If not, please share your command.
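Putting the two checks together, a minimal sketch might look like this (the paths are just the placeholders already used above):

    # Check free capacity across the cluster before retrying the 70 GB upload
    hdfs dfsadmin -report | grep -i 'DFS Remaining'

    # Then retry the copy
    hdfs dfs -put /path/to/local.file /user/USERNAME/path/to/dir-or-file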