2012-08-01 140 views
0

When I try to copy a directory of 3 files into HDFS, I get the following error (Hadoop: error while copying a local directory to HDFS):

 hduser@saket-K53SM:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg /user/hduser/gutenberg 
12/08/01 23:48:46 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hduser/gutenberg/gutenberg/pg20417.txt could only be replicated to 0 nodes, instead of 1 
12/08/01 23:48:46 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null 
12/08/01 23:48:46 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hduser/gutenberg/gutenberg/pg20417.txt" - Aborting... 
copyFromLocal: java.io.IOException: File /user/hduser/gutenberg/gutenberg/pg20417.txt could only be replicated to 0 nodes, instead of 1 
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hduser/gutenberg/gutenberg/pg20417.txt could only be replicated to 0 nodes, instead of 1 

My fsck output:

hduser@saket-K53SM:/usr/local/hadoop$ bin/hadoop fsck -blocks 
FSCK started by hduser from /127.0.0.1 for path / at Wed Aug 01 23:50:49 IST 2012 
Status: HEALTHY 
Total size: 0 B 
Total dirs: 10 
Total files: 0 (Files currently being written: 2) 
Total blocks (validated): 0 
Minimally replicated blocks: 0 
Over-replicated blocks: 0 
Under-replicated blocks: 0 
Mis-replicated blocks:  0 
Default replication factor: 1 
Average block replication: 0.0 
Corrupt blocks:  0 
Missing replicas:  0 
Number of data-nodes:  0 
Number of racks:  0 
FSCK ended at Wed Aug 01 23:50:49 IST 2012 in 3 milliseconds 


The filesystem under path '/' is HEALTHY 

And when I try to format the NameNode, I get the following error:

hduser@saket-K53SM:/usr/local/hadoop$ bin/hadoop namenode -format 
12/08/01 23:53:07 INFO namenode.NameNode: STARTUP_MSG: 
STARTUP_MSG: Starting NameNode 
STARTUP_MSG: host = saket-K53SM/127.0.1.1 
STARTUP_MSG: args = [-format] 
STARTUP_MSG: version = 1.0.3 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:31:25 UTC 2012 

Re-format filesystem in /app/hadoop/tmp/dfs/name ? (Y or N) y 
Format aborted in /app/hadoop/tmp/dfs/name 
12/08/01 23:53:09 INFO namenode.NameNode: SHUTDOWN_MSG: 
SHUTDOWN_MSG: Shutting down NameNode at saket-K53SM/127.0.1.1 

Any help would be appreciated.

+0

Can you look in your NameNode logs and paste any relevant error/warning messages? – 2012-08-01 10:32:50

+0

Also, can you run jps and check whether all the processes are running properly? – 2012-08-01 12:47:11

Answers

5

I believe this is a really silly issue. You typed a lowercase "y" instead of "Y" (it has to be uppercase).
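The behavior can be illustrated with a minimal shell sketch (an illustration of a case-sensitive comparison, not the actual Hadoop source):

```shell
# The format prompt accepts only an exact, case-sensitive "Y".
answer="y"                       # what was typed in the session above
if [ "$answer" = "Y" ]; then
  result="Re-formatting filesystem"
else
  result="Format aborted"        # matches the output in the question
fi
echo "$result"
```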

+0

I don't know why this was downvoted. I did the same thing (I typed 'y' instead of 'Y') and it did not re-format the filesystem. – Calavoow 2012-10-04 09:11:28

+0

Indeed, it only formats on an uppercase 'Y'. moronkreacionz is clearly not an idiot! – JesperSM 2012-10-07 00:05:45

+0

Thanks for the comment, JesperSM. – moronkreacionz 2012-11-23 09:46:15

1

Have you tried:

  • Stopping the NameNode
  • Stopping the DataNode
  • Deleting /app/hadoop/*
  • Formatting the NameNode
  • Starting the NameNode and DataNode again
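A hedged sketch of those steps as shell commands, assuming the common single-node layout where hadoop.tmp.dir is /app/hadoop/tmp and Hadoop lives in /usr/local/hadoop (both suggested by the session above, not stated in this answer). The `run` helper is a dry-run stub of my own; replace its body with `"$@"` to actually execute, and note that deleting /app/hadoop/* erases all HDFS data:

```shell
# Dry-run sketch: prints each step instead of executing it.
# HADOOP_HOME and the hadoop.tmp.dir path are assumptions taken
# from the paths in the question, not confirmed by this answer.
HADOOP_HOME=/usr/local/hadoop
run() { echo "+ $*"; }                           # swap body for "$@" to really run
run "$HADOOP_HOME/bin/stop-all.sh"               # stop NameNode and DataNode
run rm -rf /app/hadoop/tmp/*                     # wipe the HDFS storage directory
run "$HADOOP_HOME/bin/hadoop" namenode -format   # answer with an uppercase "Y"
run "$HADOOP_HOME/bin/start-all.sh"              # start the daemons again
```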
0

After stopping the namenode (e.g. with stop-all.sh), delete the "data" directory that holds the namenode's temporary files. After deleting the data directory, start the hadoop daemons again (e.g. with start-all.sh). The path of the "data" directory is the value of the hadoop.tmp.dir property in conf/core-site.xml.
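For reference, the property looks like this in conf/core-site.xml; the /app/hadoop/tmp value is a sketch inferred from the paths in the question (e.g. /app/hadoop/tmp/dfs/name), not confirmed by this answer:

```xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>Base directory for Hadoop's temporary and HDFS data files.</description>
  </property>
</configuration>
```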

I think this will solve your problem.
