2015-09-28
1

When I try to copy a file from my local directory to HDFS, I get the following error: the file cannot be written to HDFS because the NameNode is in safe mode.

[[email protected] ~]$ hadoop fs -copyFromLocal hello.txt /user/cloudera/my_data 


copyFromLocal: Cannot create file/user/cloudera/my_data/hello.txt._COPYING_. Name node is in safe mode. 

I then ran:

[[email protected] ~]$ su 
Password: 
[[email protected] cloudera]# hdfs dfsadmin -safemode leave 
safemode: Access denied for user root. Superuser privilege is required 

When I then ran the copy command again to store the file in HDFS, I got the same error.

So I tried again with:

[[email protected] ~]$ su - root 
Password: 
[[email protected] ~]# hdfs dfsadmin -safemode leave 

I got the same error again. I am using the Cloudera distribution of Hadoop.

+0

Thanks Maximillian for correcting the formatting. – user1574688

Answers

0

The NameNode sometimes stays in safe mode after a restart. If you wait a while (how long depends on the number of blocks), the NameNode will leave safe mode automatically.

You can force it out with the hdfs dfsadmin -safemode leave command. Only the HDFS superuser can run this command, so switch to the hdfs user before executing it:

su hdfs
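A minimal sketch of the full sequence, assuming a Cloudera-style installation where the HDFS superuser account is named hdfs and a cluster is running (sudo -u hdfs is an alternative to su hdfs):

```shell
# Check whether the NameNode is still in safe mode ("Safe mode is ON/OFF")
sudo -u hdfs hdfs dfsadmin -safemode get

# Force the NameNode out of safe mode as the hdfs superuser
sudo -u hdfs hdfs dfsadmin -safemode leave

# Verify the mode changed before retrying the copy
sudo -u hdfs hdfs dfsadmin -safemode get
```

This avoids the "Access denied for user root" error in the question, which occurs because root is not the HDFS superuser even though it is the OS superuser.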

0

Try with

hadoop dfsadmin -safemode leave  

That should work...

+0

Please add some explanation rather than just code... – eirikir

2

From the Apache documentation here:

During start up the NameNode loads the file system state from the fsimage and the edits log file. It then waits for DataNodes to report their blocks so that it does not prematurely start replicating the blocks though enough replicas already exist in the cluster. During this time NameNode stays in Safemode. Safemode for the NameNode is essentially a read-only mode for the HDFS cluster, where it does not allow any modifications to file system or blocks. Normally the NameNode leaves Safemode automatically after the DataNodes have reported that most file system blocks are available. If required, HDFS could be placed in Safemode explicitly using bin/hadoop dfsadmin -safemode command.

In most cases, safe mode ends on its own within a reasonable time after HDFS is started. However, you can force HDFS out of safe mode with the following command:

hadoop dfsadmin -safemode leave 

It is strongly recommended to run fsck afterwards to recover from any inconsistent state.
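A hedged sketch of that fsck check, assuming a running cluster and an hdfs superuser account (the -files and -blocks flags are optional verbosity options of the real hdfs fsck command):

```shell
# Check the whole filesystem for missing, corrupt, or under-replicated
# blocks; -files lists each file, -blocks lists its block report
sudo -u hdfs hdfs fsck / -files -blocks
```

If fsck reports the filesystem as HEALTHY, leaving safe mode manually was harmless; if it reports corrupt or missing blocks, those should be repaired or deleted before resuming normal writes.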