2012-11-21

I am following Tom White's 'Hadoop - The Definitive Guide'. When I try to read data from a Hadoop URL using the Java interface, I get the following error: unable to read data from the hadoop URL

[email protected]:/usr/local/hadoop$ hadoop URLCat hdfs://master/hdfs/data/SampleText.txt 
12/11/21 13:46:32 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 0 time(s). 
12/11/21 13:46:33 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 1 time(s). 
12/11/21 13:46:34 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 2 time(s). 
12/11/21 13:46:35 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 3 time(s). 
12/11/21 13:46:36 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 4 time(s). 
12/11/21 13:46:37 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 5 time(s). 

The contents of the URLCat file are as follows:

import java.net.URL; 
import java.io.InputStream; 
import org.apache.hadoop.io.IOUtils; 
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory; 

public class URLCat {
    static {
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = null;
        try {
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}

The contents of /etc/hosts are as follows:

127.0.0.1 localhost 
127.0.1.1 ubuntu.ubuntu-domain ubuntu 

# The following lines are desirable for IPv6 capable hosts 
::1  ip6-localhost ip6-loopback 
fe00::0 ip6-localnet 
ff00::0 ip6-mcastprefix 
ff02::1 ip6-allnodes 
ff02::2 ip6-allrouters 

# /etc/hosts Master and slaves
192.168.9.55 master 
192.168.9.56 slave1 
192.168.9.57 slave2 
192.168.9.58 slave3 

Please provide more information: which Hadoop daemons are running (check with jps), and what is the value of 'fs.default.name' in core-site.xml? –


The value of 'fs.default.name' is 'hdfs://master:54310'. Adding ':54310' made it work. Could you elaborate on why that fixed it? – Utumbu


Great! See my answer. –

Answer


First, I would check whether the Hadoop daemons are running. A handy tool for this is jps. Make sure that (at least) the namenode and the datanodes are running.

If you still cannot connect, check that the URL is correct. Since you supplied hdfs://master/ (without any port number), Hadoop assumes your namenode is listening on the default port, 8020. That is what you see in the log.
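The missing-port behaviour is easy to see with plain `java.net.URI` (a minimal sketch; the URLs are the ones from the question, the `PortCheck` class name is just illustrative):

```java
import java.net.URI;

public class PortCheck {
    public static void main(String[] args) {
        // No explicit port: getPort() returns -1, so the HDFS client
        // falls back to its built-in default, 8020.
        URI noPort = URI.create("hdfs://master/hdfs/data/SampleText.txt");

        // Explicit port: the client connects to 54310 instead.
        URI withPort = URI.create("hdfs://master:54310/hdfs/data/SampleText.txt");

        System.out.println(noPort.getPort());   // prints -1
        System.out.println(withPort.getPort()); // prints 54310
    }
}
```

With the namenode actually listening on 54310, a URL without the port makes the client retry 8020 and fail, which matches the retry messages in your log.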

As a quick check, look up fs.default.name in core-site.xml to see whether the filesystem URI defines a custom port (in this case, 54310).
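For reference, a core-site.xml entry with a custom namenode port, as reported in the comments above, would look roughly like this:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```

With that setting, either spell out the port in the URL (hdfs://master:54310/...) or make sure the client reads the same core-site.xml from its classpath.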