I am following Tom White's 'Hadoop: The Definitive Guide'. When I try to read data from a Hadoop URL using the Java interface, I get the following error: unable to read data from the hadoop URL:
[email protected]:/usr/local/hadoop$ hadoop URLCat hdfs://master/hdfs/data/SampleText.txt
12/11/21 13:46:32 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 0 time(s).
12/11/21 13:46:33 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 1 time(s).
12/11/21 13:46:34 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 2 time(s).
12/11/21 13:46:35 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 3 time(s).
12/11/21 13:46:36 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 4 time(s).
12/11/21 13:46:37 INFO ipc.Client: Retrying connect to server: master/192.168.9.55:8020. Already tried 5 time(s).
The contents of the URLCat file are as follows:
import java.net.URL;
import java.io.InputStream;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;

public class URLCat {
    static {
        // Register Hadoop's handler so java.net.URL understands hdfs:// URLs
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = null;
        try {
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
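Note that the port the client keeps retrying in the log above (8020) is Hadoop's default NameNode port: when the URL carries no explicit port, the client falls back to a default instead of failing outright. A small sketch with plain java.net.URI (no Hadoop required; the class name PortCheck is just for illustration) shows that the failing URL indeed carries no port:

```java
import java.net.URI;

public class PortCheck {
    public static void main(String[] args) {
        // The URL from the failing command omits the port...
        URI noPort = URI.create("hdfs://master/hdfs/data/SampleText.txt");
        // ...so getPort() reports -1 (unspecified), and the Hadoop client
        // falls back to its default NameNode port (8020 in this version).
        System.out.println(noPort.getPort());   // -1

        // With the port spelled out, the client connects where the
        // NameNode actually listens.
        URI withPort = URI.create("hdfs://master:54310/hdfs/data/SampleText.txt");
        System.out.println(withPort.getPort()); // 54310
    }
}
```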
The contents of the /etc/hosts file are as follows:
127.0.0.1 localhost
127.0.1.1 ubuntu.ubuntu-domain ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# /etc/hosts Master and slaves
192.168.9.55 master
192.168.9.56 slave1
192.168.9.57 slave2
192.168.9.58 slave3
Please provide more information: which Hadoop daemons are running (check with jps), and what is the value of 'fs.default.name' in core-site.xml? –
The value of 'fs.default.name' is 'hdfs://master:54310'. Adding ':54310' to the URL made it work. Can you elaborate on what I did? – Utumbu
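To spell out the mismatch: the NameNode listens on the host and port given by fs.default.name, which here is 54310, while the URL passed to URLCat had no port, so the client retried the default 8020 (exactly what the log shows). A core-site.xml entry consistent with that setup would look like this (a sketch; the value is taken from the comment above):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```

With that configuration, the command must carry the same port, e.g. hadoop URLCat hdfs://master:54310/hdfs/data/SampleText.txt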
Great! See my answer. –