Error running my first Spark Python program

I have been using Python in Eclipse (on Hadoop 2.7) and tried to run the "word count" example. This is my code:

# Imports
# Take care of unused imports (and unused variables):
# comment them all out, otherwise you will run into errors during execution.
# Note that neither the "@PydevCodeAnalysisIgnore" nor the "@UnusedImport"
# directive can fix the problem.
# from pyspark.mllib.clustering import KMeans
from pyspark import SparkConf, SparkContext
import os

# Configure the Spark environment 
sparkConf = SparkConf().setAppName("WordCounts").setMaster("local") 
sc = SparkContext(conf = sparkConf) 

# The WordCounts Spark program 
textFile = sc.textFile(os.environ["SPARK_HOME"] + "/README.md") 
wordCounts = textFile.flatMap(lambda line: line.split()).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
for wc in wordCounts.collect():
    print wc

and then I got the following error:

17/08/07 12:28:13 WARN NativeCodeLoader: Unable to load native-hadoop  library for your platform... using builtin-java classes where applicable 
17/08/07 12:28:16 WARN Utils: Service 'SparkUI' could not bind on port  4040. Attempting port 4041. 
Traceback (most recent call last):
  File "/home/hduser/eclipse-workspace/PythonSpark/src/WordCounts.py", line 12, in <module>
    sc = SparkContext(conf = sparkConf)
  File "/usr/local/spark/python/pyspark/context.py", line 118, in __init__
    conf, jsc, profiler_cls)
  File "/usr/local/spark/python/pyspark/context.py", line 186, in _do_init
    self._accumulatorServer = accumulators._start_update_server()
  File "/usr/local/spark/python/pyspark/accumulators.py", line 259, in _start_update_server
    server = AccumulatorServer(("localhost", 0), _UpdateRequestHandler)
  File "/usr/lib/python2.7/SocketServer.py", line 417, in __init__
    self.server_bind()
  File "/usr/lib/python2.7/SocketServer.py", line 431, in server_bind
    self.socket.bind(self.server_address)
  File "/usr/lib/python2.7/socket.py", line 228, in meth
    return getattr(self._sock,name)(*args)
socket.gaierror: [Errno -3] Temporary failure in name resolution

Any help? I can run any Scala project with spark-shell, and I can also run any (non-Spark) Python program in Eclipse without errors, so I think my problem has something to do with PySpark?

Can you check whether your PySpark shell runs? I think there is a problem with the pyspark path. –

How can I do that? – EngAhmed

When I run jps on my EdgeNode I get: 2706 ResourceManager, 9717 Jps, 2534 SecondaryNameNode, 3143 org.eclipse.equinox.launcher_1.4.0.v20161219-1356.jar, 2987 SparkSubmit – EngAhmed
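For reference, one way to do the check suggested in the first comment is to start the PySpark shell and inspect the sc object it creates automatically. A minimal sketch, assuming the /usr/local/spark install path that appears in the traceback:

# In a terminal (path inferred from the traceback above; adjust to your install):
#   /usr/local/spark/bin/pyspark
# Inside the shell, sc should already exist:
sc.version   # prints the Spark version if the shell started correctly
sc.master    # shows the master URL, e.g. 'local[*]'

If the shell itself fails with the same gaierror, the problem is with resolving "localhost" in your environment rather than with Eclipse.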

Answers

You can try this; just creating the SparkContext is enough, and it works.

sc = SparkContext()
# The WordCounts Spark program
textFile = sc.textFile("/home/your/path/Test.txt")  # or: right-click the file in Eclipse, copy its path, and paste it here
wordCounts = textFile.flatMap(lambda line: line.split()).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
for wc in wordCounts.collect():
    print wc
When I type just sc = SparkContext() it says: NameError: name 'SparkContext' is not defined – EngAhmed

I think my problem is that it cannot reach the Spark context – EngAhmed

@kingtouch999 Did you import SparkContext? P.S.: from pyspark import SparkConf, SparkContext –
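Putting this answer together with the import from the last comment, a minimal self-contained version of the script would look like the sketch below (the file path is just a placeholder):

from pyspark import SparkContext

# With no arguments, SparkContext() falls back to the default master and app name.
sc = SparkContext()

# Placeholder path; point this at any local text file.
textFile = sc.textFile("/home/your/path/Test.txt")
wordCounts = textFile.flatMap(lambda line: line.split()) \
                     .map(lambda word: (word, 1)) \
                     .reduceByKey(lambda a, b: a + b)
for wc in wordCounts.collect():
    print wc  # Python 2 print, matching the rest of the thread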

Try this way...

After you start your Spark shell, the command prompt shows sc as the SparkContext.

If it is not available, you can create it the following way:

>> sc = new org.apache.spark.SparkContext()

Now you can use sc.
Is this of any use? –

Unfortunately no, it gave me: sc = new org.apache.spark.SparkContext() File "", line 1 sc = new org.apache.spark.SparkContext() ^ SyntaxError: invalid syntax – EngAhmed
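For what it is worth, new is Scala syntax, which is why Python rejects the line; the PySpark equivalent (assuming the import shown in the earlier comment) is simply:

from pyspark import SparkContext
sc = SparkContext()  # Python has no 'new' keyword; calling the constructor is enough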

This is enough to run your program, because sc is available in your shell.

First try this in your shell, line by line:

textFile = sc.textFile("/home/your/path/Test.txt")  # or: right-click the file in Eclipse, copy its path, and paste it here
wordCounts = textFile.flatMap(lambda line: line.split()).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
for wc in wordCounts.collect():
    print wc
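Note that this only works as-is in the interactive pyspark shell, which predefines sc at startup; in a standalone script run from Eclipse you have to import and create the SparkContext yourself, as in the other answers.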
As per my understanding, the code below should work if Spark is installed correctly.

from pyspark import SparkConf, SparkContext

# Configure and start Spark in local mode
conf = SparkConf().setMaster("local").setAppName("WordCount")
sc = SparkContext(conf=conf)

# Read the input file and count each word
input = sc.textFile("file:///sparkcourse/PATH_NAME")
words = input.flatMap(lambda x: x.split())
wordCounts = words.countByValue()

for word, count in wordCounts.items():
    # Drop non-ASCII characters before printing
    cleanWord = word.encode('ascii', 'ignore')
    if cleanWord:
        print(cleanWord.decode() + " " + str(count))
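One difference from the code in the question: countByValue() is an action that returns the counts to the driver as a plain dictionary, whereas reduceByKey() produces another distributed RDD that still has to be collect()ed. For a small local word count, either approach works.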