
pyspark error while using loadLabeledPoints on an RDD

I read a libsvm file, transform it, and then save it again.

Each data row is saved as sparse data.

I try to save with MLUtils.saveAsLibSVMFile and then read the file back into LabeledPoint objects with MLUtils.loadLibSVMFile, and I get the following error

ValueError: could not convert string to float: [

at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193) at org.apache.spark.api.python.PythonRunner$$anon$1.(PythonRDD.scala:234) at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:336) at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:334) at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1055) at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1029) at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:969) at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1029) at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:760) at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334) at org.apache.spark.rdd.RDD.iterator(RDD.scala:285) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more
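Roughly, the round trip I am attempting looks like this (a minimal sketch; the example data, variable names and paths are illustrative):

from pyspark import SparkContext
from pyspark.mllib.util import MLUtils
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.linalg import SparseVector

sc = SparkContext.getOrCreate()

# RDD of LabeledPoints with sparse features (illustrative data)
points = sc.parallelize([
    LabeledPoint(1.0, SparseVector(10, {0: 1.0, 3: 5.5})),
    LabeledPoint(0.0, SparseVector(10, {1: 2.0, 7: 0.5})),
])

MLUtils.saveAsLibSVMFile(points, "/tmp/points_libsvm")     # write in libsvm format
loaded = MLUtils.loadLibSVMFile(sc, "/tmp/points_libsvm")  # read back as LabeledPoints
loaded.collect()                                           # with my data this raises the error above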

I read on the MLUtils page that if you want to use loadLabeledPoints you need to save the data with RDD.saveAsTextFile, but when I do that I get

17/08/10 16:55:51 WARN TaskSetManager: Lost task 1.0 in stage 1.0 (TID 3, 192.168.1.205, executor 0): org.apache.spark.SparkException: Cannot parse a double from: [ at org.apache.spark.mllib.util.NumericParser$.parseDouble(NumericParser.scala:120) at org.apache.spark.mllib.util.NumericParser$.parseArray(NumericParser.scala:70) at org.apache.spark.mllib.util.NumericParser$.parseTuple(NumericParser.scala:91) at org.apache.spark.mllib.util.NumericParser$.parse(NumericParser.scala:41) at org.apache.spark.mllib.regression.LabeledPoint$.parse(LabeledPoint.scala:62) at org.apache.spark.mllib.util.MLUtils$$anonfun$loadLabeledPoints$1.apply(MLUtils.scala:195) at org.apache.spark.mllib.util.MLUtils$$anonfun$loadLabeledPoints$1.apply(MLUtils.scala:195) at scala.collection.Iterator$$anon$11.next(Iterator.scala:409) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:121) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:112) at scala.collection.Iterator$class.foreach(Iterator.scala:893) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112) at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48) at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.to(SerDeUtil.scala:112) at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.toBuffer(SerDeUtil.scala:112) at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.toArray(SerDeUtil.scala:112) at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936) at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NumberFormatException: For input string: "[" at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043) at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110) at java.lang.Double.parseDouble(Double.java:538) at org.apache.spark.mllib.util.NumericParser$.parseDouble(NumericParser.scala:117) ... 30 more
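Roughly what I tried for this second attempt (a sketch, reusing the points RDD and sc from the snippet above; the path is illustrative):

points.saveAsTextFile("/tmp/points_text")                   # each element is written as str(LabeledPoint)
loaded = MLUtils.loadLabeledPoints(sc, "/tmp/points_text")  # parse the text back into LabeledPoints
loaded.collect()                                            # the action triggers the parse error above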

How can I save an RDD of LabeledPoints in libsvm format and then load it back from disk with pyspark?

Thanks

Answer


The problem is that writing LabeledPoints to a file does not produce the libsvm format, which then makes it hard to read them back.

I solved it by creating the plain LabeledPoints in memory; before writing to file I convert each one to a libsvm-format string and write the strings out as text, after which I can read the file back in libsvm format:

def pointToLibsvmRow(point):
    # Split the flat feature array into (index, value) pairs: this assumes the
    # array stores all feature indices first, followed by all feature values.
    s = point.features.reshape(2, -1, order="C").transpose().astype("str")
    # Build the libsvm line: "<label> <index>:<value> <index>:<value> ..."
    pairs = [str(int(float(point.label)))] + ["%s:%s" % (str(int(float(a))), b) for a, b in s.tolist()]
    return " ".join(pairs)
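Roughly how I then use this helper (a sketch; it assumes points is an RDD of objects whose .features is a flat numpy array laid out the way pointToLibsvmRow expects, indices first and values second, and the paths are illustrative):

lines = points.map(pointToLibsvmRow)           # one libsvm-formatted string per point
lines.saveAsTextFile("/tmp/points_as_libsvm")  # write the strings as plain text

# The saved text is now genuine libsvm, so it can be read back with:
from pyspark.mllib.util import MLUtils
reloaded = MLUtils.loadLibSVMFile(sc, "/tmp/points_as_libsvm")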