
Spark: Loading a Scala ML model into PySpark

I trained an LDA model in Spark with Scala:

val lda = new LDA().setK(k).setMaxIter(iter).setFeaturesCol(colnames).fit(data) 

lda.save(path) 

I checked the saved model; it contains two folders: metadata and data.

However, when I try to load this model into PySpark, I get an error:

model = LDAModel.load(sc, path = path) 


File "/Users/hongbowang/spark-2.2.0-bin-hadoop2.7/python/lib/py4j- 
0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling 
o33.loadLDAModel. 
: org.apache.hadoop.mapred.InvalidInputException: Input path does not 
exist:file:/Users/hongbowang/Personal/Spark%20Program/Spark%20Project/ 
T1/output_K20_topic/lda/metadata 

Does anyone know how I can fix this? Thanks a lot!

Answer


You saved an ml.clustering.LDAModel, but you are trying to read it with mllib.clustering.LDAModel. You should import the correct LDAModel. For a local model:

from pyspark.ml.clustering import LocalLDAModel 

LocalLDAModel.load(path) 

For a distributed model:

from pyspark.ml.clustering import DistributedLDAModel 

DistributedLDAModel.load(path) 
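If you are unsure whether the saved model is local or distributed, the `metadata/part-00000` file written by `save()` is a JSON record whose `class` field names the Scala class that was persisted. A minimal sketch of checking it (the sample JSON string below is illustrative, not taken from the question's model):

```python
import json

def pick_loader(metadata_json: str) -> str:
    """Return the name of the PySpark loader class to use for a saved
    ml LDA model, based on the 'class' field in its metadata JSON."""
    cls = json.loads(metadata_json)["class"]
    if cls.endswith("DistributedLDAModel"):
        return "DistributedLDAModel"
    return "LocalLDAModel"

# Illustrative metadata record, similar in shape to what ml's save() writes:
sample = '{"class":"org.apache.spark.ml.clustering.LocalLDAModel","timestamp":1512200000000}'
print(pick_loader(sample))  # LocalLDAModel
```

With that class name in hand, import the matching loader from `pyspark.ml.clustering` as shown above.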