PySpark: Spark ML MultilayerPerceptron fails, but other classifiers work fine

Hi, I am using Spark ML to train a model. The training dataset has 130 columns and 10 million rows. The problem is that whenever I run the MultilayerPerceptron, it throws the following error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 43 in stage 1882.0 failed 4 times, most recent failure: Lost task 43.3 in stage 1882.0 (TID 180174, 10.233.252.145, executor 6): java.lang.ArrayIndexOutOfBoundsException
Interestingly, this does not happen when I use other classifiers such as Logistic Regression and Random Forest.
My code:
# Building the model
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import MultilayerPerceptronClassifier

inputneurons = len(features_columns)

# Assembling the feature vectors
assembler = VectorAssembler(inputCols=features_columns, outputCol="features")

# Multilayer Perceptron: input layer, one hidden layer of 300 neurons, 2 output classes
mlp = MultilayerPerceptronClassifier(labelCol=label, featuresCol="features",
                                     layers=[inputneurons, 300, 2])

# Pipelining the assembling and modeling steps
pipeline = Pipeline(stages=[assembler, mlp])
model = pipeline.fit(training_df)
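Since the output layer above is fixed at 2 neurons, one quick sanity check (a hypothetical diagnostic snippet, not part of the original code) is to count the distinct values in the label column: Spark's MLP can throw ArrayIndexOutOfBoundsException when a label value falls outside the range implied by the last layer, i.e. anything other than 0.0 and 1.0 here.

# Hypothetical diagnostic: with 2 output neurons, labels must be exactly
# 0.0 and 1.0; any other value makes MLP index past the output array.
distinct_labels = (training_df
                   .select(label)
                   .distinct()
                   .collect())
print(sorted(row[0] for row in distinct_labels))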
What could be the reason behind such an issue with MLP in Spark?