I'm just getting started with Spark and trying to run a logistic regression. I keep getting this error:
Caused by: java.lang.IllegalArgumentException: requirement failed:
Dimensions mismatch when adding new sample. Expecting 21 but got 17.
My number of features is 21, but I don't know what the 17 here means. What should I do? My code is here:
from pyspark.mllib.regression import LabeledPoint
from numpy import array

def isfloat(string):
    try:
        float(string)
        return True
    except ValueError:
        return False

def parse_interaction(line):
    line_split = line.split(",")
    # leave_out = [1,2,3]
    clean_line_split = line_split[3:24]
    retention = 1.0
    if line_split[0] == '0.0':
        retention = 0.0
    return LabeledPoint(retention, array([map(float,i) for i in clean_line_split if isfloat(i)]))
training_data = raw_data.map(parse_interaction)

from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from time import time

t0 = time()
logit_model = LogisticRegressionWithLBFGS.train(training_data)
tt = time() - t0
print "Classifier trained in {} seconds".format(round(tt,3))
Since you filter out values while creating the 'array', its length can be anything between 0 and the expected size. It would make more sense to drop the malformed entries entirely. – zero323
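Following zero323's suggestion, one way to fix the mismatch is to reject an entire row when any of its fields fails to parse, instead of silently skipping individual fields (which shrinks the feature vector, e.g. from 21 to 17). A minimal sketch, reusing `isfloat` and the column slice from the question; the Spark wiring (`LabeledPoint`, `raw_data`) is shown in comments since it depends on your session:

```python
from numpy import array

def isfloat(string):
    try:
        float(string)
        return True
    except ValueError:
        return False

def parse_interaction(line):
    # Assumes 21 feature columns at positions 3..23, as in the question.
    line_split = line.split(",")
    clean_line_split = line_split[3:24]
    retention = 0.0 if line_split[0] == '0.0' else 1.0
    # Reject the whole row if any field is malformed, so every surviving
    # row yields a vector of exactly 21 features.
    if not all(isfloat(i) for i in clean_line_split):
        return None
    return (retention, array([float(i) for i in clean_line_split]))

# With Spark, wrap the result in a LabeledPoint and drop the rejected rows:
# training_data = (raw_data.map(parse_interaction)
#                          .filter(lambda p: p is not None)
#                          .map(lambda p: LabeledPoint(p[0], p[1])))
```

Note also that the original `array([map(float, i) for i in clean_line_split ...])` maps `float` over the characters of each field string; `float(i)` per field, as above, is what produces one number per column.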