
Scala/Spark map returns an Array instead of a single value

I have a Scala class intended to generalize some linear-model functionality. Specifically, a user should be able to create an instance with an array of coefficients and an array of predictors; the class then pulls the data from a DataFrame and produces a prediction for every row of that DataFrame using a simple linear model, as shown below.

I'm stuck on the last line, where I expect to generate a column of predicted values. I've tried a number of approaches (all but one of which are commented out). As it stands, the code won't compile because of a type mismatch:

[error] found : Array[org.apache.spark.sql.Column] 
[error] required: org.apache.spark.sql.Column 
[error]  .withColumn("prediction", colMod(preds.map(p => data(p)))) 
[error]            ^

...which is also what I get with the `for (pred <- preds) yield` version... and with the `foreach` version I get:

[error] found : Unit 
[error] required: org.apache.spark.sql.Column 
[error]  .withColumn("prediction", colMod(preds.foreach(data(_)))) 
[error]             ^

I've been trying to figure this out without success... any suggestions would be much appreciated.

class LinearModel(coefficients: Array[Double], 
        predictors: Array[String], 
        data: DataFrame) { 

    val coefs = coefficients 
    val preds = Array.concat(Array("bias"), predictors) 
    require(coefs.length == preds.length) 

    /** 
     * predict: computes linear model predictions as the dot product of the coefficients and the 
     * values (X[i] in the model matrix) 
     * @param values: the values from a single row of the given variables from model matrix X 
     * @param coefs: array of coefficients to be applied to each of the variables in values 
     *    (the first coef is assumed to be 1 for the bias/intercept term) 
     * @return: the predicted value 
     */ 
    private def predict(values: Array[Double], coefs: Array[Double]): Unit = { 
     (for ((c, v) <- coefs.zip(values)) yield c * v).sum 
    } 

    /** 
     * colMod (udf): passes the values of each relevant predictor to predict() 
     * @param values: an Array of the numerical values of each of the specified predictors for a 
     *    given record 
     */ 
    private val colMod = udf((values: Array[Double]) => predict(values, coefs)) 

    val dfPred = data 
     // create the column with the prediction 
     .withColumn("prediction", colMod(preds.map(p => data(p)))) 
     //.withColumn("prediction", colMod(for (pred <- preds) yield data(pred))) 
     //.withColumn("prediction", colMod(preds.foreach(data(_)))) 
     // prev line should = colMod(data(pred1), data(pred2), ..., data(predn)) 
    } 

Answer


Here is how this can be done correctly:

import org.apache.spark.sql.functions.{lit, col} 
import org.apache.spark.sql.{Column, DataFrame} 

def predict(coefficients: Seq[Double], predictors: Seq[String], df: DataFrame) = { 

    // I assume there is no predictor for bias 
    // but you can easily correct for that 
    val prediction: Column = predictors.zip(coefficients).map { 
      case (p, c) => col(p) * lit(c) 
    }.foldLeft(col("bias"))(_ + _) 

    df.withColumn("prediction", prediction) 
} 

Example usage:

val df = Seq((1.0, -1.0, 3.0, 5.0)).toDF("bias", "x1", "x2", "x3") 

predict(Seq(2.0, 3.0), Seq("x1", "x3"), df) 

which gives the result below (1.0 + 2.0 · (-1.0) + 3.0 · 5.0 = 14.0):

+----+----+---+---+----------+ 
|bias|  x1| x2| x3|prediction| 
+----+----+---+---+----------+ 
| 1.0|-1.0|3.0|5.0|      14.0| 
+----+----+---+---+----------+ 

As for your code, there are a few mistakes in it:

  • Array[_] is not a valid external type for an ArrayType column. The valid external representation is Seq[_], so the argument of the function you pass to udf should be Seq[Double].
  • The function passed to udf cannot return Unit. In your case it should return Double. Combined with the previous point, the valid signature would be (Seq[Double], Seq[Double]) => Double (see the sketch after this list).
  • colMod expects a single argument of type Column:

    import org.apache.spark.sql.functions.array 
    
    colMod(array(preds.map(col): _*)) 
    
  • Your code is not null-safe.
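
Putting those points together, a corrected UDF-based version of the question's approach could look roughly like the following. This is a minimal sketch meant to be dropped into the question's LinearModel class (it reuses the names predict, colMod, preds, coefs and data from the question) and, like the example above, it assumes the DataFrame carries a literal bias column of 1.0:

import org.apache.spark.sql.functions.{array, col, udf} 

// predict now returns the dot product as a Double instead of Unit 
private def predict(values: Seq[Double], coefs: Seq[Double]): Double = 
    coefs.zip(values).map { case (c, v) => c * v }.sum 

// the UDF takes Seq[Double], the valid external type for an ArrayType column 
private val colMod = udf((values: Seq[Double]) => predict(values, coefs)) 

val dfPred = data 
    // wrap the predictor columns into a single array column before passing them to the UDF 
    .withColumn("prediction", colMod(array(preds.map(col): _*))) 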
