SPLIT, as you can guess, splits on the pattern string. Since the pattern you provide matches the whole input, there is nothing left to return, hence the empty array.
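You can see the same semantics with plain java.lang.String#split, which drops trailing empty fields; a hypothetical one-liner, not from the question:

// The regex consumes the entire input, so nothing remains after splitting
"10.10.10.10 - -".split("^[0-9.]+ - -$")  // Array() -- empty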
import org.apache.spark.sql.functions.{regexp_extract, udf}
import sqlContext.implicits._  // provides toDF and the $"..." syntax (spark.implicits._ on Spark 2+)

val pattern = """^([^ ]+) ([^ ]+) ([^ ]+) \[([^\]]+)\] "([^"]+)" \d+ - - "([^"]+)".*"""

val df = sc.parallelize(Seq((
  1L, """10.10.10.10 - - [08/Sep/2015:00:00:03 +0000] "GET /index.html HTTP/1.1" 206 - - "Apache-HttpClient" -"""
))).toDF("id", "log")
All you need is regexp_extract:
val exprs = (1 to 6).map(i => regexp_extract($"log", pattern, i).alias(s"_$i"))
df.select(exprs:_*).show
// +-----------+---+---+--------------------+--------------------+-----------------+
// | _1| _2| _3| _4| _5| _6|
// +-----------+---+---+--------------------+--------------------+-----------------+
// |10.10.10.10| -| -|08/Sep/2015:00:00...|GET /index.html H...|Apache-HttpClient|
// +-----------+---+---+--------------------+--------------------+-----------------+
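Note that regexp_extract yields an empty string for rows where the pattern doesn't match, so if your logs may contain malformed lines you can filter those out first. A sketch reusing the pattern and exprs defined above:

df.where($"log".rlike(pattern)).select(exprs: _*).show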
Or, for example, a UDF:
import scala.util.matching.Regex

val extractFromLog = udf({
  val ip = new Regex(pattern)
  (s: String) => s match {
    // Let's ignore some fields for simplicity
    case ip(ip, _, _, ts, request, client) =>
      Some(Array(ip, ts, request, client))
    case _ => None
  }
})
df.select(extractFromLog($"log"))
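Since the UDF returns an Option[Array[String]], individual fields can be pulled back out with getItem; a small sketch (the column aliases are made up here):

df.select(
  extractFromLog($"log").getItem(0).alias("ip"),
  extractFromLog($"log").getItem(3).alias("client")
).show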
I'd suggest creating a Hive table as in http://www.dowdandassociates.com/blog/content/howto-use-hive-with-apache-logs/ and then copying the parsed data into a Parquet table.
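The Parquet step itself is a one-liner from the DataFrame above; a minimal sketch, assuming a made-up output path:

// Persist the extracted columns in Parquet format
df.select(exprs: _*).write.parquet("/tmp/parsed_logs.parquet")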