Spark groupByKey clarification

I am trying to process some data and write the output in such a way that the result is partitioned by key, and is sorted by another parameter, say in ascending order. For example:
>>> data = sc.parallelize(range(10000))
>>> mapped = data.map(lambda x: (x % 2, x))
>>> grouped = mapped.groupByKey().partitionBy(2).map(lambda x: x[1]).saveAsTextFile("mymr-output")
$ hadoop fs -cat mymr-output/part-00000 |cut -c1-1000
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198, 200, 202, 204, 206, 208, 210, 212, 214, 216, 218, 220, 222, 224, 226, 228, 230, 232, 234, 236, 238, 240, 242, 244, 246, 248, 250, 252, 254, 256, 258, 260, 262, 264, 266, 268, 270, 272, 274, 276, 278, 280, 282, 284, 286, 288, 290, 292, 294, 296, 298, 300, 302, 304, 306, 308, 310, 312, 314, 316, 318, 320, 322, 324, 326, 328, 330, 332, 334, 336, 338, 340, 342, 344, 346, 348, 350, 352, 354, 356, 358, 360, 362, 364, 366, 368, 370, 372, 374, 376, 378, 380, 382, 384, 386, 388, 390, 392, 394, 396, 398, 400, 402, 404, 406, 408, 410, 412, 414, 416, 418, 420,
$ hadoop fs -cat mymr-output/part-00001 |cut -c1-1000
[2049, 2051, 2053, 2055, 2057, 2059, 2061, 2063, 2065, 2067, 2069, 2071, 2073, 2075, 2077, 2079, 2081, 2083, 2085, 2087, 2089, 2091, 2093, 2095, 2097, 2099, 2101, 2103, 2105, 2107, 2109, 2111, 2113, 2115, 2117, 2119, 2121, 2123, 2125, 2127, 2129, 2131, 2133, 2135, 2137, 2139, 2141, 2143, 2145, 2147, 2149, 2151, 2153, 2155, 2157, 2159, 2161, 2163, 2165, 2167, 2169, 2171, 2173, 2175, 2177, 2179, 2181, 2183, 2185, 2187, 2189, 2191, 2193, 2195, 2197, 2199, 2201, 2203, 2205, 2207, 2209, 2211, 2213, 2215, 2217, 2219, 2221, 2223, 2225, 2227, 2229, 2231, 2233, 2235, 2237, 2239, 2241, 2243, 2245, 2247, 2249, 2251, 2253, 2255, 2257, 2259, 2261, 2263, 2265, 2267, 2269, 2271, 2273, 2275, 2277, 2279, 2281, 2283, 2285, 2287, 2289, 2291, 2293, 2295, 2297, 2299, 2301, 2303, 2305, 2307, 2309, 2311, 2313, 2315, 2317, 2319, 2321, 2323, 2325, 2327, 2329, 2331, 2333, 2335, 2337, 2339, 2341, 2343, 2345, 2347, 2349, 2351, 2353, 2355, 2357, 2359, 2361, 2363, 2365, 2367, 2369, 2371, 2373, 2375, 2377, 2379, 238
$
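To double-check the partitioning, glom() can show which keys land in each partition (it returns each partition's contents as a list); with integer keys and the default hash partitioner I would expect key 0 in partition 0 and key 1 in partition 1:
>>> # expected: [[0], [1]]  (key 0 in partition 0, key 1 in partition 1)
>>> mapped.groupByKey().partitionBy(2).map(lambda x: x[0]).glom().collect()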
This is perfect: it satisfies my first criterion, that the results are partitioned by key. But I also want the results to be sorted. I tried sorted(), but it did not work:
>>> grouped = sorted(mapped.groupByKey().partitionBy(2).map(lambda x: x[1]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'PipelinedRDD' object is not iterable
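From the error it seems sorted() needs a local iterable in the driver, while an RDD is distributed, so I assume the sort has to happen inside a transformation rather than on the driver side. This untested sketch is the kind of thing I have in mind (the output path is just a placeholder):
>>> # Sort each group's values inside a transformation, on the workers;
>>> # x[1] is the iterable of values for one key, so sorted() applies to it.
>>> result = mapped.groupByKey().partitionBy(2).map(lambda x: sorted(x[1]))
>>> result.saveAsTextFile("mymr-output-sorted")  # placeholder path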
I do not want to use parallelize again and go recursive. Any help would be appreciated.
PS: I did go through this: Does groupByKey in Spark preserve the original order? but it did not help. Thanks, Jeevan.
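Update: repartitionAndSortWithinPartitions, added in Spark 1.2 (so after this question was first asked), looks like it is built for exactly this: partition by key and sort within each partition in a single shuffle. An untested sketch of how I understand it would be used here, making the whole (key, value) pair the sort key so the values come out ascending (the variable names and output path are mine):
>>> # Secondary sort: promote (key, value) to the key, partition on the
>>> # original key only, and let the shuffle sort by (key, value) ascending.
>>> pairs = mapped.map(lambda kv: (kv, None))  # kv is already (key, value)
>>> parts = pairs.repartitionAndSortWithinPartitions(
...     numPartitions=2, partitionFunc=lambda k: k[0] % 2)
>>> parts.map(lambda kv: kv[0][1]).saveAsTextFile("mymr-output-sorted2")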
This is somewhat similar: http://stackoverflow.com/questions/23995040/write-to-multiple-outputs-by-key-spark-one-spark-job. I would also suggest picking up Scala, so you can get closer to the underlying API and understand better what is going on. – samthebest 2014-09-04 08:06:36