Following Unbalanced factor of KMeans?, I am trying to compute the unbalanced factor, but I fail in Spark. Every element of the RDD `r2_10` is a pair, where the key is the cluster and the value is a tuple of points; all of these are IDs. Below I show what happens:
In [1]: r2_10.collect()
Out[1]:
[(0, ('438728517', '28138008')),
 (13824, ('4647699097', '6553505321')),
 (9216, ('2575712582', '1776542427')),
 (1, ('8133836578', '4073591194')),
 (9217, ('3112663913', '59443972', '8715330944', '56063461')),
 (4609, ('6812455719',)),
 (13825, ('5245073744', '3361024394')),
 (4610, ('324470279',)),
 (2, ('2412402108',)),
 (3, ('4766885931', '3800674818', '4673186647', '350804823', '73118846'))]
In [2]: pdd = r2_10.map(lambda x: (x[0], 1)).reduceByKey(lambda a, b: a + b)
In [3]: pdd.collect()
Out[3]:
[(13824, 1),
 (9216, 1),
 (0, 1),
 (13825, 1),
 (1, 1),
 (4609, 1),
 (9217, 1),
 (2, 1),
 (4610, 1),
 (3, 1)]
In [4]: n = pdd.count()
In [5]: n
Out[5]: 10
In [6]: total = pdd.map(lambda x: x[1]).sum()
In [7]: total
Out[7]: 10
and `total` should hold the total number of points. However, it is 10... it should be 22: the value tuples contain 2 + 2 + 2 + 2 + 4 + 1 + 2 + 1 + 1 + 5 = 22 points in total.
What am I missing here?
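To make the expected numbers concrete, here is a minimal standalone sketch (assuming a local SparkContext; the data is just a subset of `r2_10` above, and the names `subset`/`counts` are illustrative) of the counting I expected, where the per-cluster count comes from the length of each value tuple rather than 1 per record:

from pyspark import SparkContext

sc = SparkContext("local", "unbalanced-factor")  # assumption: a local test context

# Same shape as r2_10: (cluster_id, tuple_of_point_ids)
subset = sc.parallelize([
    (0, ('438728517', '28138008')),
    (9217, ('3112663913', '59443972', '8715330944', '56063461')),
    (3, ('4766885931', '3800674818', '4673186647', '350804823', '73118846')),
])

# Count points per cluster via the length of each value tuple,
# so the sum is a point count rather than a record count.
counts = subset.map(lambda kv: (kv[0], len(kv[1])))
print(counts.collect())                    # [(0, 2), (9217, 4), (3, 5)]
print(counts.map(lambda kv: kv[1]).sum())  # 11 for this subset; 22 on the full RDD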
By the way, there are some useful methods you could use, like [keys](https://spark.apache.org/docs/latest/api/python/pyspark.html?highlight=rdd#pyspark.RDD.keys) or [mapValues](https://spark.apache.org/docs/latest/api/python/pyspark.html?highlight=rdd#pyspark.RDD.mapValues). –
I wonder why you mentioned `keys()`, Alberto, I can't see how it helps here... – gsamaras
Because you can count the number of `keys`. For example `keys = rdd.keys().count()` –
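For completeness, a quick sketch of the two suggestions from the comments, run against the same `r2_10` (`keys` and `mapValues` are existing RDD methods; the variable names here are illustrative):

num_records = r2_10.keys().count()  # 10: one record per cluster, not the number of points
sizes = r2_10.mapValues(len)        # (cluster_id, number_of_points) pairs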