Don't use sum to concatenate lists. There is a long discussion on the python-ideas mailing list about why that's a bad idea (I'll add a link later).
itertools.chain
is a good solution, or, if you'd rather go functional:
>>> my_list = {
... "foo": ["a", "b", "c"],
... "bar": ["d", "e", "f"]
... }
>>> import operator as op
>>> reduce(op.concat, my_list.values())
['a', 'b', 'c', 'd', 'e', 'f']
>>>
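For reference, here is a minimal Python 3 sketch of the same example (my addition): in Python 3 reduce lives in functools rather than being a builtin, and both approaches produce the same flattened list.

```python
from functools import reduce  # reduce is no longer a builtin in Python 3
import itertools
import operator as op

my_list = {
    "foo": ["a", "b", "c"],
    "bar": ["d", "e", "f"],
}

# Functional style: repeatedly concatenate the value lists.
flat_reduce = reduce(op.concat, my_list.values())

# itertools style: lazily chain the value lists, then materialize.
flat_chain = list(itertools.chain.from_iterable(my_list.values()))

print(flat_reduce)                # ['a', 'b', 'c', 'd', 'e', 'f']
print(flat_chain == flat_reduce)  # True
```

(Dicts preserve insertion order in Python 3.7+, so the output order matches the example above.)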
Here is a performance comparison between chain
and reduce
for small and large dictionaries.
>>> import itertools
>>> import random
>>> dict_of_lists = {k: range(random.randint(0, k)) for k in range(0, random.randint(0, 9))}
>>> %timeit list(itertools.chain.from_iterable(dict_of_lists.values()))
The slowest run took 12.72 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 995 ns per loop
>>> %timeit reduce(op.concat, dict_of_lists.values())
The slowest run took 19.77 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 467 ns per loop
reduce
is roughly twice as fast as itertools
. The same holds for larger structures.
>>> dict_of_lists = {k: range(random.randint(0, k)) for k in range(0, random.randint(0, 9999))}
>>> %timeit list(itertools.chain.from_iterable(dict_of_lists.values()))
The slowest run took 6.47 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 1 µs per loop
>>> %timeit reduce(op.concat, dict_of_lists.values())
The slowest run took 13.68 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 425 ns per loop
Please clarify - you have a dictionary called 'my_list', and clearly describe the output you want as a dictionary. – jonrsharpe
@jonrsharpe He did, in the last line of his code – 2016-07-06 11:28:43
@jonrsharpe The last line of my example contains what the output should be – Paradoxis