Because your DataFrame is not sorted on its index, all of the subsetting has to be done with slow vector scans, and fast algorithms such as binary search cannot be applied.
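A quick way to see the difference (a minimal sketch; the toy index values are made up for illustration) is to look at what Index.get_loc hands back: on an unsorted, non-unique index it has to build a boolean mask as long as the whole index, while on a sorted index binary search yields a cheap slice. The sort_subset function below exploits the same fact: once the index is sorted, every key occupies one contiguous block of rows.

import pandas as pd

idx = pd.Index([3, 1, 3, 2, 1])        # non-unique and non-monotonic
print(type(idx.get_loc(3)))            # boolean ndarray: the whole index is scanned
sorted_idx = idx.sort_values()         # [1, 1, 2, 3, 3], now monotonic
print(type(sorted_idx.get_loc(3)))     # slice, located by binary search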
import numpy as np

def sort_subset(df):
    # sort the index, then find the positions that separate the groups
    df = df.sort_index()
    split_indices = np.flatnonzero(np.ediff1d(df.index, to_begin=1, to_end=1))
    list_df = []
    for i in range(len(split_indices) - 1):
        start_index = split_indices[i]
        end_index = split_indices[i + 1]
        list_df.append(df.iloc[start_index:end_index])
    return list_df
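To make the split logic concrete, here is a small worked example (the toy index values are made up for illustration): after sorting, np.ediff1d marks the positions where the index value changes, and np.flatnonzero turns those into the boundaries of the positional slices.

import numpy as np
import pandas as pd

small = pd.DataFrame({'x': range(6)}, index=[2, 0, 1, 0, 2, 1]).sort_index()
# sorted index: [0, 0, 1, 1, 2, 2]
print(np.flatnonzero(np.ediff1d(small.index, to_begin=1, to_end=1)))
# [0 2 4 6]  -> the groups are rows 0:2, 2:4 and 4:6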
Some timings. Although groupby always sorts the groups by the grouping key first, the simple sort_subset algorithm above (sort the index, then subset) mimics this behavior, so we can verify it:
import pandas as pd
import numpy as np
nrow = 1000000
df = pd.DataFrame(np.random.randn(nrow), columns=['x'], index=np.random.randint(100, size=nrow))
index = list(set(df.index))
print('no of groups: ', len(index))
%timeit list_df_1 = [df.loc[x] for x in index]
#no of groups: 100
#13.6 s ± 228 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit list_df_2 = [x for i, x in df.groupby(level=0, sort=False)]
#54.8 ms ± 1.36 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# Not as fast because my algorithm is not optimized at all but the same order of magnitude
%timeit list_df_3 = sort_subset(df)
#102 ms ± 3.53 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
list_df_1 = [df.loc[x] for x in index]
list_df_2 = [x for i, x in df.groupby(level=0, sort=False)]
list_df_3 = sort_subset(df)
Compare the results:
all(list_df_3[i].eq(list_df_2[i]).all().iat[0] for i in range(len(list_df_2)))
# True
You see a significant speed-up if you sort the index before subsetting as well:
def sort_subset_with_loc(df):
    df = df.sort_index()
    # uses the same global list `index` of unique keys as above
    list_df_1 = [df.loc[x] for x in index]
    return list_df_1
%timeit sort_subset_with_loc(df)
# 25.4 ms ± 897 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
I believe the how is more interesting than the why. –
It looks like you have a non-unique index that is probably also non-monotonic. In that degenerate case, I believe that on every call to 'loc' pandas has to walk the *entire* index to build a new array (the same length as the index) to use for boolean indexing. OTOH, 'groupby' just scans the index once and keeps track of the integer positions of each label. I'd have to double-check all of this in the source, though. –
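A rough sketch of the difference described in that comment, reusing the df and index objects defined above (GroupBy.indices is the public mapping from group label to integer row positions; the boolean mask is what Index.get_loc returns for a non-unique, non-monotonic index):

# One pass over the index: a dict mapping each label to the integer
# positions of its rows; the groups can then be taken out cheaply.
positions = df.groupby(level=0).indices
list_df_4 = [df.take(pos) for pos in positions.values()]

# Each df.loc[x] on the unsorted frame, by contrast, materializes a
# boolean mask as long as the whole index.
mask = df.index.get_loc(index[0])
print(type(mask), mask.shape)   # boolean ndarray of length len(df)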