
I have data distributed over one day. I cluster it and then calculate, for each hour, the ratio (weight) of each cluster (not all hours are present), using np.bincount, which leaves me filling a DataFrame from differently sized tuples. The DataFrame time_df looks like this:

cluster        Date 
0  1 2014-02-28 14:24:59.535000+02:00 
1  1 2014-02-28 14:26:35.019000+02:00 
2  1 2014-02-28 14:27:37.213000+02:00 
3  2 2014-02-28 14:28:35.246000+02:00 
4  2 2014-02-28 14:29:37.283000+02:00 

I group by hour and calculate the weight of each cluster:

# group the rows by hour of day
group_by_hour = time_df.groupby(time_df.Date.dt.hour) 
# positions 0..(max cluster id + 1) seen in each hour
cluster_ids_hour = group_by_hour.cluster.\ 
    apply(lambda arr: list(range(0,(arr+1).max()+1))) 
# np.bincount over the shifted (cluster + 1) ids, divided by the hour's row count
cluster_ratio_hour = group_by_hour.cluster.\ 
    apply(lambda arr: 1.0*np.bincount(arr+1)/len(arr)) 

This gives, for each hour, arrays of different sizes holding the cluster positions and their weights. I then try to build a DataFrame from them:

pd.DataFrame(temp, columns=['hour', 'clusters', 'weights'])

But I get the following:

hour clusters           weights 
0 14  [0]           [1.0] 
1 15  [0, 1]     [0.488888888889, 0.511111111111] 
2 16 [0, 1, 2] [0.302325581395, 0.162790697674, 0.53488372093] 
3 17 [0, 1, 2]         [0.0, 0.0, 1.0] 
4 18 [0, 1, 2]         [0.0, 0.0, 1.0] 
5 19 [0, 1, 2]         [0.0, 0.0, 1.0] 
6 20 [0, 1, 2]         [0.0, 0.0, 1.0] 
7 21 [0, 1, 2]         [0.0, 0.0, 1.0] 
8 22 [0, 1, 2]         [0.0, 0.0, 1.0] 
9 23 [0, 1, 2]         [0.0, 0.0, 1.0] 

How can I get the clusters as the index and the hours as the columns? For example:

0 1 2 3 4 ... 
0 0.2 0.6 0.4 0.0 0.6 
1 0.0 0.4 0.1 0.0 0.4 
2 0.8 0.0 0.5 1.0 0.0 

Answers


I think you can use:

import pandas as pd 
import numpy as np 

time_df = pd.DataFrame({'cluster': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 1, 6: 1, 7: 2}, 
         'Date': {0: pd.Timestamp('2014-02-28 12:24:59.535000'), 
           1: pd.Timestamp('2014-02-28 12:26:35.019000'), 
           2: pd.Timestamp('2014-02-28 12:27:37.213000'), 
           3: pd.Timestamp('2014-02-28 12:28:35.246000'), 
           4: pd.Timestamp('2014-02-28 12:29:37.283000'), 
           5: pd.Timestamp('2014-02-28 13:27:37.213000'), 
           6: pd.Timestamp('2014-02-28 14:28:35.246000'), 
           7: pd.Timestamp('2014-02-28 14:29:37.283000')}}) 

print (time_df) 
        Date cluster 
0 2014-02-28 12:24:59.535  1 
1 2014-02-28 12:26:35.019  1 
2 2014-02-28 12:27:37.213  1 
3 2014-02-28 12:28:35.246  2 
4 2014-02-28 12:29:37.283  2 
5 2014-02-28 13:27:37.213  1 
6 2014-02-28 14:28:35.246  1 
7 2014-02-28 14:29:37.283  2 
group_by_hour = time_df.groupby(time_df.Date.dt.hour) 
cluster_ids_hour = group_by_hour.cluster.\ 
    apply(lambda arr: list(range(0,(arr+1).max()+1))) 
cluster_ratio_hour = group_by_hour.cluster.\ 
    apply(lambda arr: 1.0*np.bincount(arr+1)/len(arr)) 

print (cluster_ids_hour) 
Date 
12 [0, 1, 2, 3] 
13  [0, 1, 2] 
14 [0, 1, 2, 3] 
Name: cluster, dtype: object 

print (cluster_ratio_hour) 
Date 
12 [0.0, 0.0, 0.6, 0.4] 
13   [0.0, 0.0, 1.0] 
14 [0.0, 0.0, 0.5, 0.5] 
Name: cluster, dtype: object 

#create DataFrames from both columns and concatenate them 
df1 = pd.DataFrame(cluster_ids_hour.values.tolist(), index=cluster_ids_hour.index) 
#print (df1) 

df2 = pd.DataFrame(cluster_ratio_hour.values.tolist(), index=cluster_ratio_hour.index) 
#print (df2) 
df = pd.concat([df1, df2], axis=1, keys=('clusters','weights')) 
print (df) 
    clusters   weights    
      0 1 2 3  0 1 2 3 
Date           
12   0 1 2 3.0  0.0 0.0 0.6 0.4 
13   0 1 2 NaN  0.0 0.0 1.0 NaN 
14   0 1 2 3.0  0.0 0.0 0.5 0.5 
#reshape, cast clusters column to integer  
df = df.stack().reset_index(level=1, drop=True).reset_index() 
df['clusters'] = df['clusters'].astype(int) 
#pivoting, fill NaN by 0 
df = df.pivot(index='clusters', columns='Date', values='weights').fillna(0) 

df.index.name = None 
df.columns.name = None 
print (df) 
    12 13 14 
0 0.0 0.0 0.0 
1 0.0 0.0 0.0 
2 0.6 1.0 0.5 
3 0.4 0.0 0.5 
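
For completeness, a shorter sketch under the same assumptions (it reuses the cluster_ratio_hour computed above; the weights name below is just illustrative): turning the per-hour weight lists into a frame and transposing it yields the clusters-by-hours table directly.

# hours become rows, padded with NaN where a cluster position is missing
weights = pd.DataFrame(cluster_ratio_hour.values.tolist(), 
       index=cluster_ratio_hour.index) 
# transpose so cluster positions are the index and hours the columns
print (weights.T.fillna(0)) 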

Thanks, this works fine! –


I was wondering: this approach gives the cluster weights for a single day. I will run it over several days and then combine them all. On some days I only have part of the hours (e.g. 12, 13, 14), while other days will include all of them. How can I concatenate DataFrames with different numbers of columns? –


Sorry, I'm not sure I understand you. Do you need the [concat](http://pandas.pydata.org/pandas-docs/stable/merging.html#set-logic-on-the-other-axes) function? – jezrael
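
As a rough sketch of what that comment points to (the day1 and day2 frames below are hypothetical, each standing in for one day's clusters-by-hours output): pd.concat aligns the differing hour columns and leaves NaN where a day is missing an hour, which can then be filled with 0.

import pandas as pd 

# hypothetical per-day results: clusters as index, hours as columns
day1 = pd.DataFrame({12: [0.0, 0.6, 0.4], 13: [0.0, 1.0, 0.0]}, index=[0, 1, 2]) 
day2 = pd.DataFrame({12: [0.1, 0.5, 0.4], 14: [0.2, 0.3, 0.5]}, index=[0, 1, 2]) 

# concat stacks the days and aligns columns by hour; missing hours become NaN
combined = pd.concat([day1, day2], keys=['day1', 'day2']).fillna(0) 
print (combined) 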

A second answer counts cluster occurrences per hour and normalizes each hour's column:
import pandas as pd 
import numpy as np 

time_df = pd.DataFrame({'cluster': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 1, 6: 1, 7: 2}, 
         'Date': {0: pd.Timestamp('2014-02-28 12:24:59.535000'), 
           1: pd.Timestamp('2014-02-28 12:26:35.019000'), 
           2: pd.Timestamp('2014-02-28 12:27:37.213000'), 
           3: pd.Timestamp('2014-02-28 12:28:35.246000'), 
           4: pd.Timestamp('2014-02-28 12:29:37.283000'), 
           5: pd.Timestamp('2014-02-28 13:27:37.213000'), 
           6: pd.Timestamp('2014-02-28 14:28:35.246000'), 
           7: pd.Timestamp('2014-02-28 14:29:37.283000')}}) 

print (time_df) 
# count occurrences of each (hour, cluster) pair
time_df_group = time_df.groupby([time_df.Date.dt.hour, time_df.cluster]).size() 
cluster_hour_df = time_df_group.unstack(level=0) 
# normalize each hour's column so the weights per hour sum to 1
cluster_hour_df = cluster_hour_df.apply(lambda col: col/col.sum(), axis=0) 
print (cluster_hour_df) 


Date 12 13 14 
cluster   
1 0.6 1.0 0.5 
2 0.4 NaN 0.5
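
For reference, a sketch of an alternative that reaches the same table in one call, assuming pandas 0.18.1 or later (where crosstab gained the normalize argument). Unlike the unstack version, missing cluster/hour combinations come out as 0 rather than NaN.

# count cluster occurrences per hour and divide each hour's column by its sum
print (pd.crosstab(time_df.cluster, time_df.Date.dt.hour, normalize='columns')) 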