
PostgreSQL complex sum query

I have the following tables:

video (id, name) 

keyframe (id, name, video_id) /*video_id has fk on video.id*/ 

detector (id, concepts) 

score (detector_id, keyframe_id, score) /*detector_id has fk on detector.id and keyframe_id has fk on keyframe.id*/ 

In essence, a video has multiple keyframes associated with it, and every keyframe has been scored by all of the detectors. Each detector has a set of concepts on which it scores the keyframes.
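For concreteness, a minimal DDL sketch of that schema (the column types are assumptions; the post only lists column names and foreign keys):

CREATE TABLE video (
    id   serial PRIMARY KEY,
    name text
);

CREATE TABLE keyframe (
    id       serial PRIMARY KEY,
    name     text,
    video_id integer NOT NULL REFERENCES video(id)
);

CREATE TABLE detector (
    id       serial PRIMARY KEY,
    concepts text
);

CREATE TABLE score (
    detector_id integer NOT NULL REFERENCES detector(id),
    keyframe_id integer NOT NULL REFERENCES keyframe(id),
    score       double precision
);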

Now I would like to find, in a single query if possible, the following:

Given an array of detector ids (say, at most 5), return the top 10 videos with the best combined score for those detectors. Score a video by averaging each detector's keyframe scores within that video, then summing those per-detector averages.

Example: for a video with 3 associated keyframes and 2 detectors with the following scores:

detector_id | keyframe_id | score 
1           | 1           | 0.0281 
1           | 2           | 0.0012 
1           | 3           | 0.0269 
2           | 1           | 0.1341 
2           | 2           | 0.9726 
2           | 3           | 0.7125 

This would give the video a score of:

sum(avg(0.0281, 0.0012, 0.0269), avg(0.1341, 0.9726, 0.7125)) 
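Working those numbers out: avg(0.0281, 0.0012, 0.0269) ≈ 0.0187 and avg(0.1341, 0.9726, 0.7125) ≈ 0.6064, so this example video would score ≈ 0.6251. (The result table below uses different illustrative numbers.)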

In the end, I would like a result like the following:

video_id | score 
1        | 0.417328 
2        | ... 

I imagine it has to be something like the following, but I haven't gotten it to work yet:

select 
    (select 
        (select sum(avg_score) as summed_score 
         from 
            (select avg(s.score) as avg_score 
             from score s 
             where s.detector_id = ANY(array[1,2,3,4,5]) 
               and s.keyframe_id = kf.id) x) 
     from keyframe kf 
     where kf.video_id = v.id) y 
from video v 

My score table is quite large (100M rows), so I would like this to be as fast as possible (all of the other options I have tried take minutes to complete). In total there are about 3,000 videos, 500 detectors, and roughly 15 keyframes per video.

If it is not possible to do this in under ~2 seconds, then I am also open to ways of restructuring the database schema. There will most likely be no insert/delete operations on the database at all.
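Since the data is effectively read-only, one restructuring option (my own sketch, not something from the thread; the table name video_detector_score is made up) would be to precompute the per-video, per-detector averages once, so the top-10 query never has to touch the 100M-row score table:

CREATE TABLE video_detector_score AS 
SELECT 
    k.video_id, 
    s.detector_id, 
    avg(s.score) AS avg_score 
FROM score s 
JOIN keyframe k ON k.id = s.keyframe_id 
GROUP BY k.video_id, s.detector_id; 

CREATE INDEX ON video_detector_score (detector_id, video_id); 

/*with ~3,000 videos and ~500 detectors this holds only ~1.5M rows, 
  and the query reduces to summing at most 5 rows per video:*/ 
SELECT video_id, sum(avg_score) AS score 
FROM video_detector_score 
WHERE detector_id = ANY(ARRAY[1,2,3,4,5]) 
GROUP BY video_id 
ORDER BY score DESC 
LIMIT 10; 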

EDIT

Thanks to GabrielsMessanger I have an answer; here is the query plan:

EXPLAIN (analyze, verbose) 
SELECT 
    v_id, sum(fd_avg_score) 
FROM (
    SELECT 
     v.id as v_id, k.id as k_id, d.id as d_id, 
     avg(s.score) as fd_avg_score 
    FROM 
     video v 
     JOIN keyframe k ON k.video_id = v.id 
     JOIN score s ON s.keyframe_id = k.id 
     JOIN detector d ON d.id = s.detector_id 
    WHERE 
     d.id = ANY(ARRAY[1,2,3,4,5]) /*here goes detector's array*/ 
    GROUP BY 
     v.id, 
     k.id, 
     d.id 
) sub 
GROUP BY 
    v_id 
; 

"GroupAggregate (cost=1865513.09..1910370.09 rows=200 width=12) (actual time=52141.684..52908.198 rows=2991 loops=1)" 
" Output: v.id, sum((avg(s.score)))" 
" Group Key: v.id" 
" -> GroupAggregate (cost=1865513.09..1893547.46 rows=1121375 width=20) (actual time=52141.623..52793.184 rows=1121375 loops=1)" 
"  Output: v.id, k.id, d.id, avg(s.score)" 
"  Group Key: v.id, k.id, d.id" 
"  -> Sort (cost=1865513.09..1868316.53 rows=1121375 width=20) (actual time=52141.613..52468.062 rows=1121375 loops=1)" 
"    Output: v.id, k.id, d.id, s.score" 
"    Sort Key: v.id, k.id, d.id" 
"    Sort Method: external merge Disk: 37232kB" 
"    -> Hash Join (cost=11821.18..1729834.13 rows=1121375 width=20) (actual time=120.706..51375.777 rows=1121375 loops=1)" 
"     Output: v.id, k.id, d.id, s.score" 
"     Hash Cond: (k.video_id = v.id)" 
"     -> Hash Join (cost=11736.89..1711527.49 rows=1121375 width=20) (actual time=119.862..51141.066 rows=1121375 loops=1)" 
"       Output: k.id, k.video_id, s.score, d.id" 
"       Hash Cond: (s.keyframe_id = k.id)" 
"       -> Nested Loop (cost=4186.70..1673925.96 rows=1121375 width=16) (actual time=50.878..50034.247 rows=1121375 loops=1)" 
"        Output: s.score, s.keyframe_id, d.id" 
"        -> Seq Scan on public.detector d (cost=0.00..11.08 rows=5 width=4) (actual time=0.011..0.079 rows=5 loops=1)" 
"          Output: d.id, d.concepts" 
"          Filter: (d.id = ANY ('{1,2,3,4,5}'::integer[]))" 
"          Rows Removed by Filter: 492" 
"        -> Bitmap Heap Scan on public.score s (cost=4186.70..332540.23 rows=224275 width=16) (actual time=56.040..9961.040 rows=224275 loops=5)" 
"          Output: s.detector_id, s.keyframe_id, s.score" 
"          Recheck Cond: (s.detector_id = d.id)" 
"          Rows Removed by Index Recheck: 34169904" 
"          Heap Blocks: exact=192845 lossy=928530" 
"          -> Bitmap Index Scan on score_index (cost=0.00..4130.63 rows=224275 width=0) (actual time=49.748..49.748 rows=224275 loops=5)" 
"           Index Cond: (s.detector_id = d.id)" 
"       -> Hash (cost=3869.75..3869.75 rows=224275 width=8) (actual time=68.924..68.924 rows=224275 loops=1)" 
"        Output: k.id, k.video_id" 
"        Buckets: 16384 Batches: 4 Memory Usage: 2205kB" 
"        -> Seq Scan on public.keyframe k (cost=0.00..3869.75 rows=224275 width=8) (actual time=0.003..33.662 rows=224275 loops=1)" 
"          Output: k.id, k.video_id" 
"     -> Hash (cost=46.91..46.91 rows=2991 width=4) (actual time=0.834..0.834 rows=2991 loops=1)" 
"       Output: v.id" 
"       Buckets: 1024 Batches: 1 Memory Usage: 106kB" 
"       -> Seq Scan on public.video v (cost=0.00..46.91 rows=2991 width=4) (actual time=0.005..0.417 rows=2991 loops=1)" 
"        Output: v.id" 
"Planning time: 2.136 ms" 
"Execution time: 52914.840 ms" 

Answer


Disclaimer:

My final answer is based on the comments and an extended discussion in chat with the author. One thing should be noted: each keyframe_id is assigned to only one video.

Original answer:

Isn't it as simple as the following query?

SELECT 
    v_id, sum(fd_avg_score) 
FROM (
    SELECT 
     v.id as v_id, k.id as k_id, s.detector_id as d_id, 
     avg(s.score) as fd_avg_score 
    FROM 
     video v 
     JOIN keyframe k ON k.video_id = v.id 
     JOIN score s ON s.keyframe_id = k.id 
    WHERE 
     s.detector_id = ANY(ARRAY[1,2,3,4,5]) /*here goes detector's array*/ 
    GROUP BY 
     v.id, 
     k.id, 
     detector_id 
) sub 
GROUP BY 
    v_id 
ORDER BY 
    sum(fd_avg_score) DESC /*without an ORDER BY, LIMIT 10 would return 10 arbitrary videos*/ 
LIMIT 10 
; 
Subquery

First we join videos with keyframes, and keyframes with scores. We compute the average score per video, per keyframe, and per detector (as you described). Finally, in the outer query, we sum those avg_score values per video.

Performance

As the author pointed out, he has PRIMARY KEYs on the id column of every table, and also a composite index on score(detector_id, keyframe_id). That may be enough for this query to run fast.

However, during testing the author found he needed further optimization. So, two things:

  1. Remember to always run VACUUM ANALYZE on your tables, especially after inserting 100M rows (as with the score table). So at the very least run VACUUM ANALYZE score.
  2. To optimize further, we can change the composite index on score(detector_id, keyframe_id) to a composite index on score(detector_id, keyframe_id, score). It may allow PostgreSQL to use an Index Only Scan while computing the averages; a sketch follows this list.
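That index change might look like this (the name score_covering_idx is made up; score_index is the existing index seen in the query plan above):

CREATE INDEX score_covering_idx ON score (detector_id, keyframe_id, score); 
DROP INDEX score_index;  /*the old two-column index from the plan*/ 
VACUUM ANALYZE score;    /*index-only scans also need an up-to-date visibility map*/ 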

What is f here, did you mean k? I am getting an error: 'ERROR: missing FROM-clause entry for table "f"' – appel


Yes, I meant 'k'. The answer has been corrected. –


Thanks, this seems to give good results, but the query takes a long time (20s). Do you have any tips on how to improve the speed (I've heard indexes might help)? – appel