2010-08-06

Delete multiple rows without locking them

In PostgreSQL I have a query like the following, which will delete 250k rows from a 1M-row table:

DELETE FROM table WHERE key = 'needle'; 

The query takes more than an hour to execute, and during that time the affected rows are locked for writing. That is bad, because it means that many update queries have to wait for the big delete query to finish (and then they will fail because the rows disappeared out from under them, but that is OK). I need a way to split this big query into smaller parts so that they interfere with the update queries as little as possible. For example, if the delete could be split into chunks of 1000 rows each, the other update queries would at most have to wait for a delete involving 1000 rows.

DELETE FROM table WHERE key = 'needle' LIMIT 10000; 

That query would work nicely, but alas it does not exist in Postgres.

Answers


Try a subquery and use a unique condition:

DELETE FROM 
    table 
WHERE 
    id IN (SELECT id FROM table WHERE key = 'needle' LIMIT 10000); 
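To clear out all of the matching rows this way, the statement can simply be repeated until it affects nothing, with each batch committed before the next one starts so its locks are released. A minimal sketch in Python with psycopg2, assuming a table named mytable with a unique id column (both names are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=mydb")
conn.autocommit = True  # each DELETE commits on its own, so locks are held only per batch
cur = conn.cursor()

while True:
    cur.execute("""
        DELETE FROM mytable
        WHERE id IN (SELECT id FROM mytable WHERE key = %s LIMIT 10000)
    """, ('needle',))
    if cur.rowcount == 0:
        break  # no matching rows left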

Perfect! (I can't believe I didn't think of that myself.) – 2010-09-14 11:49:44


Frak's answer is good, but this may be faster. It requires 8.4 or later because of the window function support; a sketch of the approach in Python with psycopg2:

import psycopg2

conn = psycopg2.connect("dbname=mydb")
conn.autocommit = True  # commit each DELETE on its own so locks are released per chunk
cur = conn.cursor()

# Collect the id of every 8192nd row where key = 'needle';
# these ids become the chunk boundaries.
cur.execute("""
    SELECT id FROM (
        SELECT id, row_number() OVER (ORDER BY id) AS rn
        FROM mytable
        WHERE key = %s
    ) AS numbered
    WHERE rn %% 8192 = 0
    ORDER BY id
""", ('needle',))
boundaries = [r[0] for r in cur.fetchall()]
boundaries.append(9223372036854775807)  # guard (bigint max) so the last partial chunk is covered

last_id = 0
for boundary in boundaries:
    # last_id hints the query planner that there are no rows with a smaller id,
    # so it is less likely to use a full table scan.
    cur.execute(
        "DELETE FROM mytable WHERE id <= %s AND id > %s AND key = %s",
        (boundary, last_id, 'needle'),
    )
    last_id = boundary
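With 250k matching rows, 8192-row boundaries work out to roughly 30 DELETE statements, so each one holds its write locks only briefly; 8192 is just the chunk size chosen here and can be tuned.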

This is premature optimization, an evil thing. Beware.