
I have a table with 3 million rows and 1.3 GB in size. I am running Postgres 9.3 on a laptop with 4 GB of RAM. The query below is slow (a slow index scan):

explain analyze 
select act_owner_id from cnt_contacts where act_owner_id = 2 

I have a B-tree index on cnt_contacts.act_owner_id, defined as:

CREATE INDEX cnt_contacts_idx_act_owner_id 
    ON public.cnt_contacts USING btree (act_owner_id, status_id); 

The query takes about 5 seconds:

 
Bitmap Heap Scan on cnt_contacts (cost=2598.79..86290.73 rows=6208 width=4) (actual time=5865.617..5875.302 rows=5444 loops=1) 
    Recheck Cond: (act_owner_id = 2) 
    -> Bitmap Index Scan on cnt_contacts_idx_act_owner_id (cost=0.00..2597.24 rows=6208 width=0) (actual time=5865.407..5865.407 rows=5444 loops=1) 
     Index Cond: (act_owner_id = 2) 
Total runtime: 5875.684 ms
Why does it take so long to run?

work_mem = 1024MB; 
shared_buffers = 128MB; 
effective_cache_size = 1024MB 
seq_page_cost = 1.0   # measured on an arbitrary scale 
random_page_cost = 15.0   # same scale as above 
cpu_tuple_cost = 3.0 

What is the definition of the 'cnt_contacts_idx_act_owner_id' index? –


CREATE INDEX cnt_contacts_idx_act_owner_id ON public.cnt_contacts USING btree (act_owner_id, status_id); –


You should create another index with only 'act_owner_id'. – frlan
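
For reference, a minimal sketch of the single-column index frlan is suggesting (the index name here is made up):

CREATE INDEX cnt_contacts_idx_act_owner_id_only 
    ON public.cnt_contacts USING btree (act_owner_id); 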

Answers

2

You are selecting 5444 records scattered over a 1.3 GB table, on a laptop. How long do you expect that to take?

It looks like your index is not cached, either because it cannot stay in the cache or because this is the first time that part of the index has been used. What happens if you run exactly the same query repeatedly? The same query, but with a different constant?

Running the query under explain (analyze, buffers) would help get more information, especially if you turn on track_io_timing first.
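
For illustration, a minimal sketch of that suggestion against the query from the question (track_io_timing can be enabled per session by a superuser):

-- enable I/O timing so the plan reports time spent reading blocks 
SET track_io_timing = on; 

-- re-run the query with buffer statistics included in the plan 
EXPLAIN (ANALYZE, BUFFERS) 
SELECT act_owner_id FROM cnt_contacts WHERE act_owner_id = 2; 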

0

OK: you have a big table, an index, and a long execution time, and PG's plan itself looks reasonable. Let's think about how to improve the plan and bring the execution time down. You write and delete rows; PG writes and removes tuples, so both the table and the index can become bloated. For fast searches PG loads the index into shared buffers, so you need to keep your indexes as clean as possible, and for a select PG first reads pages into shared buffers and then searches them there. Try to tune the buffer memory and reduce index and table bloat; keep the database clean.
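
As a hedged starting point only (not a tuned recommendation): with 4 GB of RAM, shared_buffers is commonly set to roughly a quarter of RAM, well above the 128MB shown in the question; on 9.3 changing it means editing postgresql.conf and restarting:

shared_buffers = 1GB    # ~25% of the 4GB of RAM; requires a restart 
effective_cache_size = 2GB   # planner hint about the OS cache, not an allocation 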

What can you do? Think about the following:

1) Check your indexes for duplicates and make sure they have good selectivity (see the follow-up note after the query):

WITH table_scans as (
    SELECT relid, 
     tables.idx_scan + tables.seq_scan as all_scans, 
     (tables.n_tup_ins + tables.n_tup_upd + tables.n_tup_del) as writes, 
       pg_relation_size(relid) as table_size 
     FROM pg_stat_user_tables as tables 
), 
all_writes as (
    SELECT sum(writes) as total_writes 
    FROM table_scans 
), 
indexes as (
    SELECT idx_stat.relid, idx_stat.indexrelid, 
     idx_stat.schemaname, idx_stat.relname as tablename, 
     idx_stat.indexrelname as indexname, 
     idx_stat.idx_scan, 
     pg_relation_size(idx_stat.indexrelid) as index_bytes, 
     indexdef ~* 'USING btree' AS idx_is_btree 
    FROM pg_stat_user_indexes as idx_stat 
     JOIN pg_index 
      USING (indexrelid) 
     JOIN pg_indexes as indexes 
      ON idx_stat.schemaname = indexes.schemaname 
       AND idx_stat.relname = indexes.tablename 
       AND idx_stat.indexrelname = indexes.indexname 
    WHERE pg_index.indisunique = FALSE 
), 
index_ratios AS (
SELECT schemaname, tablename, indexname, 
    idx_scan, all_scans, 
    round((CASE WHEN all_scans = 0 THEN 0.0::NUMERIC 
     ELSE idx_scan::NUMERIC/all_scans * 100 END),2) as index_scan_pct, 
    writes, 
    round((CASE WHEN writes = 0 THEN idx_scan::NUMERIC ELSE idx_scan::NUMERIC/writes END),2) 
     as scans_per_write, 
    pg_size_pretty(index_bytes) as index_size, 
    pg_size_pretty(table_size) as table_size, 
    idx_is_btree, index_bytes 
    FROM indexes 
    JOIN table_scans 
    USING (relid) 
), 
index_groups AS (
SELECT 'Never Used Indexes' as reason, *, 1 as grp 
FROM index_ratios 
WHERE 
    idx_scan = 0 
    and idx_is_btree 
UNION ALL 
SELECT 'Low Scans, High Writes' as reason, *, 2 as grp 
FROM index_ratios 
WHERE 
    scans_per_write <= 1 
    and index_scan_pct < 10 
    and idx_scan > 0 
    and writes > 100 
    and idx_is_btree 
UNION ALL 
SELECT 'Seldom Used Large Indexes' as reason, *, 3 as grp 
FROM index_ratios 
WHERE 
    index_scan_pct < 5 
    and scans_per_write > 1 
    and idx_scan > 0 
    and idx_is_btree 
    and index_bytes > 100000000 
UNION ALL 
SELECT 'High-Write Large Non-Btree' as reason, index_ratios.*, 4 as grp 
FROM index_ratios, all_writes 
WHERE 
    (writes::NUMERIC/(total_writes + 1)) > 0.02 
    AND NOT idx_is_btree 
    AND index_bytes > 100000000 
ORDER BY grp, index_bytes DESC) 
SELECT reason, schemaname, tablename, indexname, 
    index_scan_pct, scans_per_write, index_size, table_size 
FROM index_groups; 
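
As a follow-up to the report above: if it lists an index under 'Never Used Indexes' or 'Low Scans, High Writes', a hedged next step is to drop it. The index name below is a placeholder; DROP INDEX CONCURRENTLY (available since 9.2) avoids blocking writes:

-- placeholder name: substitute an index the report flagged as unused 
DROP INDEX CONCURRENTLY public.some_unused_index; 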

2) Check whether the table and the index are bloated (a rebuild sketch follows the query):

 SELECT 
     current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/ 
     ROUND((CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages::FLOAT/otta END)::NUMERIC,1) AS tbloat, 
     CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::BIGINT END AS wastedbytes, 
     iname, /*ituples::bigint, ipages::bigint, iotta,*/ 
     ROUND((CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages::FLOAT/iotta END)::NUMERIC,1) AS ibloat, 
     CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes 
    FROM (
     SELECT 
     schemaname, tablename, cc.reltuples, cc.relpages, bs, 
     CEIL((cc.reltuples*((datahdr+ma- 
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::FLOAT)) AS otta, 
     COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages, 
     COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::FLOAT)),0) AS iotta -- very rough approximation, assumes all cols 
     FROM (
     SELECT 
      ma,bs,schemaname,tablename, 
      (datawidth+(hdr+ma-(CASE WHEN hdr%ma=0 THEN ma ELSE hdr%ma END)))::NUMERIC AS datahdr, 
      (maxfracsum*(nullhdr+ma-(CASE WHEN nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2 
     FROM (
      SELECT 
      schemaname, tablename, hdr, ma, bs, 
      SUM((1-null_frac)*avg_width) AS datawidth, 
      MAX(null_frac) AS maxfracsum, 
      hdr+(
       SELECT 1+COUNT(*)/8 
       FROM pg_stats s2 
       WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename 
      ) AS nullhdr 
      FROM pg_stats s, (
      SELECT 
       (SELECT current_setting('block_size')::NUMERIC) AS bs, 
       CASE WHEN SUBSTRING(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr, 
       CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma 
      FROM (SELECT version() AS v) AS foo 
     ) AS constants 
      GROUP BY 1,2,3,4,5 
     ) AS foo 
    ) AS rs 
     JOIN pg_class cc ON cc.relname = rs.tablename 
     JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema' 
     LEFT JOIN pg_index i ON indrelid = cc.oid 
     LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid 
    ) AS sml 
    ORDER BY wastedbytes DESC 
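
If the index shows significant bloat (ibloat well above 1, or large wastedibytes), one way to rebuild it on 9.3 without a long exclusive lock is to create a replacement concurrently and swap it in; a sketch using the index from this question:

CREATE INDEX CONCURRENTLY cnt_contacts_idx_act_owner_id_new 
    ON public.cnt_contacts USING btree (act_owner_id, status_id); 
DROP INDEX CONCURRENTLY public.cnt_contacts_idx_act_owner_id; 
ALTER INDEX cnt_contacts_idx_act_owner_id_new RENAME TO cnt_contacts_idx_act_owner_id; 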

3) Are dead tuples being cleaned off disk? Is it time for a VACUUM? (A sketch follows the query.)

SELECT 
    relname AS TableName 
    ,n_live_tup AS LiveTuples 
    ,n_dead_tup AS DeadTuples 
FROM pg_stat_user_tables; 
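
If DeadTuples is large relative to LiveTuples, a manual vacuum of the table from this question is a reasonable next step (plain VACUUM does not block reads or writes; ANALYZE refreshes planner statistics):

VACUUM (VERBOSE, ANALYZE) cnt_contacts; 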

4) Think about selectivity. If you have 10 records in the database and 8 of them have id = 2, the index is not selective for that value and PG ends up reading those 8 rows anyway; a query for id != 2, on the other hand, would use the index well. Try to build indexes with good selectivity.
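
Purely as an illustration of that hypothetical 8-out-of-10 case (it is not this question's real distribution, and the index name is made up), a partial index can keep the overly common value out of the index entirely:

-- useful only when queries mostly look for the uncommon values 
CREATE INDEX cnt_contacts_idx_act_owner_id_rare 
    ON public.cnt_contacts (act_owner_id) 
    WHERE act_owner_id <> 2; 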

5) Use the right column types for your data. If a column can be stored in a type that takes fewer bytes, convert it.
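
A hedged sketch of such a conversion (the column name is hypothetical, since the question does not say which columns are oversized; the change takes an exclusive lock and rewrites the table):

ALTER TABLE public.cnt_contacts 
    ALTER COLUMN some_bigint_column TYPE integer; 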

6) Finally, just review your database and its condition. Check this page to start. Look for unused data in your tables, clean up the indexes, and check their selectivity. Try a BRIN index on the data, or try recreating the indexes.
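
One caveat on that last suggestion: BRIN indexes were introduced in PostgreSQL 9.5, so they are not available on the 9.3 instance in this question. On 9.5 or later, a minimal sketch (index name made up) would be:

CREATE INDEX cnt_contacts_brin_act_owner_id 
    ON public.cnt_contacts USING brin (act_owner_id); 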