Adding more indexes to Postgres causes "out of shared memory" error

I'm trying to optimize a fairly complex query on Postgres 9.2. explain analyze gives this plan (explain.depesz.com):
Merge Right Join (cost=194965639.35..211592151.26 rows=420423258 width=616) (actual time=15898.283..15920.603 rows=17 loops=1)
  Merge Cond: ((((p.context -> 'device'::text)) = ((s.context -> 'device'::text))) AND (((p.context -> 'physical_port'::text)) = ((s.context -> 'physical_port'::text))))
  -> Sort (cost=68925.49..69073.41 rows=59168 width=393) (actual time=872.289..877.818 rows=39898 loops=1)
        Sort Key: ((p.context -> 'device'::text)), ((p.context -> 'physical_port'::text))
        Sort Method: quicksort Memory: 27372kB
        -> Seq Scan on ports__status p (cost=0.00..64235.68 rows=59168 width=393) (actual time=0.018..60.931 rows=41395 loops=1)
  -> Materialize (cost=194896713.86..199620346.93 rows=284223403 width=299) (actual time=15023.710..15024.779 rows=17 loops=1)
        -> Merge Left Join (cost=194896713.86..198909788.42 rows=284223403 width=299) (actual time=15023.705..15024.765 rows=17 loops=1)
              Merge Cond: ((((s.context -> 'device'::text)) = ((l1.context -> 'device'::text))) AND (((s.context -> 'physical_port'::text)) = ((l1.context -> 'physical_port'::text))))
              -> Sort (cost=194894861.42..195605419.92 rows=284223403 width=224) (actual time=14997.225..14997.230 rows=17 loops=1)
                    Sort Key: ((s.context -> 'device'::text)), ((s.context -> 'physical_port'::text))
                    Sort Method: quicksort Memory: 33kB
                    -> GroupAggregate (cost=100001395.98..122028709.71 rows=284223403 width=389) (actual time=14997.120..14997.186 rows=17 loops=1)
                          -> Sort (cost=100001395.98..100711954.49 rows=284223403 width=389) (actual time=14997.080..14997.080 rows=17 loops=1)
                                Sort Key: ((d.context -> 'hostname'::text)), ((a.context -> 'ip_address'::text)), ((a.context -> 'mac_address'::text)), ((s.context -> 'device'::text)), ((s.context -> 'physical_port'::text)), s.created_at, s.updated_at, d.created_at, d.updated_at
                                Sort Method: quicksort Memory: 33kB
                                -> Merge Join (cost=339026.99..9576678.30 rows=284223403 width=389) (actual time=14996.710..14996.749 rows=17 loops=1)
                                      Merge Cond: (((a.context -> 'mac_address'::text)) = ((s.context -> 'mac_address'::text)))
                                      -> Sort (cost=15038.32..15136.00 rows=39072 width=255) (actual time=23.556..23.557 rows=1 loops=1)
                                            Sort Key: ((a.context -> 'mac_address'::text))
                                            Sort Method: quicksort Memory: 25kB
                                            -> Hash Join (cost=471.88..12058.33 rows=39072 width=255) (actual time=13.482..23.548 rows=1 loops=1)
                                                  Hash Cond: ((a.context -> 'ip_address'::text) = (d.context -> 'ip_address'::text))
                                                  -> Seq Scan on arps__arps a (cost=0.00..8132.39 rows=46239 width=157) (actual time=0.007..11.191 rows=46259 loops=1)
                                                  -> Hash (cost=469.77..469.77 rows=169 width=98) (actual time=0.035..0.035 rows=1 loops=1)
                                                        Buckets: 1024 Batches: 1 Memory Usage: 1kB
                                                        -> Bitmap Heap Scan on ipam__dns d (cost=9.57..469.77 rows=169 width=98) (actual time=0.023..0.023 rows=1 loops=1)
                                                              Recheck Cond: ((context -> 'hostname'::text) = 'zglast-oracle03.slac.stanford.edu'::text)
                                                              -> Bitmap Index Scan on ipam__dns_hostname_index (cost=0.00..9.53 rows=169 width=0) (actual time=0.017..0.017 rows=1 loops=1)
                                                                    Index Cond: ((context -> 'hostname'::text) = 'blah'::text)
                                      -> Sort (cost=323988.67..327625.84 rows=1454870 width=134) (actual time=14973.118..14973.120 rows=18 loops=1)
                                            Sort Key: ((s.context -> 'mac_address'::text))
                                            Sort Method: external sort Disk: 214176kB
                                            -> Result (cost=0.00..175064.84 rows=1454870 width=134) (actual time=0.016..1107.604 rows=1265154 loops=1)
                                                  -> Append (cost=0.00..175064.84 rows=1454870 width=134) (actual time=0.013..796.578 rows=1265154 loops=1)
                                                        -> Seq Scan on spanning_tree__neighbour s (cost=0.00..0.00 rows=1 width=98) (actual time=0.000..0.000 rows=0 loops=1)
                                                              Filter: ((context -> 'physical_port'::text) IS NOT NULL)
                                                        -> Seq Scan on spanning_tree__neighbour__vlan38 s (cost=0.00..469.32 rows=1220 width=129) (actual time=0.011..1.019 rows=823 loops=1)
                                                              Filter: ((context -> 'physical_port'::text) IS NOT NULL)
                                                              Rows Removed by Filter: 403
                                                        -> Seq Scan on spanning_tree__neighbour__vlan3 s (cost=0.00..270.20 rows=1926 width=139) (actual time=0.017..0.971 rows=1882 loops=1)
                                                              Filter: ((context -> 'physical_port'::text) IS NOT NULL)
                                                              Rows Removed by Filter: 54
                                                        -> Seq Scan on spanning_tree__neighbour__vlan466 s (cost=0.00..131.85 rows=306 width=141) (actual time=0.032..0.340 rows=276 loops=1)
                                                              Filter: ((context -> 'physical_port'::text) IS NOT NULL)
                                                              Rows Removed by Filter: 32
                                                        -> Seq Scan on spanning_tree__neighbour__vlan465 s (cost=0.00..208.57 rows=842 width=142) (actual time=0.005..0.622 rows=768 loops=1)
                                                              Filter: ((context -> 'physical_port'::text) IS NOT NULL)
                                                              Rows Removed by Filter: 78
                                                        -> Seq Scan on spanning_tree__neighbour__vlan499 s (cost=0.00..245.04 rows=481 width=142) (actual time=0.017..0.445 rows=483 loops=1)
                                                              Filter: ((context -> 'physical_port'::text) IS NOT NULL)
                                                        -> Seq Scan on spanning_tree__neighbour__vlan176 s (cost=0.00..346.36 rows=2576 width=131) (actual time=0.008..1.443 rows=2051 loops=1)
                                                              Filter: ((context -> 'physical_port'::text) IS NOT NULL)
                                                              Rows Removed by Filter: 538
I'm a bit of a novice at reading plans, but I think it all comes down to my spanning_tree__neighbour table (which I've partitioned into many per-VLAN child tables). You can see it's doing a seq scan on each of them.
So I wrote a quick-and-dirty bash script to create indexes on the child tables:
create index spanning_tree__neighbour__vlan1_physical_port_index ON spanning_tree__neighbour__vlan1((context->'physical_port')) WHERE ((context->'physical_port') IS NOT NULL);
create index spanning_tree__neighbour__vlan2_physical_port_index ON spanning_tree__neighbour__vlan2((context->'physical_port')) WHERE ((context->'physical_port') IS NOT NULL);
create index spanning_tree__neighbour__vlan3_physical_port_index ON spanning_tree__neighbour__vlan3((context->'physical_port')) WHERE ((context->'physical_port') IS NOT NULL);
...
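The original bash script isn't shown; a minimal sketch of a generator for these statements might look like this (the table naming pattern is taken from the question, the `gen_index_sql` function name and the `psql` usage line are assumptions):

```shell
#!/bin/sh
# Emit one CREATE INDEX statement per numbered vlan child table.
# Usage: gen_index_sql <number-of-child-tables>
gen_index_sql() {
    i=1
    while [ "$i" -le "$1" ]; do
        printf "create index spanning_tree__neighbour__vlan%s_physical_port_index ON spanning_tree__neighbour__vlan%s((context->'physical_port')) WHERE ((context->'physical_port') IS NOT NULL);\n" "$i" "$i"
        i=$((i + 1))
    done
}

# e.g. feed the generated DDL straight into psql:
#   gen_index_sql 4096 | psql mydb
gen_index_sql 3
```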
But after I had created a hundred or so of them, every query started failing with:
=> explain analyze select * from hosts where hostname='blah';
WARNING: out of shared memory
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
Time: 34.757 ms
Would setting max_locks_per_transaction actually help? Given that my partitioned table has up to 4096 child tables, what value should I use?
Or am I reading the plan wrong?
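For context on sizing: the shared lock table holds roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks, and a query takes one lock per relation it touches, including each partition and each of its indexes, so the default of 64 is easily exhausted here. A sketch of the change (the value 512 is an illustrative assumption, not a recommendation; size it against your child-table count and connection count):

```
# postgresql.conf -- changing this requires a server restart
# lock table capacity ~ max_locks_per_transaction *
#   (max_connections + max_prepared_transactions); default is 64
max_locks_per_transaction = 512
```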
Your statistics seem to be off. – wildplasser
What is your PostgreSQL version? **Always** include your PostgreSQL version in the question. wildplasser is right too, your statistics are way off. Run 'ANALYZE;' and find out why autovacuum isn't doing automatic analyzes for you. Also, yes, if your queries may involve a large number of relations you'll have to raise 'max_locks_per_transaction' and pay the associated cost in shared memory usage. –
Thanks for the help with the explain link... err, analyze! Very useful. I'm on 9.2. The explain above is only part of the full output – I've linked to the complete version. So if I'm reading it correctly, it spends most of its time (80%) sorting the data... how can I reduce that? Cheers, – yee379
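On the sorting cost: the plan's big sort reports "Sort Method: external sort Disk: 214176kB", i.e. it spilled to disk because the sort didn't fit in work_mem. One hedged thing to try, once the statistics are fixed, is raising work_mem for just that session (the 256MB figure is an assumption; tune it against available RAM and the number of concurrent backends, since each sort node in each backend can use that much):

```
-- session-level only, so the server-wide default is untouched
SET work_mem = '256MB';
-- ... run the query here, then put it back:
RESET work_mem;
```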