I've noticed that COUNT(*) is not an optimized query for deep SQL like this; COUNT(*) in PostgreSQL is simply too slow. Here's the SQL I'm working with:
SELECT COUNT(*) FROM "items"
INNER JOIN (
  SELECT c.* FROM companies c
  LEFT OUTER JOIN company_groups ON c.id = company_groups.company_id
  WHERE company_groups.has_restriction IS NULL
     OR company_groups.has_restriction = 'f'
     OR company_groups.company_id = 1999
     OR company_groups.group_id IN ('3','2')
  GROUP BY c.id
) AS companies ON companies.id = items.company_id
LEFT OUTER JOIN favs ON items.id = favs.item_id AND favs.user_id = 999 AND favs.is_visible = TRUE
WHERE "items"."type" IN ('Fashion')
  AND "items"."visibility" = 't'
  AND "items"."is_hidden" = 'f'
  AND (items.depth IS NULL OR (items.depth >= '0' AND items.depth <= '100'))
  AND (items."table" IS NULL OR (items."table" >= '0' AND items."table" <= '100'))
  AND (items.company_id NOT IN (199,200,201))
This query is counting roughly 350,000 records in the database.
I'm using Rails as the framework, so the SQL I compose fires a COUNT query every time I call results.count, because I paginate with LIMIT and OFFSET. The base query itself loads in under 32.0 ms (which is blazing fast); it's only the COUNT that is slow.
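For illustration, here is a minimal sketch of that pattern; the Item model, the scope, and the page size of 20 are assumptions, not the actual application code:

# Sketch only: model name, scope, and page size are illustrative assumptions.
scope = Item.where(type: 'Fashion', visibility: true, is_hidden: false)

scope.limit(20).offset(page * 20).to_a  # base query with LIMIT/OFFSET: fast (~32 ms)
scope.count                             # fires a separate SELECT COUNT(*) over the whole filtered set: slow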
Here's the EXPLAIN ANALYSE:
Merge Join  (cost=70743.22..184962.02 rows=7540499 width=4) (actual time=4018.351..4296.963 rows=360323 loops=1)
  Merge Cond: (c.id = items.company_id)
  ->  Group  (cost=0.56..216.21 rows=4515 width=4) (actual time=0.357..5.165 rows=4501 loops=1)
        Group Key: c.id
        ->  Merge Left Join  (cost=0.56..204.92 rows=4515 width=4) (actual time=0.303..2.590 rows=4504 loops=1)
              Merge Cond: (c.id = company_groups.company_id)
              Filter: ((company_groups.has_restriction IS NULL) OR (NOT company_groups.has_restriction) OR (company_groups.company_id = 1999) OR (company_groups.group_id = ANY ('{3,2}'::integer[])))
              Rows Removed by Filter: 10
              ->  Index Only Scan using companies_pkey on companies c  (cost=0.28..128.10 rows=4521 width=4) (actual time=0.155..0.941 rows=4508 loops=1)
                    Heap Fetches: 3
              ->  Index Scan using index_company_groups_on_company_id on company_groups  (cost=0.28..50.14 rows=879 width=9) (actual time=0.141..0.480 rows=878 loops=1)
  ->  Materialize  (cost=70742.66..72421.11 rows=335690 width=8) (actual time=4017.964..4216.381 rows=362180 loops=1)
        ->  Sort  (cost=70742.66..71581.89 rows=335690 width=8) (actual time=4017.955..4140.168 rows=362180 loops=1)
              Sort Key: items.company_id
              Sort Method: external merge  Disk: 6352kB
              ->  Hash Left Join  (cost=1.05..35339.74 rows=335690 width=8) (actual time=0.617..3588.634 rows=362180 loops=1)
                    Hash Cond: (items.id = favs.item_id)
                    ->  Seq Scan on items  (cost=0.00..34079.84 rows=335690 width=8) (actual time=0.504..3447.355 rows=362180 loops=1)
                          Filter: (visibility AND (NOT is_hidden) AND ((type)::text = 'Fashion'::text) AND (company_id <> ALL ('{199,200,201}'::integer[])) AND ((depth IS NULL) OR ((depth >= '0'::numeric) AND (depth <= '100'::nume (...)
                          Rows Removed by Filter: 5814
                    ->  Hash  (cost=1.04..1.04 rows=1 width=4) (actual time=0.009..0.009 rows=0 loops=1)
                          Buckets: 1024  Batches: 1  Memory Usage: 8kB
                          ->  Seq Scan on favs  (cost=0.00..1.04 rows=1 width=4) (actual time=0.008..0.008 rows=0 loops=1)
                                Filter: (is_visible AND (user_id = 999))
                                Rows Removed by Filter: 3
Planning time: 3.526 ms
Execution time: 4397.849 ms
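One thing that stands out in the plan: the sort spills to disk ("Sort Method: external merge  Disk: 6352kB"), which usually means work_mem is too small for this query. A quick session-level check, sketched here assuming a Rails console and an illustrative 64MB value (an assumption, not a recommendation):

# The disk spill in the plan suggests work_mem is too low for this sort.
# Raising it for the current connection is one way to test; 64MB is an
# assumed figure, not a tuning recommendation.
ActiveRecord::Base.connection.execute("SET work_mem = '64MB'")
# ...then re-run the query and EXPLAIN ANALYSE to see if the sort stays in memory.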
The output is 3210. Please advise on how I can make this work faster!
P.S.: All of the columns such as type, visibility, is_hidden, table, depth, etc. are indexed. Thanks in advance!
Is the use of some **pagination** gem (like 'kaminari' or 'will_paginate') shooting you in the foot? –
@PavelMikhailyuk We do have Kaminari installed, but we don't use it for this; we use 'LIMIT' and 'OFFSET' directly. –
The idea is to avoid calling '#count' on the relation. '#count' is usually needed for pagination, so you would either have to replace the '#count'-based page-number calculation with "infinite scroll", or cache the total count in the database or in memory. –
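A minimal sketch of the caching idea from the comment above; the cache key, the 10-minute expiry, and the helper name are illustrative assumptions:

# Sketch of the commenter's suggestion: cache the expensive total so the
# slow COUNT(*) runs only on a cache miss. Key and TTL are assumptions.
def cached_items_count(scope)
  Rails.cache.fetch('items/filtered_count', expires_in: 10.minutes) do
    scope.count
  end
end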