I have 3 tables: requests, steps and payloads. Each request has N steps (so one-to-many), and each step has exactly 1 payload (one-to-one).
Now, whenever I want to filter by the payload body, the execution time is terrible.
Here is the simplified query:
select rh.id
from request_history_step_payload rhsp
join request_history_step rhs on rhs.id = rhsp.step_id
join request_history rh on rhs.request_id = rh.id
where rh.id > 35000 and rhs.step_type = 'CONSUMER_REQUEST' and rhsp.payload like '%09141%'
Here is the EXPLAIN ANALYZE (run immediately after a VACUUM ANALYZE):
Nested Loop  (cost=0.71..50234.28 rows=1 width=8) (actual time=120.093..2494.929 rows=12 loops=1)
  ->  Nested Loop  (cost=0.42..50233.32 rows=3 width=8) (actual time=120.083..2494.900 rows=14 loops=1)
        ->  Seq Scan on request_history_step_payload rhsp  (cost=0.00..50098.28 rows=16 width=8) (actual time=120.063..2494.800 rows=25 loops=1)
              Filter: ((payload)::text ~~ '%09141%'::text)
              Rows Removed by Filter: 164512
        ->  Index Scan using request_history_step_pkey on request_history_step rhs  (cost=0.42..8.44 rows=1 width=16) (actual time=0.003..0.003 rows=1 loops=25)
              Index Cond: (id = rhsp.step_id)
              Filter: ((step_type)::text = 'CONSUMER_REQUEST'::text)
              Rows Removed by Filter: 0
  ->  Index Only Scan using request_history_pkey on request_history rh  (cost=0.29..0.32 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=14)
        Index Cond: ((id = rhs.request_id) AND (id > 35000))
        Heap Fetches: 0
Planning Time: 0.711 ms
Execution Time: 2494.964 ms
Now I'd like to somehow hint the planner into applying the LIKE operation as the last leg, but can't think of any way to do it. I've tried various joins, reordered them, and moved the conditions back and forth between the ON and WHERE clauses. All to no avail! In every case it first goes through all the rows of the payload table, which, given the other conditions, is obviously the worst possible idea, since they could drastically reduce the number of LIKE operations that have to be applied. So I would expect the id condition to be applied first, which already cuts off 90% of all records; then the step_type condition, which cuts off about 85% of the rest; so the LIKE would have to be applied to less than 5% of all payloads.
How would I go about this? I'm on Postgres 11.
UPD: it was suggested that I add the index information for these tables, so:
request_history: 2 unique indexes on the id field (no idea why there are 2)
request_history_step: 2 unique indexes, both on the id field
request_history_step_payload: 1 unique index on the id field
UPD2: the step and payload tables also have FKs defined (on payload.step_id -> step.id and step.request_id -> request.id, respectively)
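For reference, the layout described above can be sketched as DDL along the following lines (the column types and the exact shape of the tables are my assumptions, inferred from the queries; the real tables surely differ in detail):

```sql
-- Hypothetical reconstruction of the three tables as described above;
-- actual column types and additional columns are not known from the question.
create table request_history (
    id bigint primary key
);

create table request_history_step (
    id         bigint primary key,
    request_id bigint not null references request_history (id),
    step_type  varchar(64) not null
);

create table request_history_step_payload (
    id      bigint primary key,
    step_id bigint not null references request_history_step (id),
    payload varchar
);
```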
I've also tried a few (simplified) queries with sub-selects:
explain analyze select rhs.id from request_history_step rhs
join (select step_id from request_history_step_payload rhsp where rhsp.payload like '%09141%') rhsp on rhs.id = rhsp.step_id
where rhs.step_type = 'CONSUMER_REQUEST';
explain analyze select rhsp.step_id from request_history_step_payload rhsp
join (select id from request_history_step rhs where rhs.step_type = 'CONSUMER_REQUEST') rhs on rhs.id = rhsp.step_id
where rhsp.payload like '%09141%';
explain analyze select rhsp.step_id from request_history_step_payload rhsp
where rhsp.step_id in (select id from request_history_step rhs where rhs.step_type = 'CONSUMER_REQUEST')
and rhsp.payload like '%09141%';
(and also with JOIN LATERAL instead of JOIN): each of them produces exactly the same plan, a nested loop whose "outer" (first) leg is the SeqScan. This is driving me crazy. Why does it run the most expensive operation against the widest row set first?
UPD3: inspired by the comments under the original question, I experimented a bit further. I settled on this simpler query:
select rhs.request_id
from request_history_step_payload rhsp
join request_history_step rhs on rhs.id = rhsp.step_id
where rhs.step_type = 'CONSUMER_REQUEST' and rhsp.payload like '%09141%';
Its execution plan is basically the same as the original one, just with one "Nested Loop" less.
Then I added an index on payload.step_id:
create index request_history_step_payload_step_id on request_history_step_payload(step_id);
Ran VACUUM ANALYZE; ran the query with explain analyze: no change whatsoever. Hmm.
Then I ran set enable_seqscan to off. And now we're talking:
Gather  (cost=1000.84..88333.90 rows=3 width=8) (actual time=530.273..589.650 rows=14 loops=1)
  Workers Planned: 1
  Workers Launched: 1
  ->  Nested Loop  (cost=0.84..87333.60 rows=2 width=8) (actual time=544.639..580.608 rows=7 loops=2)
        ->  Parallel Index Scan using request_history_step_pkey on request_history_step rhs  (cost=0.42..15913.04 rows=20867 width=16) (actual time=0.029..28.667 rows=17620 loops=2)
              Filter: ((step_type)::text = 'CONSUMER_REQUEST'::text)
              Rows Removed by Filter: 64686
        ->  Index Scan using request_history_step_payload_step_id on request_history_step_payload rhsp  (cost=0.42..3.41 rows=1 width=8) (actual time=0.031..0.031 rows=0 loops=35239)
              Index Cond: (step_id = rhs.id)
              Filter: ((payload)::text ~~ '%09141%'::text)
              Rows Removed by Filter: 1
Planning Time: 0.655 ms
Execution Time: 589.688 ms
Now, with the SeqScan cost cranked up, I think we can see the gist of the problem: this plan is considered more costly than the original one (88k vs 50k), even though its execution time is actually much shorter (590 ms vs 2700 ms). That is apparently why the Postgres planner keeps picking "SeqScan first", despite all my efforts to convince it otherwise.
I also tried adding indexes on the step.step_type field, both hash- and btree-based. Every plan they produce still comes out at a cost above 50k, so with enable_seqscan set to on (the default) the planner always ignores them.
Does anyone know of any mitigations? I'm afraid a proper solution might require changing the weights of the planner's cost variables, which I'm certainly reluctant to do. But I'd be glad to hear any suggestions!
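One generic mitigation worth trying here (my suggestion, not something already tested above) is to adjust the cost model at session level rather than rewriting the query: on SSD-backed storage the default random_page_cost of 4.0 makes index scans look far more expensive relative to sequential scans than they really are, which is exactly the trade-off the planner is getting wrong. A sketch:

```sql
-- Session-local experiment: make random (index) page access look cheaper
-- to the planner. The default is 4.0; ~1.1 is a common value for SSDs.
set random_page_cost = 1.1;

explain analyze
select rhs.request_id
from request_history_step_payload rhsp
join request_history_step rhs on rhs.id = rhsp.step_id
where rhs.step_type = 'CONSUMER_REQUEST'
  and rhsp.payload like '%09141%';

-- Back to the server default for the rest of the session.
reset random_page_cost;
```

Unlike set enable_seqscan to off, this keeps all plan shapes available and only changes their relative costs, so it is a much safer knob to turn, and it can be scoped to a session or a single role rather than the whole cluster.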
UPD4: I've played with this some more, and can now report some further results (with enable_seqscan set to on):
This one is slow, applying a seqscan even with an index on step.step_type in place:
explain analyze
select rhsp.step_id
from (select request_id, id from request_history_step rhs2 where rhs2.step_type = 'CONSUMER_REQUEST') rhs
join request_history_step_payload rhsp on rhs.id = rhsp.step_id
where rhsp.payload like '%09141%';
This one, following O. Jones's suggestion, is still slow:
explain analyze
with rhs as (select request_id, id from request_history_step rhs2 where rhs2.step_type = 'CONSUMER_REQUEST')
select rhsp.step_id from request_history_step_payload rhsp
join rhs on rhs.id = rhsp.step_id
where rhsp.payload like '%09141%';
But this slightly modified version is fast:
explain analyze
with rhs as (select request_id, id from request_history_step rhs2 where rhs2.step_type = 'CONSUMER_REQUEST')
select rhsp.step_id
from request_history_step_payload rhsp
join rhs on rhs.id = rhsp.step_id
where rhsp.step_id in (select id from rhs) and rhsp.payload like '%09141%';
Its execution plan is:
Hash Join  (cost=9259.55..10097.04 rows=2 width=8) (actual time=1157.984..1162.199 rows=14 loops=1)
  Hash Cond: (rhs.id = rhsp.step_id)
  CTE rhs
    ->  Bitmap Heap Scan on request_history_step rhs2  (cost=1169.28..6918.06 rows=35262 width=16) (actual time=3.899..19.093 rows=35241 loops=1)
          Recheck Cond: ((step_type)::text = 'CONSUMER_REQUEST'::text)
          Heap Blocks: exact=3120
          ->  Bitmap Index Scan on request_history_step_step_type_hash  (cost=0.00..1160.46 rows=35262 width=0) (actual time=3.047..3.047 rows=35241 loops=1)
                Index Cond: ((step_type)::text = 'CONSUMER_REQUEST'::text)
  ->  CTE Scan on rhs  (cost=0.00..705.24 rows=35262 width=8) (actual time=3.903..5.976 rows=35241 loops=1)
  ->  Hash  (cost=2341.39..2341.39 rows=8 width=16) (actual time=1153.976..1153.976 rows=14 loops=1)
        Buckets: 1024  Batches: 1  Memory Usage: 9kB
        ->  Nested Loop  (cost=793.81..2341.39 rows=8 width=16) (actual time=104.170..1153.919 rows=14 loops=1)
              ->  HashAggregate  (cost=793.39..795.39 rows=200 width=8) (actual time=33.315..44.875 rows=35241 loops=1)
                    Group Key: rhs_1.id
                    ->  CTE Scan on rhs rhs_1  (cost=0.00..705.24 rows=35262 width=8) (actual time=0.001..23.590 rows=35241 loops=1)
              ->  Index Scan using request_history_step_payload_step_id on request_history_step_payload rhsp  (cost=0.42..7.72 rows=1 width=8) (actual time=0.031..0.031 rows=0 loops=35241)
                    Index Cond: (step_id = rhs_1.id)
                    Filter: ((payload)::text ~~ '%09141%'::text)
                    Rows Removed by Filter: 1
Planning Time: 1.318 ms
Execution Time: 1162.618 ms
Again, the cost dropped significantly, while the execution time much less so.
From your plans it looks like the step_type condition doesn't help much in narrowing down the request_history_step_payload search.
So let's try to help the payload search along with a text (trigram) GIN index.
CREATE INDEX CONCURRENTLY rhsp_payload_trgm
ON request_history_step_payload
USING GIN (payload gin_trgm_ops);
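Note that the gin_trgm_ops operator class comes from the pg_trgm contrib extension, so it has to be enabled once per database before that CREATE INDEX will work:

```sql
-- pg_trgm ships with PostgreSQL's contrib modules; it provides trigram
-- matching and the gin_trgm_ops / gist_trgm_ops operator classes.
create extension if not exists pg_trgm;
```

With such an index in place, LIKE '%09141%' can be answered via the index instead of a full scan of the payload column.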
There's a chance it will help. Give it a try.
Once you have that index, you can also try reworking the query like this:
select rh.id
from ( select step_id
       from request_history_step_payload rhsp
       where rhsp.payload like '%09141%'
     ) rhsp
join request_history_step rhs on rhs.id = rhsp.step_id
join request_history rh on rhs.request_id = rh.id
where rh.id > 35000 and rhs.step_type = 'CONSUMER_REQUEST'
That moves your search of the request_history_step_payload table into a subquery. You can then optimize the subquery separately as you work on getting the whole query fast enough for your application.
And, drop your duplicate indexes. They slow down INSERT and UPDATE operations for no reason.
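To find those duplicates, a catalog query along these lines can help (a sketch: it only compares tables and their ordered indexed-column lists, ignoring expression and partial indexes):

```sql
-- Group indexes by table (indrelid) and by the ordered list of indexed
-- columns (indkey); any group with more than one entry is redundant.
select indrelid::regclass              as table_name,
       array_agg(indexrelid::regclass) as indexes
from pg_index
group by indrelid, indkey
having count(*) > 1;
```

Each reported group can then be inspected with \d on the table, keeping one index (usually the primary key) and dropping the rest.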