Greenplum is typically used for OLAP workloads. In some deployments, poor schema design or problematic SQL leads to poor performance. Adding nodes can work around this, but tuning first can save a lot of hardware.

For example:

1. Align JOIN column types. If the two sides of an equi-join have different types, a hash join cannot be used.
2. Align WHERE-clause column types. Same as above: mismatched types prevent hash joins and index scans.
3. Use arrays instead of strings to cut string-processing overhead. If a string needs heavy parsing in a filter, an array performs much better.
4. Use column storage to reduce scan cost. Analytic SQL touches only a few columns, so column storage beats row storage by a wide margin.
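As a minimal sketch of points 1 and 2 (the table and column names here are made up for illustration), aligning a mismatched join key once with ALTER TABLE is usually cheaper than casting it on every join:

```sql
-- Hypothetical tables: a varchar key joined against an int8 key.
create table t_fact (uid varchar(32), v int8);
create table t_dim  (uid int8, name text);

-- Mismatched types: every row needs a cast, and the planner
-- cannot use a hash join or an index on the casted side:
--   select * from t_fact f join t_dim d on f.uid::int8 = d.uid;

-- Fix: align the column type once, so equi-joins can hash.
alter table t_fact alter column uid type int8 using uid::int8;
```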
1. This query takes 230 seconds:
SELECT col4, count(DISTINCT c.col1) ptnum
from tbl1 a
INNER JOIN tbl2 b on b.col2 = a.id
inner join tbl3 t2 on t2.ID <= (length(b.col3) - length(replace(b.col3, ',', '')) + 1)
INNER JOIN tbl4 c on replace(replace(Split_part(reverse(Split_part(reverse(Split_part(b.col3, ',', cast(t2.id as int))), ',', 1)), ':', 1), '{', ''), '"', '') = c.id
INNER JOIN tbl5 s on a.col4 = s.id
where replace(replace(reverse(Split_part(Split_part(reverse(Split_part(b.col3, ',', cast(t2.id as int))), ',', 1), ':', 1)), '"', ''), '}', '') > '0'
  and c.col1 not in ('xxxxxx')
GROUP BY col4;
2. Use explain analyze to locate the bottleneck.
3. Problems:

3.1. JOIN column types differ, so no hash join is used.

3.2. Two of the tables are joined on an inequality condition, which forces a Cartesian product; with these data volumes, the filter expression has to be evaluated tens of trillions of times.

The tbl2.col3 string looks like this (and must be parsed on every one of those evaluations):
{"2":"1","10":"1","13":"1","16":"1","21":"1","26":"1","28":"1","30":"1","32":"1","33":"1","34":"1","35":"1","36":"1","37":"1","39":"1","40":"1","99":"2","100":"2","113":"1","61":"1","63":"4","65":"2"}
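To see what each of those Cartesian iterations must compute, here is the key-extraction expression from the query, applied standalone to a short made-up literal:

```sql
-- Extract the 2nd key the way the original query does: take the 2nd
-- comma-separated field, isolate the part before ':', strip '{' and '"'.
select replace(replace(split_part(reverse(split_part(reverse(split_part(s, ',', 2)), ',', 1)), ':', 1), '{', ''), '"', '') as key2
from (select '{"2":"1","10":"1","13":"1"}'::text as s) t;
-- key2 is '10': six string-function calls per element, per joined row.
```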
3.3. Row storage is used, so scans read far more data than necessary, and vectorized execution is unavailable.
1. Replace row storage with column storage (except tbl3, the inner table of the nested loop, which keeps its index for filtering):
create table tmp_tbl1 (like tbl1) WITH (APPENDONLY=true, ORIENTATION=column);
insert into tmp_tbl1 select * from tbl1;

create table tmp_tbl4 (like tbl4) WITH (APPENDONLY=true, ORIENTATION=column);
insert into tmp_tbl4 select * from tbl4;

create table tmp_tbl5 (like tbl5) WITH (APPENDONLY=true, ORIENTATION=column);
insert into tmp_tbl5 select * from tbl5;

create table tmp_tbl2 (like tbl2) WITH (APPENDONLY=true, ORIENTATION=column) distributed by (col2);
insert into tmp_tbl2 select * from tbl2;
2. Use an array instead of text:
alter table tmp_tbl2 alter column col3 type text[] using (case col3 when '[]' then '{}' else replace(col3,'"','') end)::text[];
After the change, the type and contents look like this:
digoal=> select col3 from tmp_tbl2 limit 2;
                                                          col3
------------------------------------------------------------------------------------------------------------------------
 {63:1,65:1,70:1,71:1,73:1,75:1,77:1,45:3,78:1,54:2,44:1,80:1,36:1,84:1,96:2}
 {2:2,10:1,13:1,16:1,30:1,107:1,26:1,28:1,32:1,33:1,34:1,35:1,36:1,37:1,39:1,99:2,100:2,113:1,40:1,57:1,63:2,64:1,65:4}
(2 rows)
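With col3 stored as text[], per-element access reduces to one subscript plus one split_part. A standalone sketch on a made-up literal:

```sql
select arr[2]                     as elem,     -- '10:1'
       split_part(arr[2], ':', 1) as key,      -- '10'
       split_part(arr[2], ':', 2) as val,      -- '1'
       array_length(arr, 1)       as n_elems   -- 3
from (select '{2:1,10:1,13:1}'::text[] as arr) t;
```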
3. Make the JOIN column types match:
alter table tmp_tbl2 alter column col2 type int8;
4. Rewrite the original SQL as follows (string processing becomes array access):

(This example could also use a two-dimensional array, avoiding string processing entirely.)
SELECT col4, count(DISTINCT c.col1) ptnum
from tmp_tbl1 a
INNER JOIN tmp_tbl2 b on b.col2 = a.id
inner join tbl3 t2 on t2.ID <= array_length(col3, 1)  -- changed
INNER JOIN tmp_tbl4 c on split_part(b.col3[cast(t2.id as int)], ':', 1) = c.id
INNER JOIN tmp_tbl5 s on a.col4 = s.id
where split_part(b.col3[cast(t2.id as int)], ':', 2) > '0'
  and c.col1 not in ('xxxxxx')
GROUP BY col4;
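As noted above, a two-dimensional integer array would avoid strings entirely. A hypothetical sketch of that variant (column and literal values invented for illustration; it assumes the keys and values are all integers, as in this data set):

```sql
-- Store each key/value pair as one row of a 2-D int array
-- instead of 'key:val' text, e.g. '{{63,1},{65,1},{70,1}}'.
select arr[2][1] as key,   -- 10
       arr[2][2] as val    -- 1
from (select '{{2,1},{10,1},{13,1}}'::int[] as arr) t;

-- Joins and filters then compare integers directly, e.g.:
--   ... on arr[t2.id][1] = c.id  where arr[t2.id][2] > 0
```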
The execution plan:
                                                                                           QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 32:1  (slice7; segments: 32)  (cost=543258065.87..543259314.50 rows=41621 width=12)
   ->  GroupAggregate  (cost=543258065.87..543259314.50 rows=1301 width=12)
         Group By: a.col4
         ->  Sort  (cost=543258065.87..543258169.93 rows=1301 width=12)
               Sort Key: a.col4
               ->  Redistribute Motion 32:32  (slice6; segments: 32)  (cost=542355803.38..543254872.50 rows=1301 width=12)
                     Hash Key: a.col4
                     ->  GroupAggregate  (cost=542355803.38..543254040.08 rows=1301 width=12)
                           Group By: a.col4
                           ->  Sort  (cost=542355803.38..542655042.19 rows=3740486 width=11)
                                 Sort Key: a.col4
                                 ->  Redistribute Motion 32:32  (slice5; segments: 32)  (cost=6247.23..518770960.13 rows=3740486 width=11)
                                       Hash Key: c.col1
                                       ->  Hash Join  (cost=6247.23..516377049.63 rows=3740486 width=11)
                                             Hash Cond: split_part(b.col3[t2.id::integer], ':'::text, 1) = c.id::text
                                             ->  Nested Loop  (cost=5494.14..476568597.41 rows=3852199 width=491)
                                                   Join Filter: split_part(b.col3[t2.id::integer], ':'::text, 2) > '0'::text
                                                   ->  Broadcast Motion 32:32  (slice3; segments: 32)  (cost=5494.14..115247.73 rows=277289 width=483)
                                                         ->  Hash Join  (cost=5494.14..23742.36 rows=8666 width=483)
                                                               Hash Cond: b.col2 = a.id
                                                               ->  Seq Scan on tmp_tbl2 b  (cost=0.00..14088.89 rows=8666 width=487)
                                                               ->  Hash  (cost=4973.86..4973.86 rows=1301 width=12)
                                                                     ->  Redistribute Motion 32:32  (slice2; segments: 32)  (cost=2280.93..4973.86 rows=1301 width=12)
                                                                           Hash Key: a.id
                                                                           ->  Hash Join  (cost=2280.93..4141.42 rows=1301 width=12)
                                                                                 Hash Cond: s.id = a.col4
                                                                                 ->  Append-only Columnar Scan on tmp_tbl5 s  (cost=0.00..1220.97 rows=1491 width=4)
                                                                                 ->  Hash  (cost=1760.66..1760.66 rows=1301 width=12)
                                                                                       ->  Redistribute Motion 32:32  (slice1; segments: 32)  (cost=0.00..1760.66 rows=1301 width=12)
                                                                                             Hash Key: a.col4
                                                                                             ->  Append-only Columnar Scan on tmp_tbl1 a  (cost=0.00..928.22 rows=1301 width=12)
                                                   ->  Index Scan using idx_codeid on tbl3 t2  (cost=0.00..23.69 rows=42 width=8)
                                                         Index Cond: t2.id <= array_length(b.col3, 1)
                                             ->  Hash  (cost=364.69..364.69 rows=972 width=11)
                                                   ->  Broadcast Motion 32:32  (slice4; segments: 32)  (cost=0.00..364.69 rows=972 width=11)
                                                         ->  Append-only Columnar Scan on tmp_tbl4 c  (cost=0.00..44.26 rows=31 width=11)
                                                               Filter: col1 <> 'xxxxxx'::text
 Settings:  effective_cache_size=8GB; enable_nestloop=off; gp_statistics_use_fkeys=on
 Optimizer status: legacy query optimizer
(39 rows)
Original SQL response time: 230 seconds.
Rewritten SQL response time: under 16 seconds.
1. An inequality join condition must be evaluated pairwise, Cartesian-style, so if the filter expression is CPU-expensive, performance cannot be good.
2. The original heavy reverse, split_part, and replace string processing was expensive on its own, and it landed exactly on that Cartesian product: tens of trillions of evaluations.
3. JOIN column types did not match, so no hash join was used.
4. This is analytic SQL, yet column storage was not used.
1. Replace strings with arrays.
2. Rewrite the SQL.
3. Align JOIN column types.
4. Use column storage.
5. For the remaining nested-loop join, keep the inner table in row storage with an index scan. (For a small table, a materialized scan is faster still.)
6. Run ANALYZE so the planner statistics are accurate:

analyze <table_name>;
Original article: https://github.com/digoal/blog/blob/master/201809/20180904_05.md