COLUMN_NAME COL INDEX_NAME INDEX_OWNER TABLE_OWNER
------------------------- --- ------------------------------ -------------------- ---------------
UNDWRT_DATE 1 IX_ABBS_SETT_DTL_UWD_PRDNO_ANO ABBSDATA ABBSDATA
PRODUCTNO 2 IX_ABBS_SETT_DTL_UWD_PRDNO_ANO ABBSDATA ABBSDATA
AGENCYNO 3 IX_ABBS_SETT_DTL_UWD_PRDNO_ANO ABBSDATA ABBSDATA
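For reference, the listing above can be reproduced from the data dictionary; the query below is only a sketch of how it could be pulled (standard dba_ind_columns view, the column aliases are mine):

select column_name,
       column_position as col,
       index_name,
       index_owner,
       table_owner
  from dba_ind_columns
 where index_name = 'IX_ABBS_SETT_DTL_UWD_PRDNO_ANO'
 order by column_position;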
select count(*) from abbs_sett_detail a1 where a1.UNDWRT_DATE >= date'2013-1-1' and a1.UNDWRT_DATE < date'2013-2-1'
and a1.agencyno like 'SCXN%'
--returns 48582 rows.
So the table has to be revisited by rowid for these 48582 rows. On top of the 2332190 rows already walked through the index, the single-block reads (db file sequential read) add up to nearly 280,000 I/Os.
Since the statistics are already inaccurate (GBS's statistics-gathering policy is that collection is not enabled), the Clustering_factor on the index has no reference value either.
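The staleness can be confirmed right next to the Clustering_factor itself; the query below is a sketch against the standard dba_indexes view (the owner and table names are assumed from the listing above):

select index_name, clustering_factor, num_rows, last_analyzed
  from dba_indexes
 where table_owner = 'ABBSDATA'
   and table_name  = 'ABBS_SETT_DETAIL';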
And for this condition, a1.deptno='G35':
select count(*) from abbs_sett_detail a1 where a1.deptno='G35';
--returns 743820 rows.
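The partition-pruning argument below assumes deptno is (or is part of) the partition key; that can be verified with a dictionary lookup such as this sketch (owner and table names assumed):

select name, object_type, column_name, column_position
  from dba_part_key_columns
 where owner = 'ABBSDATA'
   and name  = 'ABBS_SETT_DETAIL';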
In the GBS database, db_file_multiblock_read_count is 128 and db_block_size is 8192.
With a value of 128, we lean toward a full table scan rather than the index; in this particular scenario, it is fair to say we should favor partition pruning over the index.
I used a no_index hint so that the first step is partition pruning, and removed the redundant table-join hints.
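In other words, each multiblock read can pull in up to 128 * 8192 bytes = 1 MB per I/O, versus 8 KB per single-block read when going through the index. The two parameter values can be checked with a query like this (standard v$parameter view):

select name, value
  from v$parameter
 where name in ('db_file_multiblock_read_count', 'db_block_size');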
select /*+NO_INDEX(a1 IX_ABBS_SETT_DTL_UWD_PRDNO_ANO) no_index(a1 ux_abbs_sett_dtl_certno_ctsts) fwy1123 */
deptno as deptNo,
a1.agencyno as agencyNo,
productno as productNo,
count(certno) as totalNumber,
count(distinct billno) as billNumber,
count(decode(cert_sts, 'P', cert_sts, null)) as undwrtNumber,
sum(decode(is_together_insure,
'Y',
(decode(cert_sts, 'P', coins_tot_prem, 0)),
(decode(cert_sts, 'P', tot_prem, 0)))) as undwrtPrem,
count(decode(cert_sts, 'N', cert_sts, null)) as revokeNumber,
sum(decode(is_together_insure,
'Y',
(decode(cert_sts, 'N', coins_tot_prem, 0)),
decode(cert_sts, 'N', tot_prem, 0))) as revokePrem,
sum(decode(is_together_insure,
'Y',
(decode(cert_sts, 'P', coins_tot_prem, 'N', -coins_tot_prem)),
decode(cert_sts, 'P', tot_prem, 'N', -tot_prem))) as totalPrem
from abbs_sett_detail a1
where (a1.UNDWRT_DATE >= date'2013-1-1' and a1.UNDWRT_DATE < date'2013-2-1' and a1.cert_sts = 'P')
and a1.agencyno like 'SCXN%'
and a1.deptNo = 'G35'
group by deptno, productno, a1.agencyno
order by deptno, a1.agencyno, productno
Buffer gets dropped from 67459 to 19820, and execution time went from 20 seconds down to 7 seconds.
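For anyone wanting to reproduce the comparison, numbers like these can be collected in SQL*Plus with something along these lines (a sketch; the original measurement method is not shown):

set autotrace traceonly statistics
set timing on
-- run the original query and the hinted query, then compare
-- the "consistent gets" (buffer gets) and elapsed-time figures.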