Recently, while deduplicating a business table of over a million rows, the delete operation unexpectedly hung. Here is a brief recap of what happened.
1. Check the total row count of the business table: about 2.55 million rows

sql> select count(*) from tb_bj_banker_etl;

  2552381
2. Check how many duplicate rows should be removed from the table: about 830,000

sql> select count(*)
       from tb_bj_banker_etl
      where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
        and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);

  830099
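As an aside, the same duplicate count can also be expressed with an analytic function, which some readers may find easier to follow. This is only an illustrative rewrite, not the query that was actually run:

-- rn = 1 marks the row with the largest rowid per id (the row the delete keeps);
-- everything with rn > 1 is a duplicate to be removed
select count(*)
  from (select row_number() over (partition by id order by rowid desc) as rn
          from tb_bj_banker_etl)
 where rn > 1;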
3. So, before leaving work that evening, I kicked off the following statement to remove the duplicates

sql> delete from tb_bj_banker_etl
      where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
        and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);
sql> commit;
4. The next day, on arriving on site, I found the statement launched the previous evening was still running in PL/SQL Developer
The first thing that struck me: a delete of some 800,000 duplicate rows had run all night without finishing? Something had to be wrong.
I suspected the table was locked, so I queried for sessions holding locks on it.
select a.owner,            -- owner of the locked object
       a.object_name,      -- object name
       b.xidusn,
       b.xidslot,
       b.xidsqn,
       b.session_id,       -- session holding the lock
       b.oracle_username,  -- Oracle user name of the locking session
       b.os_user_name,     -- OS login name of the locking session
       b.process,
       b.locked_mode,
       c.machine,          -- machine name of the locking session
       c.status,           -- session status
       c.server,
       c.sid,
       c.serial#,
       c.program           -- client tool used by the locking session
  from all_objects a, v$locked_object b, sys.gv_$session c
 where a.object_id = b.object_id
   and b.process = c.process
 order by 1, 2;
The results below show that the only locks on the table belong to the session that issued the delete itself; no other user is locking it. Did that just mean the statement was still executing? With that question in mind, I started looking for a way around it. (A quick way to confirm whether such a session is actually making progress is sketched after the results.)
1 bjhyl tb_bj_banker_etl 15 18 9000 913 bjhyl administrator 4036:972 3 workgroup\backdb active dedicated 913 3381 plsqldev.exe
2 bjhyl tb_bj_banker_etl 15 18 9000 913 bjhyl administrator 4036:972 3 workgroup\backdb inactive dedicated 649 41791 plsqldev.exe
3 bjhyl tb_bj_banker_etl 15 18 9000 913 bjhyl administrator 4036:972 3 workgroup\backdb inactive dedicated 817 27777 plsqldev.exe
4 bjhyl tb_bj_banker_etl 15 18 9000 913 bjhyl administrator 4036:972 3 workgroup\backdb inactive dedicated 841 1981 plsqldev.exe
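For reference, one way to check whether that session was still doing useful work rather than sitting idle is to look at its current wait event and at any long-running operations it reports. This check was not run at the time; it is only a sketch, and the SID 913 below is simply the session id reported in the output above.

-- Current state and wait event of the suspect session (sid 913 from the output above)
select sid, serial#, status, event, seconds_in_wait
  from v$session
 where sid = 913;

-- Progress of any long-running operation (full scans, sorts, ...) in that session
select sid, opname, sofar, totalwork, elapsed_seconds, time_remaining
  from v$session_longops
 where sid = 913
   and sofar < totalwork;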
5. Working around the hang by deleting in batches
Since the single-pass delete would not complete, I tried removing the duplicates in batches instead.
First batch:
delete from tb_bj_banker_etl
 where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
   and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1)
   and rownum <= 100000;
commit;

Second batch:
delete from tb_bj_banker_etl
 where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
   and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1)
   and rownum <= 100000;
commit;

...... (batches three through seven, identical to the above)

Eighth batch:
delete from tb_bj_banker_etl
 where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
   and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1);
commit;
Result: by splitting the roughly 830,000 duplicate rows into batches of 100,000, the whole cleanup took a little over 140 seconds in total. Why the single-pass delete appeared to hang still needs follow-up analysis. (A scripted version of the same batching idea is sketched below.)
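For anyone who would rather not issue each batch by hand, the same idea can be wrapped in a simple PL/SQL loop. This is only a sketch of the approach, using the same 100,000-row batch size, not the script that was actually run:

begin
  loop
    delete from tb_bj_banker_etl
     where id in (select id from tb_bj_banker_etl group by id having count(*) > 1)
       and rowid not in (select max(rowid) from tb_bj_banker_etl group by id having count(*) > 1)
       and rownum <= 100000;          -- one 100,000-row batch per iteration
    exit when sql%rowcount = 0;       -- stop once no duplicates remain
    commit;                           -- commit after each batch to release undo and locks
  end loop;
  commit;
end;
/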
That is the whole story of working around the hang while deduplicating these 800,000-odd rows; I hope it is of some help.