If you ever had to perform log mining in Oracle prior to 10g, you will be pleasantly surprised by how easy and fast it is in 10g. After using 10g for several years, I finally got a chance to use log miner there; until now, flashback queries were always enough to recover errant deletes and updates. Allow me to explain.

First, the beauty of flashback queries is that once you have your database set up (more on that later), you can recover data with a simple query, like this:
select * from myTable as of timestamp(systimestamp - interval '60' minute);
This query will select everything from your table as of 60 minutes ago. I usually select the results into a table so that I can sort through what I need and what I don’t. Depending on your hardware resources, you could store flashback info for a few minutes of transactions or several days. But when it’s the end of the month and Accounting calls to say they think someone deleted some payroll information… well, unless you have great gobs of disk space, you most likely won’t be able to flashback data from 20 or 30 days ago. This is where log miner comes in handy. Oh, and you’ll also need to keep your archive logs. You do have your database in archive log mode, right?
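Since I usually want to sift through the recovered rows before putting anything back, a quick create-table-as-select off the flashback query does the trick. The table name myTable_restore below is just an example; use whatever fits your naming scheme:

create table myTable_restore as
select * from myTable as of timestamp(systimestamp - interval '60' minute);

And if you are not sure whether archiving is turned on, a quick "select log_mode from v$database;" will tell you (it should say ARCHIVELOG).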
For the purposes of simplicity, we’ll assume we know the approximate time the deletion occurred. First, let’s check v$archived_log to find the log file we need based on the time the log started.
SELECT to_char(first_time,'dd-mon-rr HH24mi') as firstTime, name
FROM V$ARCHIVED_LOG
WHERE FIRST_TIME >= (sysdate-30);

firstTime       name
--------------  -------------------------------------------------
19-jul-09 0744  /oracle/10g/oradata/dg2/arc/SITE1/archivelog/2009_07_19/o1_mf_1_5727_5668t9pt_.arc
19-jul-09 0944  /oracle/10g/oradata/dg2/arc/SITE1/archivelog/2009_07_19/o1_mf_1_5728_566hvjpl_.arc
Based on the first_time column, we can see the time span each archived log file covers. Assuming the transaction took place at, say, 0815, the 5668t9pt_.arc file is the one we need to mine. After we locate the correct file, there are essentially four steps:
- Load the log for mining.
- Start the miner.
- Select out what you need to recover.
- Stop the log miner.
First, we load the archive log file. Results may vary, but this usually takes only a second or two.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/10g/oradata/dg2/arc/SITE1/archivelog/2009_07_19/o1_mf_1_5727_5668t9pt_.arc', OPTIONS => DBMS_LOGMNR.NEW);
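If the activity you are chasing might span more than one archive log, you can keep feeding files into the same session with the ADDFILE option. Here I am adding the next log from the listing above, just as an example:

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/10g/oradata/dg2/arc/SITE1/archivelog/2009_07_19/o1_mf_1_5728_566hvjpl_.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);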
Next, start log miner using the following statement. If you want a discussion of all the options, please consult the documentation. Like the previous step, this usually takes about a second.
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
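If you have a decent guess at when the damage was done, you can also hand START_LOGMNR a time window so it only bothers with that slice of the redo. The window below is just a sketch built around our 0815 guess; adjust it for your own situation:

EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => TO_DATE('19-jul-09 0800','dd-mon-rr hh24mi'), ENDTIME => TO_DATE('19-jul-09 0830','dd-mon-rr hh24mi'), OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);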
Then, we’ll select the undo data. The key to successful mining is putting just enough information in the where clause. If the clause is too restrictive, we won’t get any of the data we need. If the clause is too broad, we’ll be sorting through results forever. In this case, we know the owner and table. Since there aren’t many DML statements run against this table, we’ll go with an otherwise broad scope. Depending on the size of your log files, this might take a while. This particular query ran in 7m:10s. (“set timing on;” in sqlplus is very handy.)
SELECT SQL_UNDO FROM V$LOGMNR_CONTENTS where SQL_UNDO like '%ACCT%PAYROLL%';

SQL_UNDO
-------------------------------------------
UPDATE "HR"."EMP"
  set
    "PHONE1" = '3286',
    "PHONE2" = '000',
    "PHONE3" = NULL,
    "PAGER" = '0'
  where
    "ID" = 14782 and
    "PHONE1" = '3286' and
    "PHONE2" = '000' and
    "PHONE3" IS NULL and
    "PAGER" = '0' and
    ROWID = 'AAAI5JAAVAAAAZvAAi';
And there we have the undo statement required to undo whatever mistake was made. I typically lop off the rowid part of the statement because it may change depending on growth and row movement. Plus, if I have the ID column set up as the primary key, that is all I really need to ensure integrity.
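For example, with the ROWID predicate lopped off and everything else keyed on ID, the statement above boils down to something like this (assuming ID really is the primary key on that table):

UPDATE "HR"."EMP" set "PHONE1" = '3286', "PHONE2" = '000', "PHONE3" = NULL, "PAGER" = '0' where "ID" = 14782;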
Finally, when we’re done, we close up shop and go home.
EXECUTE DBMS_LOGMNR.END_LOGMNR();
Again, it’s important to refer to the official documentation to ensure proper syntax for your installation.