

Log Mining in Oracle 10g

If you ever had to perform log mining in Oracle prior to 10g, you will be pleasantly surprised by how easy and fast it is in 10g. After using 10g for several years, I finally got a chance to use log miner in 10g. Until now, flashback queries were always enough to recover errant deletes and updates. Allow me to explain.

First, the beauty of flashback queries is that once you have your database set up (more on that later), you can recover data with a simple query, like this:

select * from myTable as of timestamp(systimestamp - interval '60' minute);

This query will select everything from your table as of 60 minutes ago.  I usually select the results into a table so that I can sort through what I need and what I don’t. Depending on your hardware resources, you could store flashback info for a few minutes of transactions or several days. But when it’s the end of the month and Accounting calls to say they think someone deleted some payroll information… well, unless you have great gobs of disk space, you most likely won’t be able to flashback data from 20 or 30 days ago. This is where log miner comes in handy. Oh, and you’ll also need to keep your archive logs. You do have your database in archive log mode, right?
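To preserve the flashback image while you sort through it, select it into a work table. A sketch of what I mean (the table names here are just examples):

```sql
-- Capture the as-of image in a scratch table so it survives
-- while you compare it against the current data.
create table myTable_asof as
  select * from myTable as of timestamp (systimestamp - interval '60' minute);

-- Example: rows that existed 60 minutes ago but are gone now.
select * from myTable_asof
minus
select * from myTable;
```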

For the purposes of simplicity, we’ll assume we know the approximate time the deletion occurred. First, let’s check v$archived_log to find the log file we need based on the time the log started.

SELECT to_char(first_time,'dd-mon-rr HH24mi') as firstTime, name FROM V$ARCHIVED_LOG WHERE FIRST_TIME >= (sysdate-30);
firstTime      name
-------------- -------------------------------------------------
19-jul-09 0744 /oracle/10g/oradata/dg2/arc/SITE1/archivelog/2009_07_19/o1_mf_1_5727_5668t9pt_.arc
19-jul-09 0944 /oracle/10g/oradata/dg2/arc/SITE1/archivelog/2009_07_19/o1_mf_1_5728_566hvjpl_.arc

Based on the first_time column, we can see the time span for each archived log file. Assuming the transaction took place at, let's say, 0815, then the 5668t9pt_.arc file is the one we need to mine. After we locate the file, there are essentially four steps:

  1. Load the log for mining.
  2. Start the miner.
  3. Select out what you need to recover.
  4. Stop the log miner.

First, we load the archive log file. Results may vary, but this usually takes only a second or two.

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/10g/oradata/dg2/arc/SITE1/archivelog/2009_07_19/o1_mf_1_5727_5668t9pt_.arc', OPTIONS => DBMS_LOGMNR.NEW);

Next, start log miner using the following statement. If you want a discussion of all the options, please consult the documentation. Like the previous step, this usually takes about a second.
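The exact statement I used isn't shown here, but a common form looks like this; the DICT_FROM_ONLINE_CATALOG option tells log miner to resolve object and column names from the database's own data dictionary:

```sql
-- Start a mining session against the log file(s) added above.
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
```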


Then, we’ll select the undo data. The key to successful mining is putting just enough information in the where clause. If the clause is too restrictive, we won’t get any of the data we need. If the clause is too broad, we’ll be sorting through results forever. In this case, we know the owner and table. Since there aren’t many DML statements run against this table, we’ll go with an otherwise broad scope.  Depending on the size of your log files, this might take a while. This particular query ran in 7m:10s. (“set timing on;” in sqlplus is very handy.)
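A sketch of that kind of query (the owner and table names here are examples; SQL_UNDO holds the statement that reverses each mined change):

```sql
-- Broad scope: filter only on owner and table, since this table
-- sees little DML. Add OPERATION or TIMESTAMP predicates to narrow.
SELECT operation, sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner  = 'HR'          -- example owner
   AND table_name = 'EMPLOYEES';  -- example table
```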


 "PHONE1" = '3286',
 "PHONE2" = '000',
 "PAGER" = '0'
 "ID" = 14782 and
 "PHONE1" = '3286' and
 "PHONE2" = '000' and
 "PAGER" = '0' and

And there we have the undo statement required to reverse whatever mistake was made. I typically lop off the rowid part of the statement because rowids can change with segment growth and row movement. Plus, if I have the ID column set up as the primary key, that is all I really need to ensure integrity.

Finally, when we’re done, we close up shop and go home.
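Ending the session is a single call:

```sql
-- Release the mining session and its resources.
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```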


Again, it’s important to refer to the official documentation to ensure proper syntax for your installation.
