Yes, we use LogMiner in some of our applications to replicate data logically from one database to another. Just as we DBAs check LGWR performance, which is mostly write-based, we may also want to check LogMiner's read rate in these kinds of configurations while diagnosing replication performance degradations.
Here is an example of how to do that.
We check our archived logs and choose 5 of them to be used in our calculation. Note that we choose archived logs that are 2 days old in this demo.
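The candidate archived logs, together with the SCN ranges and sizes we will need later, can be listed with a query like the following sketch against V$ARCHIVED_LOG (the 2-day filter is just this demo's choice):

```sql
-- List roughly 2-day-old archived logs with their SCN ranges and sizes.
SELECT name,
       first_change#,
       next_change#,
       ROUND(blocks * block_size / 1024 / 1024) AS size_mb
FROM   v$archived_log
WHERE  completion_time BETWEEN SYSDATE - 3 AND SYSDATE - 2
ORDER  BY sequence#;
```

We pick 5 consecutive logs from this output.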
We start our SQL*Plus session, set timing on, and add the first archived log with the following command:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '<first_log_file>', -
OPTIONS => DBMS_LOGMNR.NEW);
We then add the remaining 4 archived logs one by one, repeating the following command for each of them:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '<next_log_file>', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
We get the FIRST_CHANGE# value of the first log file and the NEXT_CHANGE# value of the last log file (so that the SCN range spans all 5 logs) and use them as STARTSCN and ENDSCN in the following command:
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTSCN => <first_change#>, ENDSCN => <next_change#>, -
OPTIONS => DBMS_LOGMNR.SKIP_CORRUPTION + DBMS_LOGMNR.NO_SQL_DELIMITER + DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Next, we use the query below to read all the redo records from those 5 archived logs that we chose in the beginning. At the end of the execution, we get 'no rows selected'. This is okay, as we can still measure the query's elapsed time (shown by set timing on). That elapsed time is effectively the total time required for reading those 5 archived logs.
SELECT * FROM V$LOGMNR_CONTENTS WHERE TABLE_NAME = 'TABLENOTEXIT';
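For the calculation below we also need the total size of the 5 logs we mined. It can be summed from V$ARCHIVED_LOG with a query like this sketch (the sequence range 151..155 is hypothetical; substitute the sequence numbers of the logs you actually added):

```sql
-- Total size (in GB) of the mined archived logs.
-- The sequence# range 151..155 is a made-up example; use your own.
SELECT ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 2) AS total_gb
FROM   v$archived_log
WHERE  sequence# BETWEEN 151 AND 155;
```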
Well, the calculation is simple: total archived log size / duration (that is, archived_log_count x archived_log_size / duration).
So, if those 5 archived logs are each 2GB in size, and if the query above took 10 minutes, then our LogMiner read rate is (5*2GB) * (60/10) = 60GB/hr.
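The same arithmetic can be done right in the SQL*Plus session; the numbers below are just the example values from above (5 logs, 2GB each, 10 minutes):

```sql
-- Read rate in GB/hr: total GB mined divided by elapsed hours.
-- 5 logs * 2 GB each, read in 10 minutes -- example values.
SELECT ROUND((5 * 2) / (10 / 60)) AS gb_per_hour FROM dual;
-- returns 60
```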
Here is a quick tip for you :)