Speed-up Bulk Delete on Logical Standby

Problem statement: A bulk delete performed on the primary database in a Data Guard environment results in a stale logical standby.

Set of delete statements executed on the primary:

SQL> delete from TABA where id=100;
1000000 Rows deleted.
SQL> delete from TABB where id=100;
1000000 Rows deleted.
SQL> delete from TABC where id=100;
1000000 Rows deleted.
SQL> delete from TABD where id=100;
1000000 Rows deleted.

How is this change applied on the logical standby?

Let us see the impact of

SQL> delete from TABA where id=100;
1000000 Rows deleted.

This statement deleted 1000000 rows on the primary. It results in 1000000 individual delete statements being executed on the logical standby, one per row:

DELETE FROM "TEST"."TABA" WHERE "ID" = 5135257 AND "AUDIT_TYPE" = 'LOGON' AND ROWID = 'AAAAAAAAYAAAKGXAAi';

This row-by-row apply is time consuming and may leave the logical standby stale.

How to fix this problem?

If we know the approximate time when these delete statements were executed on the primary, we can complete the operation on the logical standby much faster.

We'll use DBMS_LOGMNR and DBMS_LOGSTDBY to speed up the entire process.

STEP 1

Extract the list of standby logs on the logical standby that have not yet been applied and that entirely cover the deletion window on the primary.

SQL> SELECT THREAD#, TIMESTAMP, FILE_NAME FROM DBA_LOGSTDBY_LOG WHERE APPLIED <> 'YES' ORDER BY TIMESTAMP
/

THREAD# TIMESTAMP           FILE_NAME
------- ------------------- ------------------------------------------
      3 20/05/2008 10:32:59 +FLASH_DG/STDBY/SARCH/3_4687_641082275.dbf
      2 20/05/2008 10:58:00 +FLASH_DG/STDBY/SARCH/2_3793_641082275.dbf
      1 20/05/2008 11:00:48 +FLASH_DG/STDBY/SARCH/1_4574_641082275.dbf

STEP 2

Once we know which standby archive logs may contain the delete transactions, we need to extract the XIDUSN, XIDSLT, and XIDSQN of those specific transactions.

Following are some considerations before actually digging into the logs.

1. The primary database and the logical standby should use the same COMPATIBLE parameter, to avoid ORA-00331: log version incompatible with Oracle.

2. If we are using RAC, standby logs from individual threads must be processed separately to avoid ORA-00600: internal error code, arguments: [krvxbpns01], [], [], [], [], [], [], [].

3. We'll use the V$LOGMNR_CONTENTS view to extract XIDUSN, XIDSLT, and XIDSQN for the transactions. This view is populated after DBMS_LOGMNR.START_LOGMNR() starts the mining. It is a dynamic view: its contents are visible only in the session that started LogMiner and are lost once that session is closed. To work around this, we'll create a staging table to hold the data from V$LOGMNR_CONTENTS after mining.

SQL> CREATE TABLE TEST_LOGMNR AS SELECT * FROM V$LOGMNR_CONTENTS WHERE 1=0;

4. Since we'll be mining the contents of the standby logs on the logical standby, we need the data dictionary of the primary database. It can be generated on the primary with:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

To find the resulting file name, we can use the following commands:

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';

Once we know this file name, we need to wait until the file is copied to the logical standby. This file has to be part of every mining run. Let's assume the file name is +FLASH_DG/CYPDR/SARCH/1_4579_641082275.dbf.
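One way to confirm the dictionary log has reached the standby is to look for it among the registered logs. This is a sketch, assuming the dictionary log registers in DBA_LOGSTDBY_LOG like any other shipped log; the sequence number in the LIKE pattern is from the example file name above.

```sql
-- On the logical standby: has the dictionary log (sequence 4579) arrived?
SELECT THREAD#, SEQUENCE#, FILE_NAME
FROM   DBA_LOGSTDBY_LOG
WHERE  FILE_NAME LIKE '%4579%';
```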

If we do not include this file in the mining process, we'll get errors like the following after starting it:

ERROR at line 1:
ORA-01371: Complete LogMiner dictionary not found
ORA-06512: at "SYS.DBMS_LOGMNR", line 58
ORA-06512: at line 1

ERROR at line 1:
ORA-01295: DB_ID mismatch between dictionary USE_ONLINE_CATALOG and logfiles
ORA-06512: at "SYS.DBMS_LOGMNR", line 58
ORA-06512: at line 2

===================
LOG FROM THREAD# 3
===================

$ sqlplus / as sysdba

SQL> BEGIN
DBMS_LOGMNR.ADD_LOGFILE ('+FLASH_DG/CYPDR/SARCH/1_4579_641082275.dbf');
DBMS_LOGMNR.ADD_LOGFILE ('+FLASH_DG/STDBY/SARCH/3_4687_641082275.dbf');
END;
/
SQL> EXEC DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.PRINT_PRETTY_SQL + DBMS_LOGMNR.NO_SQL_DELIMITER)
SQL> INSERT INTO TEST_LOGMNR SELECT * FROM V$LOGMNR_CONTENTS
/
SQL> COMMIT;
SQL> EXIT;

===================
LOG FROM THREAD# 2
===================

$ sqlplus / as sysdba

SQL> BEGIN
DBMS_LOGMNR.ADD_LOGFILE ('+FLASH_DG/CYPDR/SARCH/1_4579_641082275.dbf');
DBMS_LOGMNR.ADD_LOGFILE ('+FLASH_DG/STDBY/SARCH/2_3793_641082275.dbf');
END;
/
SQL> EXEC DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.PRINT_PRETTY_SQL + DBMS_LOGMNR.NO_SQL_DELIMITER)
SQL> INSERT INTO TEST_LOGMNR SELECT * FROM V$LOGMNR_CONTENTS
/
SQL> COMMIT;
SQL> EXIT;

===================
LOG FROM THREAD# 1
===================

$ sqlplus / as sysdba

SQL> BEGIN
DBMS_LOGMNR.ADD_LOGFILE ('+FLASH_DG/CYPDR/SARCH/1_4579_641082275.dbf');
DBMS_LOGMNR.ADD_LOGFILE ('+FLASH_DG/STDBY/SARCH/1_4574_641082275.dbf');
END;
/
SQL> EXEC DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.PRINT_PRETTY_SQL + DBMS_LOGMNR.NO_SQL_DELIMITER)
SQL> INSERT INTO TEST_LOGMNR SELECT * FROM V$LOGMNR_CONTENTS
/
SQL> COMMIT;
SQL> EXIT;

NOTE: The common file in all LogMiner runs is the data dictionary log received from the primary.
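Although exiting the session releases the LogMiner resources, it is cleaner to end each mining run explicitly once V$LOGMNR_CONTENTS has been copied, before EXIT:

```sql
-- Release LogMiner memory held by this session.
EXEC DBMS_LOGMNR.END_LOGMNR
```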

STEP 3

Disable the database guard and stop the SQL Apply process.

SQL> ALTER SESSION DISABLE GUARD;
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

Delete the data from the designated tables on the logical standby manually, using the same delete statement(s) that were run on the primary.

SQL> delete from TABA where id=100;
1000000 Rows deleted.
SQL> delete from TABB where id=100;
1000000 Rows deleted.
SQL> delete from TABC where id=100;
1000000 Rows deleted.
SQL> delete from TABD where id=100;
1000000 Rows deleted.
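Don't forget to commit the manual deletes. A quick sanity check (using the table and predicate from the example above) confirms the rows are gone:

```sql
COMMIT;

-- Should return 0 for each table touched by the manual deletes.
SELECT COUNT(*) FROM TABA WHERE id = 100;
```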

STEP 4

We extracted XIDUSN, XIDSLT, and XIDSQN in STEP 2. Now it's time to use them to skip the corresponding delete transactions, using the DBMS_LOGSTDBY.SKIP_TRANSACTION procedure.

SQL> SELECT 'EXEC DBMS_LOGSTDBY.SKIP_TRANSACTION (' || XIDUSN || ',' || XIDSLT || ',' || XIDSQN || ');' FROM
TEST_LOGMNR WHERE SEG_OWNER = 'TEST' AND OPERATION = 'DELETE' GROUP BY XIDUSN, XIDSLT, XIDSQN
/
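With many transactions, it is convenient to spool the generated calls into a script and run it in one go. This is a sketch; the SET options and the file name skip_txn.sql are illustrative.

```sql
-- Generate one SKIP_TRANSACTION call per mined delete transaction
-- and spool them into a script.
SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SPOOL skip_txn.sql
SELECT 'EXEC DBMS_LOGSTDBY.SKIP_TRANSACTION (' || XIDUSN || ',' || XIDSLT || ',' || XIDSQN || ');'
FROM   TEST_LOGMNR
WHERE  SEG_OWNER = 'TEST' AND OPERATION = 'DELETE'
GROUP  BY XIDUSN, XIDSLT, XIDSQN;
SPOOL OFF

-- Execute the generated calls.
@skip_txn.sql
```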

SQL> EXEC DBMS_LOGSTDBY.SKIP_TRANSACTION (62,39,14590);
SQL> EXEC DBMS_LOGSTDBY.SKIP_TRANSACTION (62,27,14605);
SQL> EXEC DBMS_LOGSTDBY.SKIP_TRANSACTION (32,12,80660);

Verify the skipped transactions with:

SQL> SELECT * FROM DBA_LOGSTDBY_SKIP_TRANSACTION
/

STEP 5

Enable the database guard and start SQL Apply.

SQL> ALTER SESSION ENABLE GUARD;
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

STEP 6

Once the logical standby is in sync with the primary database, we can once again disable the guard, stop SQL Apply, un-skip the transactions, re-enable the guard, and restart SQL Apply.
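One way to check whether the standby has caught up is the DBA_LOGSTDBY_PROGRESS view:

```sql
-- APPLIED_SCN close to NEWEST_SCN indicates the standby has caught up.
SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
```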

SQL> ALTER SESSION DISABLE GUARD;
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

SQL> EXEC DBMS_LOGSTDBY.UNSKIP_TRANSACTION (62,39,14590);
SQL> EXEC DBMS_LOGSTDBY.UNSKIP_TRANSACTION (62,27,14605);
SQL> EXEC DBMS_LOGSTDBY.UNSKIP_TRANSACTION (32,12,80660);

SQL> ALTER SESSION ENABLE GUARD;
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Conclusion: To speed up the deletion on the logical standby, we executed the same bulk delete statements manually on the standby and skipped the corresponding mined transactions, avoiding row-by-row delete execution. This was implemented using the built-in DBMS_LOGMNR and DBMS_LOGSTDBY packages.

Warning: If DBMS_LOGSTDBY.SKIP_TRANSACTION is used with the wrong XIDUSN, XIDSLT, or XIDSQN, it may corrupt the logical standby. One may then have to re-instantiate the in-doubt tables from the primary.


2 Responses to Speed-up Bulk Delete on Logical Standby

  1. Francisco Tejeda says:

    Hello Yogesh,

    I had a problem once with bulk delete on a Logical Standby. I opened an SR, and what they suggested was to change the value of EAGER_SIZE, which didn't help much.

    I tested your solution in a TEST environment, and it worked. But since the delete affected more than two million rows, I didn't skip transactions but DML on the table. In this case the procedure would be:

    alter database stop logical standby apply;
    alter session disable guard;
    EXECUTE DBMS_LOGSTDBY.SKIP (stmt => 'DML', schema_name => 'TEST', object_name => 'OBJECT');
    alter session enable guard;
    alter database start logical standby apply immediate;

    After skipping the Bulk Delete, the next thing to do is reinstantiate the table.

    Thanks a lot for the idea Yogesh.

    Best Regards.

  2. ybhandarkar says:

    Sure … that is another way, provided your standby site is not active. In my case it was a reporting site, so I did not reinstantiate.

    -Yogesh
