Friday, August 13, 2021

Exadata X8M-2 & PCA X8-2 -- Part 3 Migration / ONLINE REDEFINITION for Big LOB Tables

In the previous post, I mentioned that we decided to move the objects residing in the problematic tablespace (the one with an offline datafile) to a new tablespace, and that we settled on a hybrid method for accomplishing this task. We planned to move the big LOB tables using Online Redefinition. In addition to that, we decided to use datapump (expdp - impdp) to move the small-sized tables to the new tablespace with minimal downtime.

In this blog post, I will take a look at the Online Redefinition side of the work and give you general information about the process.

First of all, we couldn't move the tables online with the alter table move online clause, since the database version was 12.1.
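(For reference; online table move came with 12.2, so on a 12.2+ database a simple statement like the one below would do the job. This is just a sketch of what was not available to us on 12.1, not something we ran;)

ALTER TABLE ERMAN.S_ERMAN MOVE TABLESPACE ERMAN_NEW ONLINE;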

We couldn't use expdp-impdp to move the big LOB tables either, because it required a big downtime, and the LOBs were Basicfile -- expdp cannot leverage parallelism for Basicfile LOBs.

So, we decided to move those big LOB tables (some of them were bigger than 1.5 TB) using online redefinition, and it worked almost perfectly!

Note that no one needs to be afraid of online redefinition, as it is driven by basic API calls and it is pretty straightforward. It is based on materialized views (MVs) and the refresh technology that we have had in Oracle Databases for a very long time..

What happens in the background is an MV-based refresh, plus alter-table-rename-based switch operations..

So, an MV is created, then it is fully refreshed, then it is kept in sync with fast refreshes (with the help of the MV logs), and then the object names are switched -- just after the final sync...

All of this is done via the API. So you don't need to know the internals to do the work..
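(If you are curious, you can actually see this machinery in the dictionary while a redef is running. The names below are from our example, and these queries would only return rows while the redefinition is in progress -- just a side note;)

select log_table from dba_mview_logs where master = 'S_ERMAN';
select mview_name, refresh_method from dba_mviews where mview_name = 'S_ERMAN_NEW';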

Let's go through the process with an example;

We first created our target/new tablespace with the CREATE TABLESPACE clause;

CREATE TABLESPACE ERMAN_NEW DATAFILE
'+ERM' SIZE 32767M AUTOEXTEND ON NEXT 1G MAXSIZE 32767M,
'+ERM' SIZE 32767M AUTOEXTEND ON NEXT 1G MAXSIZE 32767M,
'+ERM' SIZE 32767M AUTOEXTEND ON NEXT 1G MAXSIZE 32767M
LOGGING DEFAULT NO INMEMORY ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K SEGMENT SPACE MANAGEMENT AUTO FLASHBACK ON;

We checked whether we could use online redefinition for moving the table we were interested in.

BEGIN
 DBMS_REDEFINITION.CAN_REDEF_TABLE('ERMAN', 'S_ERMAN', DBMS_REDEFINITION.CONS_USE_ROWID);
END;
/

We checked the source LOB columns.. We didn't want to get ORA-01555 or ORA-22924 in the middle of our operation.. So we used a big enough LOB RETENTION (based on undo retention) or PCTVERSION 100 for the source LOB columns (we altered them that way).
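(To base the LOB RETENTION on undo retention, we first checked the setting -- a quick, generic check;)

select value from v$parameter where name = 'undo_retention'; -- value is in seconds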

Note that if you see ORA-22924 in the same error stack as ORA-01555, it means the problem is directly related to the LOBs.

Error: ORA-22924
Text: snapshot too old
---------------------------------------------------------------------------
Cause: The version of the LOB value needed for the consistent read was already overwritten by another writer.

Note that we use either PCTVERSION or RETENTION, not both at the same time.

select table_name, segment_name, pctversion, retention
from dba_lobs where table_name in ('S_ERMAN');

In our case, we used PCTVERSION, since it was the more guaranteed method.

Some examples for altering the LOBs -> 

alter table ERMAN.S_ERMAN modify lob(VIDEO) (retention);
alter table SIEBEL.S_ERMAN modify lob(PICTURE) (PCTVERSION 100);

Well, after creating the new tablespace and altering the LOBs, we created our target table.. The target table had the same structure as the source (of course, we could make some improvements in the definition, like changing the LOBs from Basicfile to Securefile).. We created the new table in the new tablespace..

CREATE TABLE ERMAN.S_ERMAN_NEW (
ROW_ID VARCHAR2(15 CHAR),
CREATED DATE DEFAULT sysdate,
CREATED_BY VARCHAR2(15 CHAR),
LAST_UPD DATE DEFAULT sysdate,
.....
...
LOB (PICTURE) STORE AS SECUREFILE (
  TABLESPACE ERMAN_NEW ENABLE STORAGE IN ROW CHUNK 8192
  RETENTION AUTO NOCACHE LOGGING
  STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED PCTINCREASE 0 BUFFER_POOL DEFAULT))
...
LOGGING NOCOMPRESS NOCACHE MONITORING ENABLE ROW MOVEMENT;

We checked our redo logs and ensured they were sized to support this heavy operation.

SELECT a.GROUP#, a.THREAD#, a.SEQUENCE#, a.ARCHIVED, a.STATUS,
 b.MEMBER AS REDOLOG_FILE_NAME, (a.BYTES/1024/1024) AS SIZE_MB
FROM v$log a
JOIN v$logfile b ON a.GROUP# = b.GROUP#
ORDER BY a.GROUP#;

We could add some redo logs if we thought it was required for the operation.. (Big tables produce big redo -- and archivelogs.)

Example: ALTER DATABASE ADD LOGFILE GROUP 4 ('+ERMAN','+RECO') SIZE 2048M;
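(A generic way to see how busy the redo already is -- log switches per hour for the last day; this is a standard check, not specific to redefinition;)

SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS log_switches
FROM v$log_history
WHERE first_time > SYSDATE - 1
GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER BY 1;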

Well, we finally started our redefinition operation :) ->
--we started it with parallel degree 4 in this case.

ALTER SESSION FORCE PARALLEL DML PARALLEL 4;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4;

BEGIN
 DBMS_REDEFINITION.START_REDEF_TABLE(
 uname => 'ERMAN',
 orig_table => 'S_ERMAN',
 int_table => 'S_ERMAN_NEW',
 options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
END;
/

Note that if we had a primary key in the source table, we would use CONS_USE_PK instead of CONS_USE_ROWID..
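(For completeness, the PK-based variant would look like the following -- a sketch, not what we ran in this case;)

BEGIN
 DBMS_REDEFINITION.START_REDEF_TABLE(
 uname => 'ERMAN',
 orig_table => 'S_ERMAN',
 int_table => 'S_ERMAN_NEW',
 options_flag => DBMS_REDEFINITION.CONS_USE_PK);
END;
/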

START_REDEF_TABLE takes some time, as it is where the initial full copy happens.
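(You can follow the progress of that full copy from another session with a generic v$session_longops check;)

SELECT sid, opname, sofar, totalwork, ROUND(sofar/totalwork*100, 1) AS pct_done
FROM v$session_longops
WHERE totalwork > 0 AND sofar < totalwork;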

After START_REDEF_TABLE completed, we copied the dependent objects from the source table to the target table.

DECLARE
 error_count pls_integer := 0;
BEGIN
 DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('ERMAN', 'S_ERMAN', 'S_ERMAN_NEW',
 dbms_redefinition.cons_orig_params, FALSE, TRUE, TRUE, FALSE, error_count);
 DBMS_OUTPUT.PUT_LINE('errors := ' || TO_CHAR(error_count));
END;
/

Note that if you have materialized view logs on the table that you want to redefine, then you should consider using the copy_mvlog argument of COPY_TABLE_DEPENDENTS.. It should be TRUE (by default it is FALSE).. Alternatively, you can leave it FALSE and recreate the MV logs after you finish the redef.
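(A sketch of such a call with named parameters -- adapt the arguments to your own case;)

DECLARE
 error_count pls_integer := 0;
BEGIN
 DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
 uname => 'ERMAN',
 orig_table => 'S_ERMAN',
 int_table => 'S_ERMAN_NEW',
 copy_mvlog => TRUE,
 num_errors => error_count);
END;
/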

You can check the definitions of the arguments used in the COPY_TABLE_DEPENDENTS procedure, but one thing to point out for our case is that we told the procedure not to copy the triggers..

DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
 uname IN VARCHAR2,
 orig_table IN VARCHAR2,
 int_table IN VARCHAR2,
 copy_indexes IN PLS_INTEGER := 1,
 copy_triggers IN BOOLEAN := TRUE,
 copy_constraints IN BOOLEAN := TRUE,
 copy_privileges IN BOOLEAN := TRUE,
 ignore_errors IN BOOLEAN := FALSE,
 num_errors OUT PLS_INTEGER,
 copy_statistics IN BOOLEAN := FALSE);

We did so because of a bug in our release; an ORA-600 was produced in the later sync phase..
(ORA-00600: internal error code, arguments: [kglidinsi1], [2], [], [], [], [], [], [], [], [], [], [])

So we decided to create the triggers manually, just after the redef finished / just before letting the applications access the new-current table..
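(To be able to re-create them by hand, the trigger DDLs can be pulled from the source with DBMS_METADATA -- a generic sketch;)

select DBMS_METADATA.GET_DDL('TRIGGER', trigger_name, owner)
from dba_triggers
where table_owner = 'ERMAN' and table_name = 'S_ERMAN';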

Just after copying the dependent objects, we put an exclusive lock on the table (of course, the relevant application modules were stopped at the same time). Normally, you don't have to do this, but we just wanted to guarantee no access. We wanted that guarantee because, in case of a failure in the later stages of the redefinition, we might want to continue the redefinition manually.. I mean dropping the MV and making the new table the actual one by hand.

--again; it is not a must.
LOCK TABLE ERMAN.S_ERMAN IN EXCLUSIVE MODE;

So, before finishing our redef, we compared the target and source tables and ensured they were structurally similar. We also checked the invalid object count, just in case. We even checked the row counts -- just in case.

select * from dba_objects where status='INVALID'; 
select * from dba_triggers where table_name='S_ERMAN'; 
select * from dba_indexes where status!='VALID'; 
select count(*) from ERMAN.S_ERMAN;
select count(*) from ERMAN.S_ERMAN_NEW;
select count(*), 'source_table' from dba_constraints where table_name = 'S_ERMAN'
UNION ALL
select count(*), 'interim_table' from dba_constraints where table_name = 'S_ERMAN_NEW';

select count(*), 'source_table' from dba_triggers where table_name = 'S_ERMAN'
UNION ALL
select count(*), 'interim_table' from dba_triggers where table_name = 'S_ERMAN_NEW';

select count(*), 'source_table' from dba_indexes where table_name = 'S_ERMAN'
UNION ALL
select count(*), 'interim_table' from dba_indexes where table_name = 'S_ERMAN_NEW';

Well, at this point we synchronized the new table (a fast refresh);
EXEC DBMS_REDEFINITION.sync_interim_table('ERMAN','S_ERMAN','S_ERMAN_NEW'); 

Lastly, we finished the redef, and our new table was actualized at this point.

EXEC DBMS_REDEFINITION.finish_redef_table('ERMAN','S_ERMAN','S_ERMAN_NEW');

Note that in case you get any errors at this stage, you should check the details of the error, find out at which step it failed, and then you may find a way to continue manually from there.. However, the recommended approach is to abort the redef, solve the problem, and start another redef/retry.

Abort redef example:

EXEC DBMS_REDEFINITION.abort_redef_table('ERMAN','S_ERMAN','S_ERMAN_NEW');

Well, after the redef was finished, we checked the new table and ensured it was actualized: its name was switched with the old current table, and it was now in the new tablespace..

select table_name,tablespace_name from dba_tables where table_name in('S_ERMAN','S_ERMAN_NEW');

Note that S_ERMAN is the new table now, and it is in the new tablespace.. So the names/tables were switched!

Two important notes on two dangerous points;

1) Table statistics are important. If you do not copy statistics in the copy-dependents phase, you may get yourself into trouble. So either copy them in the copy-dependents phase, or copy them manually before starting the redef (this is also a little adventurous -- I mean, copying table stats from one table to another in the same database is a little tricky, requires a hack, and is not always successful..)
Statistics level is also important.. In some environments, column-level statistics should also be there..

Stats collection example: 

EXEC dbms_stats.gather_table_stats('ERMAN','S_ERMAN', method_opt=> 'FOR ALL COLUMNS SIZE AUTO', granularity => 'ALL', degree => 8 ,cascade => true, estimate_percent => dbms_stats.auto_sample_size);

Note that we also have the copy_statistics argument available in COPY_TABLE_DEPENDENTS.. So we may even have the redefinition handle that statistics task for us during the redef process...
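(For the record, the manual stats-copy hack mentioned in note 1 goes roughly like this -- a sketch; STATS_STAGE is just a made-up staging table name, and the C1 column of the stat table holds the table name;)

EXEC DBMS_STATS.CREATE_STAT_TABLE('ERMAN', 'STATS_STAGE');
EXEC DBMS_STATS.EXPORT_TABLE_STATS('ERMAN', 'S_ERMAN', stattab => 'STATS_STAGE');
-- the hack: retarget the exported rows at the interim table
UPDATE ERMAN.STATS_STAGE SET c1 = 'S_ERMAN_NEW' WHERE c1 = 'S_ERMAN';
COMMIT;
EXEC DBMS_STATS.IMPORT_TABLE_STATS('ERMAN', 'S_ERMAN_NEW', stattab => 'STATS_STAGE');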

2) Constraints may be created on the target table without ENABLE VALIDATE.. So you might need to alter the target table to make them ENABLE VALIDATE..
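(A generic check & fix for that -- the constraint name below is hypothetical;)

select constraint_name, status, validated
from dba_constraints
where owner = 'ERMAN' and table_name = 'S_ERMAN' and validated = 'NOT VALIDATED';

ALTER TABLE ERMAN.S_ERMAN ENABLE VALIDATE CONSTRAINT S_ERMAN_PK;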

Well, before finishing the work we did the following checks;

--Check errors 
 SELECT OBJECT_NAME, BASE_TABLE_NAME, DDL_TXT FROM DBA_REDEFINITION_ERRORS;

-- VERIFY THAT THE PRIMARY KEY WAS COPIED TO THE REDEFINED TABLE

select DBMS_METADATA.GET_DDL('TABLE','S_ERMAN','ERMAN') FROM DUAL;

We could have dropped the old table.. But we didn't.. We didn't have any space pressure, and the migration that we would do later was based on the XTTS method, so we had the opportunity to tell XTTS not to migrate the old tablespace in the first place.

-- DROP THE OLD TABLE (as it is no longer needed) 
 drop table S_ERMAN_NEW;

Okay. I suppose we are done for today :)

Next post will be about those remaining small tables, which were moved using datapump..
I know using datapump is crystal clear for most of you, but we are telling our story here, the story of a big migration project :) So I try to explain every stage and share real-life cases as much as I can :)

Stay tuned.
