Wednesday, August 11, 2021

Exadata X8M-2 & PCA X8-2 -- Part 3 Migration / OFFLINE Datafile preventing XTTS

Quick intro; we already migrated the DWH to Exadata X8M-2 and saw a dramatic increase in ETL performance without touching anything, without doing any migration-specific tuning :)

Today's blog post is about a different topic though.. I mean, it is related to this migration project, but it is not directly about the migration operations themselves. It is about something that prevented us from migrating a database to Exadata X8M-2 using the XTTS method. (Remember; in this project, our source platform is HP-UX.)

The thing preventing us from implementing our migration method was an offline datafile.. It had been added to an important tablespace years ago, but then it was taken offline somehow.. As you may guess, we could not bring it back online, as we no longer had the archivelogs generated back then..

In short, we had an offline datafile in the customer's database that could not be recovered.. And we couldn't drop that datafile either; it needed recovery..

That datafile was already empty; it was just an unnecessary blocker..
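Just to make it concrete; the following is a minimal way to spot such a blocker on the source database (the real file numbers, names and SCNs are of course not shown here);

-- datafiles that are not ONLINE (and not SYSTEM):
SELECT file#, name, status FROM v$datafile WHERE status NOT IN ('ONLINE', 'SYSTEM');

-- v$recover_file shows the recovery requirement; in our case the needed archivelogs were long gone:
SELECT file#, online_status, error, change#, time FROM v$recover_file;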

Anyways.. Because of this datafile, the XTTS method was failing with the error ORA-20001: TABLESPACE(S) IS READONLY OR, OFFLINE JUST CONVERT, COPY.

Basically, XTTS didn't like that file (because it was offline) and that's why the perl scripts produced ORA-20001..
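Just to illustrate (this is not the actual code inside the scripts, only a sketch of the kind of check behind that error); the XTTS prepare phase basically refuses to continue when a datafile of a tablespace to be transported is not online;

-- a rough, illustrative equivalent of the check (the tablespace name is hypothetical):
DECLARE
  l_cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO l_cnt
    FROM dba_data_files
   WHERE tablespace_name IN ('DATA_TBS')
     AND online_status NOT IN ('ONLINE', 'SYSTEM');
  IF l_cnt > 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'TABLESPACE(S) IS READONLY OR, OFFLINE JUST CONVERT, COPY');
  END IF;
END;
/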

As you may guess, we modified the XTTS script.. (perl is my favorite language for these kinds of system programming tasks :) In order to do this, we needed to update the XTTS perl scripts, specifically the one named "xttprep.tmpl".. So we told XTTS to ignore this file in the relevant tablespace and migrated all the other files without getting any errors. Of course, we did this in the TEST environment.

We updated xttprep.tmpl so that it would not pick up the problematic datafile in the first place..

We did the update in the related part of the file as follows;

$ELSE
gv$datafile
$END d
, v$tablespace t
WHERE d.ts# = t.ts#
AND d.file# not in (90)   -- <-- WE ADDED THE FILE# OF THE PROBLEMATIC DATAFILE
AND d.inst_id = USERENV('INSTANCE')
AND t.name IN (
%%TABLESPACES%%
)

With this change, ORA-20001 was not produced and we could continue working.. However, we couldn't take the necessary action for accomplishing the last task of the XTTS method.. We just couldn't take the source datafile offline for the final sync.. So modifying the script didn't solve the issue completely..

--Note that we still tried to stay aligned with the documentation and the XTTS method, and we didn't feel good about this modification anyway.

We also had another workaround, and it was focused on getting rid of this file :)

That workaround was based on modifying the internal (data dictionary) tables directly;

SQL> delete file$ where FILE#=65;
SQL> commit;
SQL> shutdown immediate
SQL> startup

We implemented this workaround in the test environment and didn't encounter any problems.. However, we couldn't get Oracle Support's approval for it, and that's why we didn't take it into production.. A big risk, right?..

Well, manually modifying the data dictionary (delete file$ where FILE#=65;) is not recommended, as it might lead to other inconsistencies in the database.

Let's see the facts;
  • If a datafile in a locally managed tablespace is offline, it cannot be dropped. DROP Datafile And Its Restrictions (Doc ID 781225.1) -- see the short illustration after this list.
  • BBED is not available anymore in the latest versions. (BBED is one of my favorites, so it could have saved us here, but no luck..)
  • As a prerequisite of the XTTS procedure, all the datafiles in the source tablespaces should be online.
  • "The set of tablespaces being moved must all be online, and contain no offline data files. Tablespaces must be READ WRITE." Doc: V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)
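A short illustration of those restrictions (the file number and path below are made up, not the real ones); both of the obvious ways out fail for an unrecoverable offline datafile sitting in a locally managed tablespace;

ALTER DATABASE DATAFILE 90 ONLINE;
-- fails; the file needs media recovery (ORA-01113), and the required archivelogs no longer exist

ALTER TABLESPACE data_tbs DROP DATAFILE '/oradata/PROD/data_tbs_02.dbf';
-- fails; an offline datafile in a locally managed tablespace cannot be dropped (Doc ID 781225.1)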
So we needed to follow the hard and long way to fix this problem.. Available options:
  • Use Datapump for the Exadata migration.
  • Use GoldenGate (or any other application that can do CDC -- Oracle to Oracle -- such as STRIIM) for the Exadata migration.
  • Create a new tablespace and use CTAS or ALTER TABLE MOVE to move the objects into it, then drop the old tablespace (a tablespace-level drop). Once the tablespace is dropped, use XTTS for the Exadata migration. (A rough sketch of this option follows the list.)
  • Create a new tablespace. Use ALTER TABLE MOVE ONLINE (in our case it couldn't be used due to the RDBMS version) to move the objects into it, then drop the old tablespace.. Once the tablespace is dropped, use XTTS for the Exadata migration.
  • Create a new tablespace. Use ONLINE REDEFINITION to move the objects into it, then drop the old tablespace.. Once the tablespace is dropped, use XTTS for the Exadata migration.
  • Create a new tablespace. Use DATAPUMP to move the objects into it, then drop the old tablespace.. Once the tablespace is dropped, use XTTS for the Exadata migration.
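For reference; a rough sketch of that third option (all the tablespace, object and size names below are just hypothetical placeholders);

-- create the new tablespace (OMF assumed here):
CREATE TABLESPACE new_data_tbs DATAFILE SIZE 10G AUTOEXTEND ON;

-- move the segments out of the problematic tablespace (not an online operation on this RDBMS version, so it needs a maintenance window):
ALTER TABLE appuser.some_table MOVE TABLESPACE new_data_tbs;
ALTER INDEX appuser.some_table_pk REBUILD TABLESPACE new_data_tbs;

-- once DBA_SEGMENTS shows nothing left in the old tablespace, drop it together with the offline datafile:
DROP TABLESPACE old_data_tbs INCLUDING CONTENTS AND DATAFILES;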

Well.. The available options.. They all had different advantages and disadvantages, different prerequisites and downtime requirements.. GoldenGate-like applications required a license.. Datapump-based Exadata migration methods introduced extra downtime..

So we decided to fix this problem in place.. We decided to get rid of that datafile and designed a hybrid solution combining 2 of the options above.. Well.. We used Datapump + ONLINE REDEFINITION to move the data from the problematic tablespace to the new one..

We had 2 TB to move and we had big, big LOB columns there.. So we migrated the tables which had LOB columns with online redefinition (you can think of it as an online move operation), and then we used Datapump to move the small-sized tables from the problematic tablespace to the newly created one (this required a short downtime).. The redefinition part is sketched below.
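The following is a minimal sketch of that redefinition (schema, table and tablespace names are made up, and the real runs also dealt with grants, statistics and error handling); it moves a LOB table into the new tablespace while the application keeps running;

-- interim table pre-created in the new tablespace:
CREATE TABLE appuser.big_lob_table_int TABLESPACE new_data_tbs
AS SELECT * FROM appuser.big_lob_table WHERE 1 = 0;

DECLARE
  l_errors PLS_INTEGER;
BEGIN
  -- check the table is a valid candidate (a primary key is required for CONS_USE_PK):
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APPUSER', 'BIG_LOB_TABLE', DBMS_REDEFINITION.CONS_USE_PK);

  -- start the redefinition against the interim table living in the new tablespace:
  DBMS_REDEFINITION.START_REDEF_TABLE('APPUSER', 'BIG_LOB_TABLE', 'BIG_LOB_TABLE_INT',
                                      options_flag => DBMS_REDEFINITION.CONS_USE_PK);

  -- copy indexes, constraints, triggers and grants onto the interim table:
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APPUSER', 'BIG_LOB_TABLE', 'BIG_LOB_TABLE_INT',
                                          num_errors => l_errors);

  -- swap; the original table name now points to the segments in NEW_DATA_TBS:
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APPUSER', 'BIG_LOB_TABLE', 'BIG_LOB_TABLE_INT');
END;
/

For the small tables, a Datapump import with the REMAP_TABLESPACE parameter (old tablespace to the new one) is the typical way to land them in the new tablespace during such a short downtime window.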
It was a long and exciting journey, which will be explained in my next blog post..

Stay tuned :)

