Tuesday, August 17, 2021

Exadata X8M-2 & PCA X8-2 -- Part 3 Migration / Datapump for relatively small tables and indexes + Datapump dump file I/O

In the previous post, we used online redefinition to move the big tables with large LOB columns from one tablespace to another.

In this post, we will take a look at the datapump process that we used for moving the small-sized tables (LOB-free tables smaller than 10GB).

Datapump is a tool that we use in logical migrations, logical backups, upgrades and some other daily operations, and I think you all know it well. So this blog post is written to tell you the story, the general process -- not the details of the well-known datapump tool.
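Just a side note before the story; a query along the following lines can be used to list the candidate tables -- the LOB-free ones in the source tablespace that are smaller than 10GB. (The tablespace name is only the example name used in the check queries below, and this simple sketch looks at non-partitioned table segments only.)

select s.owner, s.segment_name, round(sum(s.bytes)/1024/1024/1024,2) size_gb
from dba_segments s
where s.tablespace_name='ERMAN_TBS'
and s.segment_type='TABLE'
and not exists (select 1 from dba_lobs l where l.owner=s.owner and l.table_name=s.segment_name)
group by s.owner, s.segment_name
having sum(s.bytes) < 10*1024*1024*1024
order by size_gb desc;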

Let's start with an interesting event that we suddenly found ourselves in..  That is, the datapump export was very slow. We were exporting the data into an ACFS mount exported from an Exadata (using NFS -- through a 10 gig network).. So it shouldn't have been anywhere near slow.. But surprisingly, it was...

We saw Datapump dump file I/O wait events in the parallel export sessions.. So we suspected the ACFS..  We did a couple of dd tests on the server (HP-UX in this case).. Pure sequential I/O to the NFS share was slow.. However, the dd tests on the Exadata side performed well.. Then we saw that a parallel & full RMAN backup was running on the Exadata side. A backup software (through an RMAN channel) was transferring the blocks to a network storage.. There was no dedicated backup network there.. So it was the network bandwidth causing the slowness.. Our ACFS I/O was based on NFS in the first place, so we cancelled the backup and everything went fine!  -- an interesting event, right?
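Just to illustrate; the dd tests we run in this kind of situation are simple sequential read/write tests, something like the ones below (the mount point and file name are made-up examples):

time dd if=/dev/zero of=/acfs_nfs_mount/ddtest.tmp bs=1024k count=1024    # sequential write test towards the NFS share
time dd if=/acfs_nfs_mount/ddtest.tmp of=/dev/null bs=1024k               # sequential read test from the NFS share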

Anyways, just for quick info; we are moving tables from one tablespace to another to get rid of an offline, unrecoverable datafile, because it prevents us from using XTTS for our main goal. That is, migrating one of our databases from HP-UX to Exadata X8M-2..

Let's tell this datapump part of the story as if we're doing it right now;

We have a limited downtime window, so the process should be quick.. This is a production system, so the process should be complete & error-free as well..

Here is the process;

  • Before importing the tables, we disable the DDL trigger (we usually have at least one in almost every customer environment, and they usually create problems -- grant-related access problems -- during import/impdp). See the example right after this list.
  • We also ensure that we have the required free space in the target tablespace, and that the owners of the objects we are planning to move have quota on the target tablespace.
  • We check the current situation; I mean row counts, table counts, constraints, MV logs, index counts, object validity, grants etc., and we note all the checks that we do at this point.. (We will compare them with the checks that we will do after we complete moving the objects to the target tablespace.)
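A quick example for the DDL trigger and quota part (the trigger name, the user name and the target tablespace name below are made-up examples; use your own):

alter trigger APPS.ERM_DDL_TRIGGER disable;
alter user ERMAN quota unlimited on ERMAN_NEW;
select tablespace_name, round(sum(bytes)/1024/1024/1024,2) free_gb from dba_free_space where tablespace_name='ERMAN_NEW' group by tablespace_name;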

Some easy examples for the check queries:

select table_name from dba_tables where tablespace_name='ERMAN_TBS';

select count(*) from dba_tables where tablespace_name='ERMAN_TBS'; 

select  index_name from dba_indexes where tablespace_name='ERMAN_TBS';

select  count(*) from dba_indexes where tablespace_name='ERMAN_TBS';

select index_name from dba_indexes where table_name in (select table_name from dba_tables where tablespace_name='ERMAN_TBS');

select count(*) from dba_indexes where table_name in (select table_name from dba_tables where tablespace_name='ERMAN_TBS');

select trigger_name from dba_triggers where table_name in (select table_name from dba_tables where tablespace_name='ERMAN_TBS');

select count(*) from dba_triggers where table_name in (select table_name from dba_tables where tablespace_name='ERMAN_TBS');

select * from dba_tab_privs where table_name in (select table_name from dba_tables where tablespace_name='ERMAN_TBS');

select count(*) from dba_tab_privs where table_name in (select table_name from dba_tables where tablespace_name='ERMAN_TBS');

select owner, object_type, count (*) from dba_objects where status='INVALID' group by owner, object_type order by 1,2;
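For the row counts, one simple approach is to generate the count queries per table and spool their outputs before and after the move -- something along these lines:

select 'select count(*) from "'||owner||'"."'||table_name||'";' from dba_tables where tablespace_name='ERMAN_TBS';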

  • We get our downtime, shut down the apps tier, shut down the db listener, and ensure we have no sessions in the database other than ours.. Of course we can also take other session-blocking actions, but these are enough in most of the cases..
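Example commands for this step (the listener here is the default one, and the query is just a simple sketch for spotting leftover application sessions):

lsnrctl stop
-- in sqlplus:
select sid, serial#, username, program, machine from v$session where username is not null;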
  • We export the tables we are interested in;
Example command:

nohup expdp \"/ as sysdba\" DIRECTORY=ERM_DIR  EXCLUDE=TABLE:\"IN\(\'S_ERMAN\',\'S_ERMAN_INTERIM\',\'S_ERM_EMP\',\'S_ERM_LOG\',\'S_ERM_ITEM\',\'S_ERM_RP_LOG\'\)\"   DUMPFILE=ERM_%U.dmp  LOGFILE=ERM_20210813.log TABLESPACES=ERMAN  PARALLEL=8 &

  • Then we import them into the new tablespace. I will not give all the details of the datapump arguments used in this example; I think you already understand what we are trying to do here, and the arguments are aligned with our goal. (If not, read the above once again + read the previous blog post..)
Example command:
nohup impdp \"/ as sysdba\" DIRECTORY=ERM_DIR DUMPFILE=ERM_%U.dmp LOGFILE=imp_ERM_20210813.log table_exists_action=replace remap_tablespace=ERMAN:ERMAN_NEW PARALLEL=8 &

  • After the datapump import, we do our checks (object comparisons etc..), compile the invalids, and take corrective actions for any errors that we may have had during the import.. (See the example after this list.)
  • Finally, we enable the DDL trigger (if we have one) and start our listener + applications.
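Example commands for these last steps (utlrp compiles the invalid objects; the trigger name is the same made-up example used above):

-- in sqlplus, as sysdba:
@?/rdbms/admin/utlrp.sql
alter trigger APPS.ERM_DDL_TRIGGER enable;
-- on the OS side:
lsnrctl start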
That's it.. 

At this point, we can get rid of that problematic datafile residing in the old tablespace.. We have moved everything from that tablespace to the new one, right? So it is time to drop the old tablespace (optional) and continue on our way with the Exadata XTTS migration.
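If we decide to drop it, the commands are the standard ones -- first double-check that the old tablespace is really empty (the tablespace name here follows the export/remap example above):

select count(*) from dba_segments where tablespace_name='ERMAN';
drop tablespace ERMAN including contents and datafiles;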

