Wednesday, June 6, 2018

RDBMS -- datapump -- 3 interesting problems and 3 things to consider

Recently I dealt with some critical and interesting Data Pump problems.
These problems were basically related to corrupted dump files, incomplete dump files, missing master tables, and performance problems during a partitioned table import.

Let's start with the corrupted dump files.
This one is clear: if you have a corrupt dumpfile in your Data Pump backup set, you won't be able to repair it. At least Oracle doesn't have a method or tool to correct it.
There are one or two third-party tools, but I didn't test them. Those tools don't support compressed dumpfiles anyway.
On the other hand, you can still import even if you have corrupted dumpfiles in your backup set.
Of course, you will get fatal errors during the import. That is unavoidable, but you can restart the import process and resume it by skipping those errors. At the end, you may have your data imported into your database, at least partially.
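Just to illustrate the flow (the credentials, directory, schema and job name below are only placeholders, and whether SKIP_CURRENT can step over your particular failure depends on where the dumpfile is damaged), the restart looks roughly like this:

  # the import stops with a fatal error when it reaches the corrupt portion of the dump
  impdp system/*** DIRECTORY=DP_DIR DUMPFILE=exp_full_%U.dmp SCHEMAS=HR JOB_NAME=IMP_SKIP_TEST

  # attach to the stopped job from another session and restart it, skipping the failing action
  impdp system/*** ATTACH=IMP_SKIP_TEST
  Import> START_JOB=SKIP_CURRENT
  Import> CONTINUE_CLIENT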

The other interesting problem was related to incomplete dump files.

In our case, someone killed the expdp process during the export. One day later, we tried to import the dumpfiles, but we got errors. Some of the dumpfiles were not there, and the master table was not there either. The master table actually still existed in the database, but it had not been written into any of the files in the export backup set. So we couldn't import the tables; Data Pump kept complaining about the missing master table.

Actually, all the critical tables had already been exported before someone killed the expdp. So what we needed was a way to import those dumpfiles. However, impdp was not doing anything; it was complaining about the master table.

What we did was interesting: we resumed the expdp job (it had been killed 24 hours earlier). So expdp continued, finished all the other tables in the list, exported the MASTER TABLE (expdp writes this table to the dumpfile set at the very end) and finished its work.

Of course, the tables exported in this second run were not from the same timestamp as the earlier ones, but in the end we got our MASTER TABLE included in the backup set and could import those critical tables using this backup set.
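For reference, here is roughly how such a resume looks (the job name below is hypothetical; the real one can be found in DBA_DATAPUMP_JOBS). Note that this works because only the expdp client process was killed, so the job itself was merely in a stopped state:

  -- find the stopped export job and its state
  SQL> SELECT owner_name, job_name, state FROM dba_datapump_jobs;

  # attach to the stopped job and resume it; expdp finishes the remaining tables
  # and writes the master table into the dumpfile set at the very end
  expdp system/*** ATTACH=SYS_EXPORT_TABLE_01
  Export> START_JOB
  Export> CONTINUE_CLIENT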

The last thing that I want to share with you is related to impdp performance. Impdp can't load a partitioned table in parallel when the table is already created in the database. At least, this is the case for 11gR2.

So, if you are going to import a partitioned table using the parallel argument, I suggest letting impdp create that partitioned table. If you create the table beforehand, you will see impdp running serially even though you pass the parallel argument to it. This is based on a real-life case. Hope you'll find it useful.
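To make that last point concrete, here is a rough sketch (directory, dumpfile names, table name and parallel degree are just placeholders). The idea is not to pre-create the partitioned table, or to let impdp drop and re-create it with TABLE_EXISTS_ACTION=REPLACE, so the data can be loaded in parallel:

  # if the partitioned table already exists, REPLACE lets impdp re-create it itself;
  # with the table pre-created, the data load runs serially despite PARALLEL=8
  impdp system/*** DIRECTORY=DP_DIR DUMPFILE=exp_sales_%U.dmp \
        TABLES=SALES.SALES_PART TABLE_EXISTS_ACTION=REPLACE PARALLEL=8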

