Sometimes during database migrations, especially the big ones, we may need to change some of the configuration of our running datapump jobs.
While stopping/cancelling the job and executing datapump again can be a solution, it can be costly in environments where a full database export/import takes several hours.
In these kinds of situations, the datapump argument named attach comes to our help.
I see a need to write this, because we recently used it to increase the parallelism of a running datapump export job, and it performed well.
This attach feature is also useful when we start a job at one location and need to check on it at a later time from a different location.
Anyways, let's take a look at what we did for increasing the parallelism of our export job during an Exadata migration.
At first glance, we predicted the optimal setting for the datapump parallelism to be 8. However, when we saw that the machine (in this case an Exadata X6-2 1/8) still had plenty of I/O and CPU resources even though the 8 parallel expdp workers were running, we increased the parallelism to 24, and we did it using the attach argument, without terminating and re-executing our datapump command.
It was as simple as the following steps (a sketch of the whole session is shown right after them):
- We found our job using "select * from dba_datapump_jobs;"
- We attached to our running job using "expdp user/pass attach=job_name"
- We increased the parallelism to 24 using "Export> parallel=24"
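Putting the three steps together, a session similar to the following does the job. This is only a sketch; the credentials ("system/password") and the job name ("SYS_EXPORT_FULL_01") are placeholders for illustration, so replace them with the owner and name reported by dba_datapump_jobs.

-- 1) Find the running job and its current degree of parallelism (as a privileged user):
SQL> select owner_name, job_name, state, degree from dba_datapump_jobs;

-- 2) Attach to the job from the OS prompt:
$ expdp system/password attach=SYS_EXPORT_FULL_01

-- 3) In the interactive prompt, raise the worker count and check the job:
Export> parallel=24
Export> status

Note that if the job was started by a different user, we attach with a privileged account and use attach=owner_name.job_name.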
It is pretty handy, isn't it? Actually, we can do a lot more using this attach argument. Here is the list of the commands that we can run when we attach to a running datapump job, along with their definitions:
- ADD_FILE: Add additional dump files.
- CONTINUE_CLIENT: Exit interactive mode and enter logging mode.
- EXIT_CLIENT: Stop the export client session, but leave the job running.
- FILESIZE: Redefine the default size to be used for any subsequent dump files.
- HELP: Display a summary of available commands.
- KILL_JOB: Detach all currently attached client sessions and terminate the current job.
- PARALLEL: Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 11g.
- START_JOB: Restart a stopped job to which you are attached.
- STATUS: Display detailed status for the current job and/or set status interval.
- STOP_JOB: Stop the current job for later restart.
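For example, the EXIT_CLIENT, STATUS, STOP_JOB and START_JOB commands are what make the "start the job at one location, check on it later from another" scenario mentioned above work. The sketch below assumes the same placeholder credentials and job name as before:

-- leave the job running and return to the OS prompt:
Export> exit_client

-- later, from any client that can reach the database, attach again and check or stop the job:
$ expdp system/password attach=SYS_EXPORT_FULL_01
Export> status
Export> stop_job=immediate

-- the client exits after STOP_JOB; to resume the job, attach once more and restart it:
$ expdp system/password attach=SYS_EXPORT_FULL_01
Export> start_job
Export> continue_client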