We have done several EBS migrations, and most of the time we do these migrations using the Data Pump method. The reason we choose this method has always been the need for an upgrade. That is, of course, a standby switchover/failover-based migration is faster and smoother than a Data Pump-based one, but once you need to upgrade the database tier of the source instance, things become complicated and you will most probably face more downtime in total.
As for the new-generation Exadata X6-2, the migration of EBS is quite similar. We create our database on the target Exadata, patch EBS and EXA (if necessary), and then use Data Pump to export/import the EBS database tier to Exadata. After the migration of the database, we do a couple of configurations using post clone, and sometimes autoconfig, and that's it.
This general procedure still applies to Exadata with its new generation hardware and software.
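To make the Data Pump step above concrete, here is a rough sketch of the export/import round trip. The connect strings, directory object, and file names are hypothetical; the real EBS export/import follows the EBS-specific parameter files from the Oracle Support interoperability notes, not a plain full export.

```shell
# On the source: full export of the EBS database (hypothetical names).
expdp system@SOURCEDB full=y directory=MIG_DUMP_DIR \
  dumpfile=ebs_full_%U.dmp parallel=4 logfile=ebs_exp.log

# ...make the dump files visible to the target (e.g. over NFS), then on Exadata:
impdp system@EXADB full=y directory=MIG_DUMP_DIR \
  dumpfile=ebs_full_%U.dmp parallel=4 logfile=ebs_imp.log
```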
In this post, I will give a quick overview of Exadata X6 (the eighth-rack version of it) and then give some instructions for making an Exadata environment ready to be used as an EBS database tier.
I will give the explanation as a Question and Answer list and try to keep it simple.
What do we have in Exadata X6-2?
Exadata X6-2 Eighth (1/8) Rack:
--------------------------------------------------------------------------------------------------------
-2 database servers:
Each server has:
*2 x 22-core processors, each with 11 cores enabled (Xeon E5-2699 v4 processors)
*256 GB (8 x 32 GB) RAM by default, expandable to 512 GB (16 x 32 GB) or 768 GB (24 x 32 GB) with the memory expansion kit
*4 x 600 GB 10K RPM SAS disks, hot swappable, expandable to 8
*Disk controller HBA with 1 GB cache (no more batteries)
*2 x InfiniBand 4X QDR (40 Gb/s) ports (PCIe 3.0), both ports active
*4 x 1 GbE/10GbE Base-T Ethernet ports
*2 x 10 GbE Ethernet SFP+ ports (1 dual-port 10GbE PCIe 2.0 network card based on the Intel 82599 10 GbE controller technology)
*1 Ethernet port for Integrated Lights Out Manager (ILOM) for remote management
*Oracle Linux 6 Update 7 with Unbreakable Enterprise Kernel 2 or Oracle VM Server 3.2.9
-3 Exadata Storage Server X6-2 Servers:
Each server has:
*2 x 10-core Xeon E5-2630 v4 processors, each with 5 cores enabled
*128 GB memory
*2 PCI flash cards (3.2 TB each) plus 6 x 8 TB 7,200 RPM disks (High Capacity), OR 8 PCI flash drives (4 enabled, 3.2 TB each) (Extreme Flash)
-2 Sun Datacenter InfiniBand Switches
-2 redundant PDUs (single phase or three phase, high voltage or low voltage)
-1 48-port Cisco Catalyst 4948E-F, model number WS-C4948E-F-S Ethernet switch
--
Raw PCI flash capacity: 38.4 TB for EF, or 19.2 TB for HC --> EF (Extreme Flash) calculation: 3.2 TB x 4 drives x 3 cells = 38.4 TB; HC flash calculation: 3.2 TB x 2 cards x 3 cells = 19.2 TB.
Raw hard disk capacity: 144 TB for high-capacity disks --> HC calculation: 6 disks x 8 TB x 3 cells = 144 TB.
Why is it called Extreme Flash (I mean Exadata Extreme Flash)?
If we look at the spare flash drives of an Exadata, we see "SSD" written on them.
This is because SSD drives are actually flash-based.
There is no real difference between SSD and flash here. So although it is called Extreme Flash, Exadata EF uses SSD disks, which actually use flash in the background.
How do we (actually, the Oracle field engineer) deploy Exadata for the first time, after all the cabling and other hardware-related work is done?
The deployment of Exadata X6 is done using a script called install.sh.
With the latest version, the following software components are deployed by the installer:
Database Software Version: 12.1.0.2
Grid Software Version: 12.1.0.2
OS version: OEL 6.7
Kernel: 2.6.39-400.277 UEK
If we want to use NFS exports from Exadata to store our EBS database export dump files, what do we do?
NFS configuration from Exadata to the source system can be done by following:
Exadata: How to Create a NFS Mount on a Database Node (Doc ID 1900335.1)
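The MOS document above has the authoritative steps; as a rough sketch (the directory path, hostnames, and export options here are hypothetical examples, not the document's exact values):

```shell
# On the Exadata database node, as root: export a local directory over NFS.
mkdir -p /u01/app/oracle/dpdump
echo "/u01/app/oracle/dpdump source-host(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra

# On the source EBS host, as root: mount the export.
mkdir -p /mnt/exadata_dpdump
mount -t nfs exadb01:/u01/app/oracle/dpdump /mnt/exadata_dpdump
```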
Do we use the Oracle Home that is deployed with Exadata to provision our EBS database?
Actually, no. We leave that home as it is, as a reference for ourselves.
We create/clone a new home on Exadata and use it for our EBS database tier.
How do we clone our database home on Exadata? (As it is RAC, and as it is an appliance, this may be a question.)
A new Oracle Home is created using the cloning technique; for this, the following actions are taken.
On each node:
-> Create a new Oracle home by copying the existing one; use cp -rp to preserve the permissions.
-> Register the new Oracle home in the inventory.
for example:
cd $NEW_ORACLE_HOME/clone/bin
perl ./clone.pl ORACLE_BASE=$ORACLE_BASE \
ORACLE_HOME=<NEW_ORACLE_HOME_PATH> \
ORACLE_HOME_NAME=<NEW_ORACLE_HOME_NAME> \
'-O"CLUSTER_NODES={exa01,exa02}"' \
'-O"LOCAL_NODE=exa01"' --> we change LOCAL_NODE to the second node when we are executing it on the second node.
Note that this script should be run on all the nodes (modifying LOCAL_NODE appropriately) one by one after copying the Oracle homes, and lastly root.sh should be executed after these runs, as shown in the following example.
Here is a demo run:
[oracle@exadb01 bin]$ perl clone.pl ORACLE_BASE=/u01/app/oracle ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_prod ORACLE_HOME_NAME=OraDB12HomeProd '-O"CLUSTER_NODES={exadb01,exadb02}"' '-O"LOCAL_NODE=exadb01"'
./runInstaller -clone -waitForCompletion "ORACLE_BASE=/u01/app/oracle" "ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_prod" "ORACLE_HOME_NAME=OraDB12HomeProd" "CLUSTER_NODES={exadb01,exadb02}" "LOCAL_NODE=exadb01" -silent -paramFile /u01/app/oracle/product/12.1.0.2/dbhome_prod/clone/clone_oraparam.ini
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB. Actual 16660 MB Passed
Checking swap space: must be greater than 500 MB. Actual 23123 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-09-07_11-12-23AM. Please wait ...You can find the log of this install session at:
/u01/app/oraInventory/logs/cloneActions2016-09-07_11-12-23AM.log
.................................................. 5% Done.
.................................................. 10% Done.
.................................................. 15% Done.
.................................................. 20% Done.
.................................................. 25% Done.
.................................................. 30% Done.
.................................................. 35% Done.
.................................................. 40% Done.
.................................................. 45% Done.
.................................................. 50% Done.
.................................................. 55% Done.
.................................................. 60% Done.
.................................................. 65% Done.
.................................................. 70% Done.
.................................................. 75% Done.
.................................................. 80% Done.
.................................................. 85% Done.
..........
Copy files in progress.
Copy files successful.
Link binaries in progress.
Link binaries successful.
Setup files in progress.
Setup files successful.
Setup Inventory in progress.
Setup Inventory successful.
Finish Setup successful.
The cloning of OraDB12HomeProd was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2016-09-07_11-12-23AM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
.................................................. 95% Done.
As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.1.0.2/dbhome_prod/root.sh
Execute /u01/app/oracle/product/12.1.0.2/dbhome_prod/root.sh on the following nodes:
[exadb01]
.................................................. 100% Done.
[root@exadb01 ~]# sh /u01/app/oracle/product/12.1.0.2/dbhome_prod/root.sh
Check /u01/app/oracle/product/12.1.0.2/dbhome_prod/install/root_erman_2016-09-07_11-13-30.log for the output of root script
What is next?
The next thing is the migration itself :) by following the Oracle Support documents (the export/import process for EBS, the EBS - 12c RDBMS interoperability notes, and the EBS-on-Exadata whitepapers) and, of course, by reading my blog posts about real-life examples of EBS - Exadata migrations :)
So that's it for now. I will write a separate blog post about migrating EBS to Exadata X6 (although it will not be so different from migrating to earlier releases of Exadata ;)