Thursday, January 12, 2017

ODA & RDBMS -- A look at the prerequisites for the "oakcli upgrade database" command and some tips for RAC Standby Databases

As you may already know, oakcli has a database upgrade feature as well.
That is, the 'oakcli upgrade database' command can be used for doing one-command database upgrades on ODA.

Command Syntax:
oakcli upgrade database [-db db_names | -from source_home] -to destination_home [-h]

However, this command has some prerequisites.
That is, we need to use the correct source and target homes. Also, our database should be registered in the CRS and in /etc/oratab.
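A quick way to check these prerequisites before attempting the upgrade (the database name and home path below are the ones used in the demo later in this post; adapt them to your own environment):

```shell
# Is the database known to the clusterware? (should print the srvctl configuration)
srvctl config database -d PROD

# Is there a matching /etc/oratab entry? Something like:
#   PROD:/u01/app/oracle/product/11.2.0.3/dbhome_1:N
grep PROD /etc/oratab
```

If `srvctl config database` reports that the database is not configured, it first needs to be added to the cluster registry, as shown in the demo below.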

In this blog post, I will concentrate on the CRS registration part.

Well... In order to be able to use the oakcli upgrade database command, we need to have our database registered with the CRS on ODA.

That is, we should be able to see our database when we issue the oakcli show databases command... And in order for our database to appear in that output, it needs to be registered in the cluster registry (CRS/srvctl configuration).

In other words, our database and its instance(s) should be added to the cluster registry.
Once our database is added, we can use the oakcli upgrade database command to upgrade it.
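Once the registration is in place, the upgrade itself is a one-liner. As a sketch (the target home name OraDb12102_home1 below is hypothetical; list your own homes first to get the exact name):

```shell
# List the database homes deployed on the ODA to get the exact home names:
oakcli show dbhomes

# Then upgrade the database PROD to the chosen destination home
# (hypothetical home name -- use one from the output above):
oakcli upgrade database -db PROD -to OraDb12102_home1
```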

Here is a demo of adding a standby database to the cluster registry on ODA.

In this example, I am adding a standby database residing on an ODA X5 to the cluster registry, and then I use the oakcli show databases command to see it.
(Note that, in this example, I add only a single standby instance...)

At first, the standby database is not registered at all;

[root@marsdb1 bin]# oakcli show version -detail
[root@marsdb1 bin]# oakcli show databases -detail
Name Type Storage HomeName HomeLocation Version HostName
----- ------ -------- -------------- ----------------- ----------- --------------


Then I register the standby database as a physical standby database into the cluster registry as follows;

[oracle@marsdb1 dbs]$ srvctl add database -d PROD -o /u01/app/oracle/product/11.2.0.3/dbhome_1 -p '+DATA/spfilePROD' -r PHYSICAL_STANDBY -s MOUNT

[oracle@marsdb1 dbs]$ srvctl add instance -d PROD -i PROD2 -n marsdb1
[oracle@marsdb1 ~]$ srvctl config database -d PROD
Database unique name: PROD
Database name:
Oracle home: /u01/app/oracle/product/11.2.0.3/dbhome_1
Oracle user: oracle
Spfile: /u01/app/oracle/product/11.2.0.3/dbs/initPROD2.ora
Domain:
Start options: mount
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: PROD
Database instances: PROD2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed


Now, I execute the "oakcli show databases" command again, and this time it shows my standby database, as seen below;

[root@marsdb1 bin]# oakcli show databases -detail

Name Type Storage HomeName HomeLocation Version HostName
----- ------ -------- -------------- ----------------- ----------- --------------
PROD RAC OraDb11203_home1 /u01/app/oracle/product/11.2.0.3/dbhome_1 11.2.0.3.15(20760997,17592127)

At this point, my standby database can be upgraded using the oakcli upgrade database command (assuming all the other prerequisites are in place).

I want to remind you that I used a single-instance standby database in this example.
On the other hand, ODA lets us have a multi-instance RAC standby database as well.

Remember the following if you are implementing a RAC standby database:

1) All the standby instances can be started in mount mode (physical standby mount mode), but only one of them can run managed recovery.

2) Standby redo log files should be added to the standby database, for each thread (multi-instance).

3) The primary instances (primary database environment) must be configured to send redo data to the standby database (to all of the standby instances, or to one standby instance that can receive the redo data -- RFS). So if we want to be able to send redo data to any of the standby instances, there should be an address list covering all of the standby nodes/instances in the primary environment's tnsnames.ora file.
Note that, even if we configure address lists in the tns files of our primary nodes, RFS will start on only one of the nodes (the first reachable node in the address list).
In addition, MRP is started manually by us on the instance of our choice.
That is, MRP will run on only one of the instances. (Currently MRP can't be enabled on multiple instances -- maybe in the future it will be... maybe in RDBMS 12.2..)

4) On a multi-instance (RAC) standby environment, a node failure may require human intervention (if not using DG Broker). That is, if we lose the node on which RFS is running, we need to defer and then re-enable the relevant log_archive_dest_state_n parameter on the primary; and if we lose the node on which MRP is running, we need to start managed recovery on a surviving node.

5) The archives (archive dest) in the standby environment should be reachable by all of the instances.
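To make items 2 to 4 above a bit more concrete, here is a hedged sketch. The database name PROD, the two threads, the second standby host name marsdb2, the redo log size, and the destination number 2 are all assumptions for illustration; adapt them to your own configuration.

```shell
# --- Item 2: add standby redo log groups for each thread (on the standby, as sysdba).
# The usual rule of thumb is one more SRL group per thread than online log groups.
sqlplus / as sysdba <<'EOF'
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 1G;
EOF

# --- Item 3: a tnsnames.ora entry on the primary nodes, with an address list
# covering all standby nodes (RFS will start only on the first reachable one):
#
#   PROD_STBY =
#     (DESCRIPTION =
#       (ADDRESS_LIST =
#         (ADDRESS = (PROTOCOL = TCP)(HOST = marsdb1)(PORT = 1521))
#         (ADDRESS = (PROTOCOL = TCP)(HOST = marsdb2)(PORT = 1521))
#       )
#       (CONNECT_DATA = (SERVICE_NAME = PROD))
#     )

# --- Item 4a: if the node running RFS is lost, bounce the dest on the primary
# so redo shipping reconnects through the address list:
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
EOF

# --- Item 4b: if the node running MRP is lost, start managed recovery
# on a surviving standby instance:
sqlplus / as sysdba <<'EOF'
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
EOF
```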

Oops, this is going beyond the scope. I have started to write about the Data Guard configuration itself... RAC-standby info at this level should be the topic of another blog post, right?
Besides, RAC-standby deserves a separate article.
Well... I am putting a dot to this blog post and starting to write the new one (the RAC-standby related one) then :)
