Wednesday, March 30, 2016

Exadata-- Applying bundle patches for PSU upgrades

Recently, we had to upgrade an Exadata X3 environment to meet the requirements of an ODA X5, which was positioned as the disaster recovery environment. The ODA X5 required at least an 11.2.0.3.15 (PSU 15) RDBMS because of its ACFS mount points. So, we had to upgrade the Oracle Home of the source environment, this Exadata, from 11.2.0.3.1 to 11.2.0.3.15. Basically, we had to apply PSU 15 to the Exadata's RDBMS home, and possibly the Grid Home as well.
If it were not an Exadata, the patching route would be via CPUs/PSUs.
On Exadata, however, what we actually did was apply the bundle patch.
Bundle patches are cumulative: each bundle patch contains a recently released Patch Set Update (PSU), which in turn contains a recently released Critical Patch Update (CPU).

So, in order to bring our Grid and DB homes up to 11.2.0.3.15, we applied QUARTERLY DATABASE PATCH FOR EXADATA (JUL 2015 - 11.2.0.3.28) - Patch 21166803, as PSU 15 was included there.
Included Content - The following patch bundles were included in 11.2.0.3 BP28 for Exadata:

Patch 20760997 - DATABASE PATCH SET UPDATE 11.2.0.3.15 (INCLUDES CPUJUL2015)
Patch 18906063 - CRS PATCH FOR EXADATA (JUL 2014 - 11.2.0.3.24)
Patch 17592127 - GRID INFRASTRUCTURE PATCH SET UPDATE 11.2.0.3.9 (GI COMPONENTS)
Patch 17380185 - Database 11.2.0.2 Bundle Patch 22 for Exadata
Patch 16824987 - Database 11.2.0.2 Bundle Patch 21 for Exadata
Patch 20621256 - QUARTERLY DATABASE PATCH FOR EXADATA (APR 2015 - 11.2.0.3.27)


Confirming PSU 15, after applying the bundle patch:

[oracle@osrvdb01 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i -E 'DATABASE PSU|DATABASE PATCH SET UPDATE'
13343438   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.1
13696216   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.2 (INCLUDES 
13923374   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.3 (INCLUDES 
14275605   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.4 (INCLUDES SPU
14727310   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.5 (INCLUDES CPU
16056266   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.6 (INCLUDES CPU
16619892   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.7 (INCLUDES CPU
16902043   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.8 (INCLUDES CPU
17540582   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.9 (INCLUDES CPU
18031683   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.10 (INCLUDES CP
18522512   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.11 (INCLUDES CP
19121548   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.12 (INCLUDES CP
19769496   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.13 (INCLUDES CP
20299017   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.14 (INCLUDES CP
"20760997   21025813  Sun Mar 27 12:10:03 EEST 2016  DATABASE PATCH SET UPDATE 11.2.0.3.15 (INCLUDES CP" --> Targeted patch level for DB.

So, what do we recommend here? If you are using an Exadata, follow the release-specific Exadata patch bundle document, find the bundle patch that includes your desired target patch level, and apply that bundle patch to raise the PSU or CPU level.
Of course, this does not mean that applying bundles is all there is to Exadata patching; patches in addition to the bundle patches may be recommended or required (according to the Oracle Support note: Exadata Patching Overview and Patch Testing Guidelines (Doc ID 1262380.1)).
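To save some scrolling through the opatch listing, the target patch level can also be extracted with a small helper. This is just a minimal sketch; the function name and the sample input are ours, and the output format it parses is the `opatch lsinventory -bugs_fixed` format shown in the listing above.

```shell
#!/bin/sh
# Extract the highest "DATABASE PATCH SET UPDATE 11.2.0.3.x" level from
# opatch lsinventory -bugs_fixed output supplied on stdin.
# (The helper name is ours; the input format is the opatch listing above.)
latest_psu() {
  grep -E 'DATABASE PATCH SET UPDATE 11\.2\.0\.3\.' |
    sed -E 's/.*DATABASE PATCH SET UPDATE 11\.2\.0\.3\.([0-9]+).*/\1/' |
    sort -n | tail -1
}

# Typical usage on a real system:
# $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | latest_psu
```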

Here are some useful documents for 11gR2,

Bug Fix List: the 11.2.0.4 Patch Bundles for Oracle Exadata Database Machine (Doc ID 1601749.1)
For the list of fixes included in the 11.2.0.3 Bundle Patches for Exadata, please refer to Document 1393410.1.
For the list of fixes included in the 11.2.0.2 Bundle Patches for Exadata, please refer to Document 1314319.1.
For the list of fixes included in the 11.2.0.1 Bundle Patches for Exadata, please refer to Document 1316026.1.

Tuesday, March 29, 2016

EBS 12.2 -- FRM-40734 PLSQL Internal Error, ORA-600, RDBMS Bug 17892268

Here is a quick tip for you;

In EBS 12.2 (we definitely saw it in EBS 12.2.4 running on an 11.2.0.4 database), you may encounter an FRM-40734 PLSQL Internal Error in the QPXPRMLS.fmx form. In our case it was triggered by customizations, but it is still a database issue and should be resolved. Although we hit it in EBS, it is not EBS-specific; it could be encountered anywhere else, in any form or OAF page designed to perform a similar database activity.


After analyzing the log files (alert log and trace), you will end up with something like the following:

DDE: Problem Key 'ORA 600 [koklread1-callback with conv flg]' was flood controlled (0x4) (incident: 455690)
ORA-00600: internal error code, arguments: [koklread1-callback with conv flg], [], [], [], [], [], [], [], [], [], [], []
DDE: Problem Key 'ORA 600 [koklread1-callback with conv flg]' was flood controlled (0x4) (incident: 455691)
ORA-00600: internal error code, arguments: [koklread1-callback with conv flg], [], [], [], [], [], [], [], [], [], [], []

This will direct you to - > "Bug 17892268 - ORA-600 [koklread1-callback with conv flg] using 10.1 PLSQL client (Doc ID 17892268.8)"

Then, you will apply the patch for the solution -> Patch 17892268: ORA-600 [KOKLREAD1-CALLBACK WITH CONV FLG] RUNNING SOME PL/SQL CODE
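If you want a quick way to see how often this signature is hitting your alert log before and after the patch, a grep wrapped in a tiny function is enough. The function name and the example path are ours for illustration; the log line format is the one shown above.

```shell
#!/bin/sh
# Count the flood-controlled incidents for this specific ORA-600 signature
# in an alert log file passed as $1.
# (Function name and example path are ours; the matched line format is the
# "DDE: Problem Key ..." format shown above.)
count_koklread1_incidents() {
  grep -c "ORA 600 \[koklread1-callback with conv flg\]" "$1"
}

# Typical usage (the diag path is hypothetical):
# count_koklread1_incidents /u01/app/oracle/diag/rdbms/PROD/PROD/trace/alert_PROD.log
```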

Friday, March 25, 2016

RDBMS -- Dataguard, performing a "Switchover" on a standby environment consisting of 6 nodes

We have seen how to create a standby environment consisting of 6 nodes in the previous post: http://ermanarslan.blogspot.com.tr/2016/03/rdbms-dataguard-physical-standby.html

In this post, we will see how to perform a switchover on that physical standby environment.
I will give you the switchover instructions and the initialization parameters for this operation.

So let's recall our current (source) and target (after switchover) standby topologies;

The current topology is as follows;

DB1(primary)--->DB2--->DB3--->DB4
 |                      |--->DB5
 |--->DB6


The target topology in case of a switchover scenario which may be implemented for planned downtime is as follows;

DB3(primary)--->DB1--->DB2
 |               |--->DB6
 |--->DB4
 |--->DB5


SWITCH OVER CONSIDERATIONS: 


  • Ensure there is no delay in applying redo on the standby database which is planned to be the new primary environment.
  • Ensure that the initialization parameters defined on the primary database are appropriate for its possible future role as a standby database, in the context of the overall protection mode.
  • Ensure that standby redo log files are configured on the primary database.
  • For each temporary tablespace, verify that the temporary files associated with that tablespace on the primary database also exist on the standby database.
  • Before performing a switchover from an Oracle RAC primary database to a physical standby database, shut down all but one primary database instance. Any primary database instances shut down at this time can be started after the switchover completes.
  • Before performing a switchover or a failover to an Oracle RAC physical standby database, shut down all but one standby database instance. Any standby database instances shut down at this time can be restarted after the role transition completes.



SWITCH OVER PLAN: 


  • Ensure DB3's init.ora/spfile is configured to transport redo to DB1, DB4 and DB5 (in its standby role).
  • Ensure DB1's init.ora/spfile is configured to transport redo to DB2 and DB6 (in its standby role).
  • Change log_archive_dest_2 on DB1 from SYNC to ASYNC. 
  • DISABLE Dest3 on DB2 -> LOG_ARCHIVE_DEST_3= 'SERVICE=DB3  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)  DB_UNIQUE_NAME=DB3'
  • Ensure DB4 and DB5 are in sync with DB1, and stop the application services.
  • Defer redo transport from DB1 to DB2 and DB6.
  • Switch over DB1 with DB3 and make DB3 the new primary.
  • Check all the standby databases in the configuration and ensure they are in sync with the new primary (DB3).

[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Tue Mar 22 15:37:28 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO STANDBY

SQL> alter database commit to switchover to physical standby with session shutdown;

Database altered.

SQL>shutdown immediate
SQL>startup nomount
SQL>alter database mount standby database;

SQL> alter system set log_archive_dest_state_2=defer scope=memory;

System altered.

SQL> alter system set log_archive_dest_state_6=defer scope=memory;

System altered.

[oracle@demoorcl ~]$ . setDB3.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Tue Mar 22 15:48:02 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

SQL> alter database commit to switchover to primary;

Database altered.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  626327552 bytes
Fixed Size                  2230952 bytes
Variable Size             184550744 bytes
Database Buffers          432013312 bytes
Redo Buffers                7532544 bytes
Database mounted.
Database opened.
SQL> exit


[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
              1 DB3
demoorcl.dardanel.com
11.2.0.3.0        22-MAR-16 OPEN         NO           1 STARTED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO


SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

[oracle@demoorcl trace]$ cd
[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Tue Mar 22 16:11:28 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL STANDBY


DB3 INIT ORA:
----------------
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db3_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB3'

log_archive_dest_2='SERVICE=DB1 LGWR ASYNC  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)   DB_UNIQUE_NAME=DB1'

log_archive_dest_5='SERVICE=DB5 LGWR ASYNC  VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)   DB_UNIQUE_NAME=DB5'

log_archive_dest_4='SERVICE=DB4 LGWR ASYNC  VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)   DB_UNIQUE_NAME=DB4'

LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
fal_client=''
fal_server=''

DB4 INIT ORA:
----------------
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db4_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB4'

LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'

FAL_SERVER=DB3

FAL_CLIENT=DB4

DB5 INIT ORA:
----------------
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db5_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB5'

LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'

FAL_SERVER=DB3

FAL_CLIENT=DB5

DB1 INIT ORA:
----------------
log_archive_dest_6='SERVICE=DB6  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)  DB_UNIQUE_NAME=DB6'

log_archive_dest_2='SERVICE=DB2  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)  DB_UNIQUE_NAME=DB2'

log_archive_dest_1='LOCATION=/u01/ERMAN/db1_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB1'

log_archive_config='DG_CONFIG=(DB3,DB1,DB2,DB4,DB5,DB6)'

fal_server=DB3

fal_client=DB1

DB2 INIT ORA:
----------------
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
log_archive_dest_1='LOCATION=/u01/ERMAN/db2_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB2'
FAL_SERVER=DB1
FAL_CLIENT=DB2

DB6 INIT ORA:
----------------
log_archive_config='DG_CONFIG=(DB3,DB1,DB2,DB4,DB5,DB6)'
log_archive_dest_1 ='LOCATION=/u01/ERMAN/db6_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB6'
FAL_SERVER=DB1
FAL_CLIENT=DB6


CHECK IF LOG APPLY SERVICES WORKS:
----------------------------------------------------------------

[oracle@demoorcl ~]$ . setDB3.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Thu Mar 24 15:11:06 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> /

System altered.

SQL> !sleep 30;

SQL>
SELECT THREAD#, MAX(SEQUENCE#)
FROM V$LOG_HISTORY
WHERE RESETLOGS_CHANGE# =
(SELECT RESETLOGS_CHANGE#
FROM V$DATABASE_INCARNATION
WHERE STATUS = 'CURRENT')
GROUP BY THREAD#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             82

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Thu Mar 24 15:12:13 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                82               82

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@demoorcl ~]$ . setDB5.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Thu Mar 24 15:13:06 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                82               82

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@demoorcl ~]$ . setDB4.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Thu Mar 24 15:13:12 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                82               82

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@demoorcl ~]$ . setDB2.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Thu Mar 24 15:13:46 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                82               82

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@demoorcl ~]$ . setDB6.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Thu Mar 24 15:13:53 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                82               82
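The same "Last Seq Received" vs. "Last Seq Applied" check was run by hand on each standby above. The comparison itself can be wrapped in a small helper, and the per-database loop sketched below it shows how the manual sessions could be driven in one pass. The function name and the script name `check_apply.sql` are ours; the env file names are the ones used in this post.

```shell
#!/bin/sh
# Given the "Last Seq Received" and "Last Seq Applied" values returned by
# the query above, report whether a standby is in sync.
# (Helper name is ours; the two numbers come from the query output.)
check_sync() {
  # $1 = last sequence received, $2 = last sequence applied
  if [ "$1" -eq "$2" ]; then
    echo "IN SYNC (seq $2)"
  else
    echo "GAP: received $1, applied $2"
  fi
}

# Sketch of a loop over all five standbys (requires the env scripts and
# sqlplus, with the query above saved as check_apply.sql; not run here):
# for env in setDB1.env setDB2.env setDB4.env setDB5.env setDB6.env; do
#   . ./$env
#   sqlplus -s "/as sysdba" @check_apply.sql
# done
```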

RDBMS -- Dataguard, Physical Standby - Creating a standby environment consisting of 6 nodes, Cascaded and Cascading standby databases.

In this blog post, I will show you a physical standby configuration consisting of 6 nodes.
In order to create this demo environment, an Enterprise Edition database (the primary) was created using dbca (11gR2), and 5 standby databases were created by cloning this primary database.

Nodes and Roles:

1 Primary (DB1)
1 Cascading Physical Standby (DB2)
1 Cascading and Cascaded Physical Standby (DB3)
2 Cascaded Physical Standby  (DB4, DB5)
1 Physical Standby (DB6)

All the standbys except DB2 operate using the LGWR ASYNC method; only the transport between DB1 and DB2 is in LGWR SYNC mode.

The topology is as follows;

DB1(primary)--->DB2--->DB3--->DB4
 |                      |--->DB5
 |--->DB6


The target topology in case of a switchover scenario which may be implemented for planned downtime is as follows;

DB3(primary)--->DB1--->DB2
 |               |--->DB6
 |--->DB4
 |--->DB5

I am sharing the target scenario because this 6-node standby environment will be configured according to that switchover scenario; in other words, to require minimum effort in case of a planned switchover.

Here are the init.ora parameters used for building this Data Guard environment. This is actually the most important part, as creating the standby databases by cloning the primary, and configuring the tnsnames.ora files and the listeners, are already well-known procedures.
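Speaking of the tnsnames.ora files: for completeness, one entry could look like the sketch below. Each database needs such an alias matching the SERVICE names used in the LOG_ARCHIVE_DEST_n parameters. The host name is the demo machine seen in this post; the port and the exact layout are assumptions, not copied from the actual environment.

```
DB3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = demoorcl.dardanel.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DB3)
    )
  )
```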

"""""""""""""""""""""""PRIMARY --DB1""""""""""""""""""""""""""""

*.compatible='11.2.0.0.0'

#The Compatible parameter is used to control the formats of oracle data blocks and redo streams. It is basically controlling what is written to disk.
#COMPATIBLE initialization parameter should be set to the same value on both the primary and standby databases.

*.db_name='DB1'
#DATABASE NAME, THIS IS SAME(DB1) ON ALL THE STANDBYs AS WELL.

*.db_unique_name='DB1'
#DATABASE UNIQUE NAME, THIS CHANGES ACCORDING TO THE SID OF THE STANDBY DATABASES

*.log_archive_dest_1='LOCATION=/u01/ERMAN/db1_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB1'
#LOCAL ARCHIVAL DEST, it is used for specifying the local archive dest.
#"VALID_FOR" is an optional argument
#ALL_LOGFILES— This destination is valid when archiving either online redo log files or standby redo log files.
#ALL_ROLES—This destination is valid when the database is running in either the primary or the standby role.

*.log_archive_dest_2='SERVICE=DB2 LGWR SYNC  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)   DB_UNIQUE_NAME=DB2'

#IT S THE REMOTE ARCHIVE DEST, IT IS REACHED VIA TNS entry CALLED DB2, so it is for archiving to the standby database named DB2
#"VALID_FOR" is an optional argument
#ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.
#PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.


*.log_archive_dest_6='SERVICE=DB6 LGWR ASYNC  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)   DB_UNIQUE_NAME=DB6'

#IT S THE REMOTE ARCHIVE DEST, IT IS REACHED VIA TNS entry CALLED DB6, so it is for archiving to the standby database named DB6
#"VALID_FOR" is an optional argument
#ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.
#PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.

*.log_archive_dest_state_1='ENABLE'

#DEFAULT is ENABLE,this parameter specifies that a valid log archive destination can be used for a subsequent archiving operation (automatic or manual)
#Other values are;
#defer: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
#alternate: Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.

*.log_archive_dest_state_2='ENABLE'

#DEFAULT is ENABLE,this parameter specifies that a valid log archive destination can be used for a subsequent archiving operation (automatic or manual)
#Other values are;
#defer: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
#alternate:Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.


*.log_archive_dest_state_6='ENABLE'

#DEFAULT is ENABLE,this parameter specifies that a valid log archive destination can be used for a subsequent archiving operation (automatic or manual)
#Other values are;
#defer: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
#alternate:Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.

*.remote_login_passwordfile='EXCLUSIVE'

#This parameter must be set in order to make it possible to connect to the database remotely using SYS user.
#It should be set the same password for SYS on both the primary and standby databases. The recommended setting is either EXCLUSIVE or SHARED
#EXCLUSIVE means: The password file can be used by only one database. The password file can contain SYS as well as non-SYS users
#SHARED means:One or more databases can use the password file. The password file can contain SYS as well as non-SYS users.
#none(not setting at all) means: Oracle ignores any password file. Therefore, privileged users must be authenticated by the operating system.
#Note that, a password file must be present for this to be active, else you will end up with the OS authentication.

*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
#This parameter is used for enabling or disabling sending of redo logs to remote destinations and the receipt of remote redo logs.
#DG_CONFIG is used to specify  a list of unique database names (DB_UNIQUE_NAME) for all of the databases in the Data Guard configuration.
#This parameter is basically saying: I allow connections between the databases that are on my list.

*.LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
#This parameter specifies archive log naming format.
#%s log sequence number
#%S log sequence number, zero filled
#%t thread number
#%T thread number, zero filled
#%a activation ID
#%d database ID
#%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
#So , the archives are created in the full path of LOG_ARCHIVE_DEST/LOG_ARCHIVE_FORMAT
#Example:/u01/ERMAN/db1_archive/1_49_906569100.arc
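To make the format string concrete, here is a tiny sketch (the helper name is ours) that builds an archived log file name the way %t_%s_%r.arc expands:

```shell
#!/bin/sh
# Build an archived log file name the way LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
# expands: %t = thread#, %s = sequence#, %r = resetlogs id.
# (The function name is ours; the format string is the parameter above.)
archive_name() {
  # $1 = thread#   $2 = sequence#   $3 = resetlogs id
  echo "${1}_${2}_${3}.arc"
}

# e.g. sequence 49 of thread 1, resetlogs id 906569100:
# archive_name 1 49 906569100   ->  1_49_906569100.arc
```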

*.LOG_ARCHIVE_MAX_PROCESSES=30
#specifies  the number of archiver background processes
#min value 1, max value: 30, default value :2

####THE PARAMETERS BELOW ARE USED WHEN THE PRIMARY DATABASE BECOMES A STANDBY, SO WE SET THEM IN PREPARATION FOR A SWITCHOVER. SETTING THESE ON THE PRIMARY IS RECOMMENDED, BUT NOT ACTUALLY REQUIRED ###

*.FAL_SERVER=DB3
#The FAL server is the database to fetch archivelogs from, so this parameter is used to determine the primary database after a switchover operation. So, when a switchover happens, DB1 will be the new standby and will fetch the archivelogs from the new primary, DB3 (in case DB3 can't send the archivelogs itself).

*.FAL_CLIENT=DB1
# This parameter is used to identify the standby database after a switchover operation. So, when a gap needs to be resolved, DB1 will be the new standby and DB3 will send the missing log files to it using this info.

*.DB_FILE_NAME_CONVERT='/u01/ERMAN/db2_data/DB2/','/u01/ERMAN/db1_data/DB1/'
##This parameter is used when a new datafile is created in the primary database, It basically converts the filename of a new datafile on the primary database to a filename on the standby database.

*.LOG_FILE_NAME_CONVERT='/u01/ERMAN/db2_data/DB2/','/u01/ERMAN/db1_data/DB1/'
# this parameter used when a new redolog file is created in the primary. It basically converts the filename of a new log file on the primary database to the filename of a log file on the standby database.

*.STANDBY_FILE_MANAGEMENT=AUTO
#this parameter makes Oracle to automatically create files on the standby database when a file is created on the primary and automatically drop files on the standby when dropped from primary.



"""""""""""""""""""""""CASCADING STANDBY --DB2""""""""""""""""""""


*.compatible='11.2.0.0.0'
#The Compatible parameter is used to control the formats of oracle data blocks and redo streams. It is basically controlling what is written to disk.
#COMPATIBLE initialization parameter should be set to the same value on both the primary and standby databases.

*.db_name='DB1'
#DATABASE NAME, THIS IS SAME(DB1) ON ALL THE STANDBYs AS WELL.

*.CONTROL_FILES=/tmp/DB2.ctl
#This is STANDBY CONTROLFILE.
#Standby Controlfiles can be created after taking database backups used for creating the standby.
#The control file must be created after the latest timestamp for the backup datafiles.
#It is the type of controlfile used in physical standby databases.

*.DB_UNIQUE_NAME='DB2'
#SEE , IT IS UNIQUE NAME DIFFERENT THAN Primary.

*.DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db2_data/DB2/'
#This parameter is used when a new datafile is created in the primary database, It basically converts the filename of a new datafile on the primary database to a filename on the standby database.

*.LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1','/u01/ERMAN/db2_data/DB2/'
# this parameter used when a new redolog file is created in the primary. It basically converts the filename of a new log file on the primary database to the filename of a log file on the standby database.

*.LOG_ARCHIVE_DEST_3= 'SERVICE=DB3  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)  DB_UNIQUE_NAME=DB3'
#This parameter makes Standby db named DB2 to cascade the redo data received from DB1 to a cascaded database named DB3. This parameter is only active when DB2 is in standby mode.

*.LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
#This parameter is used for enabling or disabling sending of redo logs to remote destinations and the receipt of remote redo logs.
#DG_CONFIG is used to specify  a list of unique database names (DB_UNIQUE_NAME) for all of the databases in the Data Guard configuration.
#This parameter is basically saying: I allow connections between the databases that are on my list.

*.FAL_SERVER=DB1
#FAL Server means the primary database, so this parameter is used to determine the primary database .

*.FAL_CLIENT=DB2
# This parameter is used to determine the standby database.

*.log_archive_format='%t_%s_%r.dbf'
#This parameter specifies archive log naming format.
#%s log sequence number
#%S log sequence number, zero filled
#%t thread number
#%T thread number, zero filled
#%a activation ID
#%d database ID
#%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
#So , the archives are created in the full path of LOG_ARCHIVE_DEST/LOG_ARCHIVE_FORMAT
#Example:/u01/ERMAN/db2_archive/1_49_906569100.arc


*.remote_login_passwordfile='EXCLUSIVE'
#This parameter must be set in order to make it possible to connect to the database remotely as SYS.
#The SYS password must be the same on both the primary and standby databases. The recommended setting is either EXCLUSIVE or SHARED.
#EXCLUSIVE means: the password file can be used by only one database. The password file can contain SYS as well as non-SYS users.
#SHARED means: one or more databases can use the password file. The password file can contain SYS as well as non-SYS users.
#NONE (not setting it at all) means: Oracle ignores any password file, so privileged users must be authenticated by the operating system.
#Note that a password file must be present for this to be active; otherwise you will end up with OS authentication.
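To see who is actually registered in the password file on each site, a quick check against the standard dictionary view can be used (a sketch):

```sql
-- lists the users registered in the password file, with their privileges;
-- no rows here means no password file is in use, so remote SYSDBA logins will fail
SELECT username, sysdba, sysoper FROM v$pwfile_users;
```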

""""""""""""""CASCADED and CASCADING STANDBY -- DB3""""""""""""""""

#Note: This database is a cascaded standby which also cascades the received redo to other standby databases. The cascading database is DB3, which transports the redo data to DB4 and DB5 (the cascaded databases).
#I will not share all the standby related parameters, only the ones that are important in this case.

*.CONTROL_FILES=/tmp/DB3.ctl
#Every standby database must have a unique standby controlfile, which should be created after taking the backup of primary.

*.DB_UNIQUE_NAME='DB3'
#unique database name as expected.

*.DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db3_data/DB3/'
#Already explained, the conversion should be based on the primary file locations.

*.LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1','/u01/ERMAN/db3_data/DB3/'
#Already explained, the conversion should be based on the primary file locations.

LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db3_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB3'
LOG_ARCHIVE_DEST_4= 'SERVICE=DB4  VALID_FOR=(STANDBY_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB4'  #for primary and standby roles
LOG_ARCHIVE_DEST_5= 'SERVICE=DB5  VALID_FOR=(STANDBY_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB5' #for primary and standby roles
LOG_ARCHIVE_DEST_2='SERVICE=DB1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)   DB_UNIQUE_NAME=DB1' #for primary role

#This parameter (LOG_ARCHIVE_DEST_2) is set but currently ignored. It is activated when DB3 becomes the primary (as it is set with PRIMARY_ROLE). So when DB3 becomes primary, it will start to transmit redo to DB1 without any modification.

LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB2
FAL_CLIENT=DB3

*.STANDBY_FILE_MANAGEMENT=AUTO
#This parameter makes Oracle automatically create files on the standby database when a file is created on the primary, and automatically drop files on the standby when they are dropped on the primary.
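As a sketch, the parameter is dynamic and can be set and verified with standard statements (on both the primary and the standbys, for role transitions):

```sql
-- dynamic parameter, no restart needed
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO SCOPE=BOTH;

-- verify
SHOW PARAMETER standby_file_management
```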

"""""""""""""""""""""""CASCADED STANDBY -- DB4""""""""""""""""""""

#Note: This database is a cascaded database.
*.db_name='DB1'
CONTROL_FILES=/tmp/DB4.ctl
DB_UNIQUE_NAME='DB4'
DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db4_data/DB4/'
LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1','/u01/ERMAN/db4_data/DB4/'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db4_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB4'
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB3
FAL_CLIENT=DB4

"""""""""""""""""""""""CASCADED STANDBY -- DB5""""""""""""""""""""

*.db_name='DB1'
CONTROL_FILES=/tmp/DB5.ctl
DB_UNIQUE_NAME='DB5'
DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db5_data/DB5/'
LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1','/u01/ERMAN/db5_data/DB5/'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db5_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB5'
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB3
FAL_CLIENT=DB5


"""""STANDBY DATABASE (NOT CASCADED OR CASCADING) --DB6""""""""

*.db_name='DB1'
CONTROL_FILES=/tmp/DB6.ctl
DB_UNIQUE_NAME='DB6'
DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db6_data/DB6/'
LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1','/u01/ERMAN/db6_data/DB6/'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db6_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES)  DB_UNIQUE_NAME=DB6'
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB1
FAL_CLIENT=DB6
*.STANDBY_FILE_MANAGEMENT=AUTO

Proof Of Concept:

In order to show that the Dataguard configuration works properly and as planned, we perform 3 tests.

1)We create a table named ERMANNEW on the primary and switch the logfiles. Then we check the standby environments and ensure that the latest redo data is transferred and applied.
2)We create a datafile named systemFileDemo2.dbf on the primary and ensure that it gets created in the standby environments as well. In this test, we also ensure that the STANDBY_FILE_MANAGEMENT and DB_FILE_NAME_CONVERT parameters are working properly.
3)We stop managed recovery of DB4, which is a cascaded standby, and create a datafile named systemFileDemo3.dbf on the primary. The purpose of this test is to show that the cascaded standby configuration works properly. In other words, in case of a redo apply problem on DB4, only DB4 is affected, because DB5 is not a cascaded standby of DB4 but of DB3.

[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL> create table ERMANNEW as select * from dba_objects;

Table created.

SQL> alter system switch logfile;

System altered.

SQL> archive log all;
ORA-00271: there are no logs that need archiving



SQL> SELECT THREAD#, MAX(SEQUENCE#)
FROM V$LOG_HISTORY
WHERE RESETLOGS_CHANGE# =
(SELECT RESETLOGS_CHANGE#
FROM V$DATABASE_INCARNATION
WHERE STATUS = 'CURRENT')
GROUP BY THREAD#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             44


[oracle@demoorcl ~]$ . setDB2.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 11:27:39 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                44               44


[oracle@demoorcl ~]$ . setDB3.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 11:29:08 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>  select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                44               44


[oracle@demoorcl ~]$ . setDB4.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 11:29:45 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>  select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                44               44


[oracle@demoorcl ~]$ . setDB5.env
[oracle@demoorcl ~]$ sqlplus "/as  sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 12:04:39 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>  select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                44               44


[oracle@demoorcl ~]$ . setDB6.env
[oracle@demoorcl ~]$ sqlplus "/as  sysdba"
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
      from v$archived_log
      where resetlogs_change#=(select resetlogs_change# from v$database)
      group by thread#) al,
     (select thread# thrd, max(sequence#) lhmax
      from v$log_history
      where first_time=(select max(first_time) from v$log_history)
      group by thread#) lh
where al.thrd = lh.thrd;

    Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
         1                44               44


[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as  sysdba"
SELECT DEST_ID "ID",
  STATUS "DB_status",
  DESTINATION "Archive_dest",
  ERROR "Error"
  FROM V$ARCHIVE_DEST
where status!='INACTIVE';

        ID DB_status Archive_dest                                                                                                                                                                   Error
---------- --------- ----------------------------
         1 VALID     /u01/ERMAN/db1_archive
         2 VALID     DB2
         6 VALID     DB6


SQL> ALTER TABLESPACE SYSTEM
   ADD DATAFILE '/u01/ERMAN/db1_data/DB1/systemFileDemo2.dbf' SIZE 19M;

Tablespace altered.

[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL> !ls -al /u01/ERMAN/db1_data/DB1/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:39 /u01/ERMAN/db1_data/DB1/systemFileDemo2.dbf

SQL> !ls -al /u01/ERMAN/db2_data/DB2/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:42 /u01/ERMAN/db2_data/DB2/systemFileDemo2.dbf

SQL> !ls -al /u01/ERMAN/db3_data/DB3/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:41 /u01/ERMAN/db3_data/DB3/systemFileDemo2.dbf

SQL> !ls -al /u01/ERMAN/db4_data/DB4/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:42 /u01/ERMAN/db4_data/DB4/systemFileDemo2.dbf

SQL> !ls -al /u01/ERMAN/db5_data/DB5/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:42 /u01/ERMAN/db5_data/DB5/systemFileDemo2.dbf

SQL> !ls -al /u01/ERMAN/db6_data/DB6/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:41 /u01/ERMAN/db6_data/DB6/systemFileDemo2.dbf


[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> ALTER TABLESPACE SYSTEM
   ADD DATAFILE '/u01/ERMAN/db1_data/DB1/systemFileDemo3.dbf' SIZE 19M;

Tablespace altered.


We stop the recovery on DB4 and look at what happens... Only DB4 is affected, because DB5 is not a cascaded standby of DB4 but of DB3.

[oracle@demoorcl ~]$ . setDB4.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> alter database recover managed standby database cancel;

Database altered.

SQL> exit


[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"

SQL> !ls -al /u01/ERMAN/db1_data/DB1/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:25 /u01/ERMAN/db1_data/DB1/systemFileDemo3.dbf

SQL> !ls -al /u01/ERMAN/db2_data/DB2/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:27 /u01/ERMAN/db2_data/DB2/systemFileDemo3.dbf

SQL> !ls -al /u01/ERMAN/db3_data/DB3/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:26 /u01/ERMAN/db3_data/DB3/systemFileDemo3.dbf

SQL> !ls -al /u01/ERMAN/db4_data/DB4/systemFileDemo3.dbf
ls: cannot access /u01/ERMAN/db4_data/DB4/systemFileDemo3.dbf: No such file or directory

SQL> !ls -al /u01/ERMAN/db5_data/DB5/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:27 /u01/ERMAN/db5_data/DB5/systemFileDemo3.dbf

SQL> !ls -al /u01/ERMAN/db6_data/DB6/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:26 /u01/ERMAN/db6_data/DB6/systemFileDemo3.dbf

Well,  let's continue with our next task, the switchover...

Monday, March 21, 2016

RDBMS-- Listener Logs, __jdbc__, parsing the listener log file

Oracle's listener log files include JDBC connection records as well.
However, for JDBC connections, the HOST parameter in the CONNECT_DATA shows __jdbc__ when the client connects to the database using the JDBC thin driver.
So, in order to determine the real host of these JDBC clients, we should use the info recorded in the PROTOCOL_INFO.

Example Listener log record:

16-JUN-2015 13:55:26 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=ERMAN))(SERVICE_NAME=CLONE)) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.10.32.234)(PORT=52172)) * establish * CLONE * 0

So, when we interpret it;

TIMESTAMP = 16-JUN-2015 13:55:26
*
CONNECT DATA= (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=ERMAN))(SERVICE_NAME=CLONE))
PROTOCOL INFO = "(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.32.234)(PORT=52172)) "

So, if we need to find the count of these JDBC connections grouped by their hosts (note that this kind of work may be needed in a migration project, for determining the dependencies, i.e. the clients or application servers using the database), we can use a Linux command like the one below;

Example (Linux bash) command using grep/sed/sort/uniq (parsing the listener.log file; the pattern may change according to the DBMS version)
--------------------------------------------------------------------------

[oracleerman@demoorcl trace]$ cat listener.log |grep "__jdbc__" | awk '{print $8}' | grep ADDRESS | sed 's/^.*\((HOST.*\)/\1/g'|sed 's/.PORT=.*//g'| sort | uniq -c

225791 (HOST=10.10.32.234)
22 (HOST=10.123.36.34)
42 (HOST=10.123.36.23)
7 (HOST=10.123.36.22)
45 (HOST=10.123.36.20)
30 (HOST=10.123.36.24)
12 (HOST=10.123.36.33)
7 (HOST=10.123.36.55)
29 (HOST=10.123.36.66)
21 (HOST=10.123.36.89)
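If only the client IP and a normalized count are needed, the same parsing can be done with a single sed over the PROTOCOL_INFO field. A sketch, demonstrated here against a fabricated two-line sample file (the real log line format may differ between listener versions):

```shell
# build a small fabricated sample of listener.log records (for illustration only)
cat > /tmp/sample_listener.log <<'EOF'
16-JUN-2015 13:55:26 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=ERMAN))(SERVICE_NAME=CLONE)) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.10.32.234)(PORT=52172)) * establish * CLONE * 0
16-JUN-2015 13:55:27 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=ERMAN))(SERVICE_NAME=CLONE)) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.10.32.234)(PORT=52173)) * establish * CLONE * 0
EOF

# keep only the JDBC records, pull the client IP out of PROTOCOL_INFO, count per host
grep '__jdbc__' /tmp/sample_listener.log \
  | sed 's/.*(ADDRESS=(PROTOCOL=[a-z]*)(HOST=\([0-9.]*\)).*/\1/' \
  | sort | uniq -c | sort -rn \
  | awk '{print $1, $2}'
```

For the sample above this prints one line, "2 10.10.32.234"; against a real listener.log it produces one line per client IP.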

Friday, March 18, 2016

RDBMS -- A High level look at Dataguard, also looking from migration perspective, explaining the terms..

This post will be a little different than the others, because I will give the general terms used in an Oracle Dataguard environment. The information will be delivered using a Q&A-based approach, and it is filtered to give you a high-level look by introducing the Dataguard concepts. So, you may consider this an introduction, as I will also make a demo of cascading standby databases and switchover operations in my next blog posts.

You may also read my other standby-related posts.

What does Dataguard require? 
  • Requires the source database to be in force logging mode. If the source database can't be in force logging mode (because of performance reasons), then when a nologging operation happens, the standby datafiles should be synchronized with the primary using backups: an incremental backup created from the primary database can be applied, or the affected standby datafiles can be replaced with a backup of the primary datafiles taken after the nologging operation. 
  • Enterprise Edition license. So if the database is a Standard Edition, it is impossible to use Dataguard. An alternative may be using a manual transport and recovery method... Note that Active Dataguard requires extra cost, while Dataguard is included in the Enterprise Edition license. 
  • License for the standby database. 
  • Primary database must be in archivelog mode. 
  • Primary and standby database hardware resources are recommended to be identical (for performance reasons). 
  • Heterogeneous configurations (for example primary: Windows, standby: Linux) are supported, but the compatibility matrix should be checked. For example: the primary can be Windows x86 and the standby can be Linux, but the database version should be 11g or higher; from Oracle 11g onward, Patch 13104881 is required. Check the document: Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1) for the details. 
  • The COMPATIBLE parameter must be the same on the primary and standby databases. 
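The force logging and archivelog prerequisites in the list above can be checked and enabled on the primary with standard statements; a minimal sketch:

```sql
-- check the current state
SELECT force_logging, log_mode FROM v$database;

-- enable force logging (an online operation)
ALTER DATABASE FORCE LOGGING;

-- enabling archivelog mode requires a clean shutdown and a mount
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```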
What are the use cases for Dataguard (excluding migrations)?
  • Disaster Recovery, which is the main job of Dataguard 
  • Creating a sync environment and refreshing clone, test and development databases (refresh by a one-time RMAN restore, then using Dataguard Snapshot Standby) 
  • Creating a sync environment and refreshing TEST or clone environments residing on legacy (non-Exadata) systems, which do not support HCC (Hybrid Columnar Compression), from Exadata environments 
  • Creating reporting databases from the primary and refreshing them. 
  • Creating standby databases and synching them, to be able to offload backups of the primary databases to the standby databases. 
  • Creating standby databases, synching them and using the Active Dataguard option to offload the read-only workload to the standby databases. 
Why do we use Dataguard for database migrations? 
  • It is supported by Oracle 
  • It minimizes planned downtime 
  • When there is no storage replication (actually, this is the reason why Dataguard-based migrations are rarely used) 
  • It has the ability to make heterogeneous migrations ( --the source can be Windows and the target can be Linux-- ) 
  • It gives the ability to easily fail back. 
  • In the physical standby based approach (which is the standby db type used mostly), the migration is done physically, so there is less in-db work, and thus less effort. 
  • It has the ability to open the database without breaking the synchronization (transport continues, apply waits -- Dataguard Snapshot Standby) 
  • It has the ability to cascade the redo shipping & apply services 
  • Automatic role transitions (actually not required in migrations) and centralized, simple management (Dataguard Broker) -- note that the Dataguard Broker is not used with cascaded standbys. 
What are the use cases for Dataguard based migrations?
  • Migrating Large Oracle Databases residing on commodity disks. 
  • Upgrading from an HP Oracle Database Machine running Oracle Database 11g Release 1, to a SUN Oracle Exadata Database Machine running Oracle Database 11g Release 2. 
  • Migrating a single instance Oracle Database to a new RAC environment. 
  • Migrating Oracle databases to Oracle Database Appliance Systems. 
  • Data Center moves (create standby and then switchover) 
  • OS upgrades 
  • Migrating RAC database from one hardware to another 
....

What are Standby Types?
  • Physical Standby: A physically identical copy of the primary database (on a block-for-block basis). It is kept synchronized by applying the redo data received from the primary database. (Attention: redo data, not archived logs) 
  • Logical Standby: Logically the same as the primary database. It is synchronized using the SQL Apply method. The redo generated at the primary database is converted into SQL statements, and those SQL transactions are then applied on the logical standby. 
When to use a physical standby? 
  • When simplicity and reliability are required. 
  • When there is a very high redo generation rate in the source environment. 
  • When there is a requirement to have the highest level of protection against corruption. 
  • When a standby database is required to be opened read-only while it is synchronizing with the primary (Active Data Guard). 
  • When there is a requirement to offload fast incremental backups to the standby (requires Active Dataguard). 
  • When a snapshot standby database (read-write) is required. 
  • When there is a need to perform rolling database upgrades using a transient logical standby database. 
When to use a logical standby? 
  • A need for using the standby database in reporting. Although the data maintained by the standby database cannot be modified, new tables, schemas, indexes and MVs (materialized views) can be created on logical standby databases. 
  • A need for a rolling database upgrade from Oracle 10g. Note that the physical standby - transient logical standby method is used in rolling upgrades from 11g. 
What are ARCH, LGWR ASYNC and LGWR SYNC Dataguard?
  • ARCH: After the online redo log is archived locally (after a log switch), the redo from the local archived redo log files is transferred to the standby. On the standby server, RFS writes the received redo data to an archived redo log file. Lastly, MRP (physical standby - redo apply) or LSP (logical standby - SQL apply) applies the redo to the standby database. 
  • LGWR ASYNC: Rather than waiting for a log switch and shipping the entire archived redo log at once, the LGWR process uses standby redo log files at the standby database site; the redo generated in the primary database is read and transmitted to the remote destinations. If there are multiple remote destinations, these transmissions are done in parallel. In the LGWR ASYNC method, LGWR works asynchronously and does not wait for the network I/O to complete. 
  • LGWR SYNC: Rather than waiting for a log switch and shipping the entire archived redo log at once, the LGWR process uses standby redo log files at the standby database site; the redo generated in the primary database is read and transmitted to the remote destinations. In the LGWR SYNC method, LGWR works synchronously and does all the network I/O in conjunction with writing redo data to the local online redo log files. On each local online redo log write, LGWR also waits for the network I/O to complete. Transactions are committed when the redo data is received at the standby destinations. LGWR triggers the LNS process to do this network I/O. On the standby site, the RFS process receives the redo data from the network and writes it to the standby redo log files. 
Dataguard Data Protection Modes: 
  • Maximum Protection Mode: LGWR SYNC AFFIRM -- The primary database shuts itself down (Dataguard shuts it down) if it cannot write its redo stream to the standby redo log of at least one standby database. This protection mode requires standby redo log files (sized the same as the primary's) to be available on the standby. Having 2 standby databases is recommended in this protection mode, so that if 1 standby database fails, production will continue to work. 
  • Maximum Availability Mode: LGWR SYNC AFFIRM -- The primary does not shut itself down in case of a remote write failure; it operates in Maximum Performance mode until the error is fixed, and then resumes operating in Maximum Availability mode. This protection mode requires standby redo log files (sized the same as the primary's) to be available on the standby. 
  • Maximum Performance Mode: Any failure on the standby does not stop the primary from running. LGWR ASYNC or ARCH. This protection mode does not require standby redo log files on the standby, but having standby redo log files is recommended. This protection mode has the minimal impact on primary performance; also note that when the network is fast, the data protection provided by this mode may reach almost the same level as Maximum Availability mode. 
Note that the AFFIRM keyword specified in the Max Availability and Max Protection modes tells the redo transport destination to "acknowledge received redo data after writing it to the standby redo log".

NOAFFIRM tells it to "acknowledge received redo data before writing it to the standby redo log".

So, for SYNC -> AFFIRM is the default, for ASYNC -> NOAFFIRM is the default.
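Switching between the protection modes above is a single statement on the primary, provided the matching LOG_ARCHIVE_DEST_n attributes (e.g. SYNC AFFIRM) are already in place; a sketch:

```sql
-- requires SYNC AFFIRM transport to at least one standby
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

-- verify the requested mode and the level actually in effect
SELECT protection_mode, protection_level FROM v$database;
```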

Real Time Apply and Delayed Apply: 
  • Real-time apply makes the apply services apply the received redo to the standby database without waiting for the standby redo log file to be filled and archived. On physical standby databases, it is enabled using: "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;" 
  • Delayed apply: The received redo data is placed in the standby redo log files of the standby database; once the standby redo log files are filled, they are archived, and the apply services on the standby apply the redo from those archived log files. It is enabled using: "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE" or "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30" (in minutes). 
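Whether real-time apply is actually in effect can be checked from the primary with a standard view; RECOVERY_MODE reports MANAGED REAL TIME APPLY for the standby destination when it is active (a sketch):

```sql
-- run on the primary; one row per physical standby destination
SELECT dest_id, recovery_mode
FROM   v$archive_dest_status
WHERE  type = 'PHYSICAL';
```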
What are Standby Options? 
  • Snapshot Standby Database: Using the snapshot standby database feature, a standby database can be opened read/write. The standby database continues to receive redo from the primary, but does not apply it until the standby database is converted back to a physical standby. All the updates that are done while the standby database is a snapshot standby are discarded automatically when it is converted back to a physical standby database. 
  • Cascaded Standby Database: A physical standby database can be configured to forward redo to a remote physical or logical standby database. 
Cascaded standby databases example: Primary Database > Physical Standby Database with a cascaded destination (also called the cascading standby database) > Physical Standby Database. A standby configuration can be cascaded up to 30 standby databases as of RDBMS version 11.2. For example: PROD -> standby -> cascaded standby -> cascaded standby -> ... and so on.

For the details:
https://docs.oracle.com/cd/E11882_01/server.112/e41134/log_transport.htm#SBYDB5122 (6.3 Cascaded Redo Transport Destinations, 6.3.1 Configuring a Cascaded Destination)
https://docs.oracle.com/cd/A97630_01/server.920/a96653/cascade_appx.htm
Note that, although the latter documentation is for 9.2, the concepts still apply to 11.2. 
  • Active Dataguard: Using Active Dataguard, the standby database can be opened read-only while it continues to apply the redo received from the primary. 
  • Transient Logical Standby: Used for converting an existing physical standby to a logical standby database, which can then be converted back to a physical standby. Read http://www.oracle.com/au/products/database/maa-wp-11g-transientlogicalrollingu-1-131927.pdf for the details and the restrictions. The transient logical standby is a recommended method for performing rolling database upgrades, and here is the list of actions performed in such operations; 
    • Create a guaranteed restore point on the primary. 
    • Install the upgraded ORACLE_HOME on the primary and standby nodes. 
    • Convert the physical standby to a logical standby. 
    • Perform the upgrade on the logical standby. 
    • Switch over (make the old logical standby -> primary, make the old primary -> logical standby). 
    • Flash back the logical standby (old primary). 
    • Mount the logical standby (old primary) under the new Oracle Home. 
    • Convert the logical standby (old primary) to a physical standby (this may take time, as the standby will be synchronized with the new primary). 
    • Switch the roles once again. 
    • Increase the COMPATIBLE settings. 
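The snapshot standby conversion mentioned earlier in this section boils down to a pair of statements on the standby (redo is still received while it is a snapshot standby, just not applied); a sketch:

```sql
-- on the standby: stop redo apply, then convert (database should be mounted)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;
-- ... do the read-write testing ...

-- convert back; all changes made in snapshot standby mode are discarded here
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```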
What are the use cases for DataGuard in EXADATA? 
  • Migrations "from" and "to" Exadata. 
  • Offloading read-only workload. 
  • Database rolling upgrades / Standby-First patching: Apply patches first to the physical standby, switch over to the patched database after validations, and switch back in case a fallback is necessary. Oracle patch sets and major release upgrades do not apply. Exadata Database Bundle Patches, Patch Set Updates (PSU), Critical Patch Updates (CPU) and interim ("one-off") patches apply. Oracle patches applied to the grid home, operating system patches and firmware, storage patches and network patches also apply. 
  • High Availability (HA) (Local Standby) 
  • Disaster Recovery (DR) (Remote Standby) 
To be continued...

EBS - Delivery Options Fax configuration on EBS R12, Delivery Manager

Here is some filtered information for you.
In order to be able to fax from EBS (Using Delivery Options>Fax), you should be able to "fax" from the EBS application servers.

So, there are 2 methods here,
1)Connect a fax modem to the application server, make the fax configuration on the application server OS, and then edit the printer driver of EBS to execute the faxing command.
2)Find a software which can communicate with a fax machine / an IPP printer which supports faxing, and execute that software in the EBS printer driver to send the fax instructions to the faxing device.
Note that, although I have not tried it yet, the software called HylaFAX may be used for this.

The documents you will need:

1) "Delivery Options" on the Submit Request form calls the  XML Publisher's Delivery Manager API.
So,
"Oracle XML Publisher Administration and Developer's Guide Release 12 Part No. B31412 01 https://docs.oracle.com/cd/B34956_01/current/acrobat/120xdoig.pdf"

2) Patch 13019389:R12.FND.B - BACKPORT OF 12.2 DELIVERY MANAGER AND BURSTING ENHANCEMENTS
FNDRSRUN.fmb 120.44.12010000.66

3) Oracle E-Business Suite System Administrator's Guide - Maintenance  Release 12.1  Part Number E12894-04 https://docs.oracle.com/cd/E18727_01/doc.121/e12894/T202991T202993.htm 
If you are on higher level, check a higher version of this document

4) A fax modem or a software which can send faxing instructions to the IPP printer which is capable of faxing.

ODA - Why ODA_BASE nodes are faster?

We have seen this in action... ODA_BASE performs much better in I/O, and it becomes more visible when you migrate a critical I/O-bound database from ODA_BASE to a guest VM residing on ODA and compare its performance against the performance benchmark recorded when that database was running in ODA_BASE.
So, ODA_BASE is much faster than any other virtual machine residing on an ODA virtualized environment, but what is the reason for that?

Well, the reason is, although ODA_BASE is actually an HVM, it has paravirtualized drivers to enhance its operations... Actually, it is not only that :) The paravirtualized drivers on ODA_BASE are used for eliminating the emulation for the network devices, but for the disk devices there is another important thing.
So, ODA_BASE is an HVM with paravirtualized drivers, that's for sure, but what makes ODA_BASE faster is that, in the ODA_BASE domain, Oracle uses the Intel VT-d extensions and PCI passthrough to make the ODA_BASE virtual server directly access the underlying disk controllers and drives.

Look at the BIOS configuration of the hardware nodes and you will see;

-<IO_Virtualization>
<!-- VT-d -->
<!-- Description: Enable/Disable Intel(R) Virtualization Technology for Directed I/O. -->
<!-- Possible Values: "Disabled", "Enabled" -->
<VT-d>Enabled</VT-d>

Look at the dmesg output of the ODA_BASE nodes; you will see:

Booting paravirtualized kernel on Xen HVM
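That boot line can be checked with a tiny filter; a minimal sketch that keys on the exact message shown above (pipe the real dmesg output into it on the node):

```shell
# Minimal sketch: report whether a guest booted as a paravirtualized
# HVM (PVHVM), based on the kernel boot message shown above.
# Usage on a real node:  dmesg | is_pvhvm
is_pvhvm() {
  if grep -q 'paravirtualized kernel on Xen HVM'; then
    echo "PVHVM"
  else
    echo "not PVHVM"
  fi
}
```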

Look at the ODA_BASE vm.cfg file residing in Dom0; you will see:

vncunused = 1

kernel = '/usr/lib/xen/boot/hvmloader'
vnc = 1
name = 'oakDom1'
memory = 49152
timer_mode = 0
device_model = '/usr/lib64/xen/bin/qemu-dm'
builder = 'hvm'
vnclisten = '0.0.0.0'
cpus = '32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47'
on_crash = 'coredump-restart'
on_reboot = 'restart'
vcpus = 16
pci = ['30:00.0@15', '40:00.0@16']   # <-- the PCI SCSI controllers
pae = 1
apic = 1
vif = ['type=netfront,bridge=priv1', 'type=netfront,bridge=net1', 'type=netfront,bridge=net2']
serial = 'pty'
disk = ['file:/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/System.img,xvda,w', 'file:/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/u01.img,xvdb,w', 'file:/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/swap.img,xvdc,w']
acpi = 1
localtime = 1
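The pci line in that vm.cfg is what grants the passthrough. As an illustration, the device addresses can be pulled out of such a line with a short filter (a sketch; the sample string is the oakDom1 line above):

```shell
# Sketch: extract the PCI passthrough device addresses (bus:dev.func form)
# from a Xen vm.cfg 'pci' line. The sample line is the oakDom1 one above.
pci_devices() {
  sed "s/.*\[//; s/\].*//; s/'//g; s/ //g" | tr ',' '\n' | sed 's/@.*//'
}
echo "pci = ['30:00.0@15', '40:00.0@16']" | pci_devices
# prints the two controller addresses: 30:00.0 and 40:00.0
```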

Look at the lspci output of the ODA_BASE node; you will see:


30:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

40:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

Okay, when you look at the OS mount points of ODA_BASE, you will see something different:

[root@odabasenode1 ~]# readlink /sys/block/xvda/device/driver
../../bus/xen/drivers/vbd

The vbd driver indicates that the ODA_BASE nodes use paravirtualized drivers for accessing the OS mount points.
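The same readlink check can be run for all xvd devices in one go; a minimal sketch (it takes the sysfs root as an argument and defaults to /sys on a real node):

```shell
# Sketch: print the Xen driver behind each xvd* block device, i.e. the
# same readlink check as above, for all devices at once.
# Takes the sysfs root as $1 so it can also be exercised against a test tree.
xvd_drivers() {
  root=${1:-/sys}
  for dev in "$root"/block/xvd*; do
    [ -L "$dev/device/driver" ] || continue
    printf '%s -> %s\n' "$(basename "$dev")" "$(basename "$(readlink "$dev/device/driver")")"
  done
}
```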

Conclusion;

ODA_BASE is faster because it uses the Intel VT-d extension + PCI passthrough for accessing the disk drives on which it places the ASM diskgroups and, in turn, the Oracle Database files.
For the OS mount points, ODA_BASE uses the same technology that any other guest VM on ODA uses: the paravirtualized drivers.
So, does this mean that if you place your database files on a standard OS mount point, ODA_BASE is not so fast anymore, and that when you use OS mounts in ODA_BASE you get the same performance as any guest VM (I mean PVHVM virtual machine) created on the ODA environment? Really?
Well, that is actually not true, because the guest VMs are deployed on the Oracle VM repository. As the Oracle VM repository resides on the ASM diskgroups of ODA_BASE, as the hypervisor is on Dom0, as the guest VMs' device models are just user-level processes on Dom0, and as Dom0 reaches these repositories through NFS exports, the guest VMs residing on ODA are even slower.
Note that the ODA_BASE virtual machine files are located locally in Dom0.

Waiting for your comments.

Monday, March 14, 2016

EBS 12.2 -- EBS Weblogic administration -- AD scripts or Weblogic Console?

Although it is not stated so clearly in Oracle Support notes or Oracle documents, Oracle Development recommends using the AD scripts (for example: admanagedsrvctl.sh) to manage/control the EBS 12.2 WebLogic components. In this manner, if we need to restart oacore_server1, we need to execute admanagedsrvctl.sh.
On the other hand, FMW/WebLogic delivers an administration utility called the WebLogic Administration Console, and we can also restart an EBS managed server using it.
Well, we have done 5 EBS 12.2 projects and currently have 4 EBS 12.2 support customers. To be honest, we have restarted EBS managed servers using the WebLogic Console most of the time. We have restarted oacore after a hang caused by resource exhaustion, we have restarted oacore because of a development issue, and we have done all these restarts using the WebLogic Console. Interestingly, we haven't encountered a single issue, although we have done this activity using the WebLogic Console approximately 1000 times so far.

Why does Oracle Development recommend the AD scripts?

The AD scripts do the following:
  • validation checks (for example, they check whether the latest FORMSAPP.EAR is deployed)
  • syncing of the related changes between the context file and the WebLogic configuration files

The flow the AD scripts follow is roughly this:

For example: admanagedsrvctl.sh

AD script -> txrun.pl -> txkChkEBSDependecies.pl, then
AD script -> adProvisionEBS.pl -> oracle.apps.ad.tools.configuration.EBSProvisioner
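In day-to-day terms, the recommended path boils down to a stop/start pair of admanagedsrvctl.sh calls. A hedged wrapper sketch (admanagedsrvctl.sh under $ADMIN_SCRIPTS_HOME is the standard EBS 12.2 location; the DRYRUN switch is an illustration-only addition of mine):

```shell
# Hedged sketch: bounce an EBS 12.2 managed server through the AD scripts.
# admanagedsrvctl.sh under $ADMIN_SCRIPTS_HOME is the standard EBS 12.2
# script; DRYRUN=1 (illustration-only switch) just prints the commands.
bounce_managed_server() {
  srv=$1
  for action in stop start; do
    cmd="$ADMIN_SCRIPTS_HOME/admanagedsrvctl.sh $action $srv"
    if [ "${DRYRUN:-0}" = "1" ]; then
      echo "$cmd"
    else
      $cmd    # the real script prompts for the WebLogic admin password
    fi
  done
}
```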

What do the Oracle documents say?

The document: http://docs.oracle.com/cd/E26401_01/doc.122/e22953.pdf

The statement:

Commands to Manage Oracle E-Business Suite Service Processes
• Commands for managing processes on the Applications tier
The adstrtal and adstpall scripts can be used to start and stop all the AutoConfig-managed application tier services in a single operation. Alternatively, it is possible to administer the individual services separately using their respective service control scripts. !!! The oacore, oafm, forms and forms-c4ws services can also be managed by starting and stopping the respective managed servers via the WebLogic Server Administration Console. !!!

So, would we still use the WebLogic Console for these kinds of operations?

I still think that the WebLogic Console can be used. This is because the dependency checks done by the AD scripts do not break the script execution, which makes these checks not so critical, in my opinion.
So, if we need a general summary:
I can say that, when the environment is stable (no recent patches, no recent AutoConfig changes), the WebLogic Console can be used for managing the EBS WebLogic components. On the other hand, if there are recent changes in the context file or in the FMW directory structure caused by, let's say, patching, then the AD scripts should be used.

In my opinion, the AD scripts are recommended, but the WebLogic Console can also be used for managing the EBS 12.2 WebLogic components.

Tuesday, March 8, 2016

EBS-- Cloud and EBS, Oracle Applications Unlimited

Cloud applications have started to be mentioned in every meeting, and questions like "Will EBS be history soon?" or "Will Oracle stop investing in applications like EBS?" come to our minds. The answer is delivered by Oracle under the title of Oracle Applications Unlimited.
It seems that as long as customers use EBS, Oracle will continue to enhance and support it.
Cloud applications will continue to be enhanced, too, and applications like EBS can coexist with the cloud applications as well.

Here is the phrase;

"Oracle Applications Unlimited is Oracle's commitment to continuously innovate in current applications while also delivering the next generation of Cloud applications"

Check this pdf for more details ->

Check Oracle Applications Unlimited for further details ->

Watch this for a quick overview ->

Lastly, here is a consolidated info for the EBS support dates;

Currently, it is stated that EBS 12.2's Premier Support will end in Sep 2018 and Extended Support will run until 2021. After 2021, EBS 12.2 will be on Sustaining Support.
EBS 11.5.10 and 12.0 are already on Sustaining Support.
EBS 12.1's Premier Support will end in Dec 2016, and EBS 12.1 will be on Extended Support until Dec 2019. (These dates include exceptions as well.)

Wednesday, March 2, 2016

ODA X5-2 , Oracle Database 11.2.0.3.15 support, ACFS

The minimum supported database version on ODA X5-2 is 11.2.0.3.15. On the other hand, when you deploy ODA X5-2 (for instance, a bare metal deployment), you will have a 12c Grid Home, and you can also have a 12c RDBMS home using oakcli commands.
So, in order to use 11.2.0.3.15 with the 12c Grid provided with the ODA deployment, you have to download the End User RDBMS Clone file for 11.2.0.3.15 (Patch 14777276). You then unpack it using oakcli and create the dbhome and the database using oakcli commands.
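Wrapped up as a sketch, the flow just described looks roughly like this (hedged: the exact oakcli flags and the clone-file path are assumptions; verify with oakcli -h on your Appliance Manager version):

```shell
# Hedged sketch of the oakcli flow described above, wrapped in a function.
# The flags and the clone-file path are assumptions -- verify with oakcli -h
# on your Appliance Manager version. Patch 14777276 is the 11.2.0.3.15
# End User RDBMS Clone file mentioned in the text.
provision_11203_db() {
  dbname=$1
  clone=$2
  oakcli unpack -package "$clone"                             # stage the RDBMS clone file
  oakcli create dbhome -version 11.2.0.3.15                   # lay down the 11.2.0.3.15 home
  oakcli create database -db "$dbname" -version 11.2.0.3.15   # first DB also creates the ACFS volumes
}
```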
Although the diskgroups delivered with ODA X5-2 are compatible with 12c only (compatible.asm and compatible.rdbms are set to 12c), since ODA X5-2 uses ACFS for the filesystem, this is not a problem: an 11.2.0.3.15 database can still store its files on them.

Note that in ODA X5-2 we have HDDs, cache SSDs and log SSDs at the bottom. The ASM diskgroups reside on top of these disks, and the ACFS filesystems sit on top of these ASM diskgroups.

Also note that when you deploy ODA X5-2 and you don't create an initial database during the deployment, you will have no ACFS (for storing database files) on your ODA X5-2 machine.
But this is not a problem, as the ACFS volumes and mount points will be created automatically when you create your first database using oakcli.

Additional info: ODA versions/Release Dates

ODA V1 was released in Oct 2011, ODA X3-2 in Mar 2013, ODA X4-2 in Dec 2013, and ODA X5-2 in Feb 2015.



Tuesday, March 1, 2016

ODA X5- Bare Metal Deployment , oakcli deploy , Appliance Manager screens

I just finished a bare metal deployment of an ODA X5; the following are my notes:

  • The deployment takes approx. 1 hour to complete.
  • Cabling is important; after cabling and validating the storage, you are good to go with the oakcli unpack and oakcli deploy operations.
  • The color codes are not important, but the paths described in the deployment guide and on the ODA poster must be the same.
  • Before configuring the first network (oakcli configure firstnet), the network cables should be plugged into the net0 and net1 ports of both ODA compute nodes.
  • oakcli configure firstnet should be run twice: once on node 0 and then once on node 1.
  • The SCAN IPs are also important, so the SCAN IPs and SCAN names should be configured in DNS before starting oakcli deploy.



  • A Typical installation seems sufficient.
  • The ILOM addresses are gathered automatically from DHCP, so after the installation you can find these DHCP-assigned ILOM addresses using ipmitool.
  • Nothing more; just give the inputs to the Appliance Manager and wait.
  • Almost forgot: the latest ODA software versions, such as 12.1.2.4.0, deploy a 12c Grid Home and set the rdbms compatible parameters of the diskgroups to 12c. These parameters cannot be changed to a lower value, so be prepared for that.
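On the ipmitool note above: the DHCP-assigned address can be fished out of the output of ipmitool lan print with a small filter. A sketch (pipe the real command output into it on the node; the field layout is the common ipmitool one):

```shell
# Sketch: extract the DHCP-assigned IP from `ipmitool lan print` output,
# as in:  ipmitool lan print 1 | ilom_ip
# Assumes the common ipmitool field layout ("IP Address    : x.x.x.x").
ilom_ip() {
  grep '^IP Address ' | grep -v 'Source' | awk -F':' '{gsub(/ /, "", $2); print $2}'
}
```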