In this blog post, I will show you a physical standby configuration consisting of 6 nodes.
To create this demo environment, an Enterprise Edition database (primary) was created using dbca (11gR2), and 5 standby databases were created by cloning this Enterprise Edition primary database.
Nodes and Roles:
1 Primary (DB1)
1 Cascading Physical Standby (DB2)
1 Cascading and Cascaded Physical Standby (DB3)
2 Cascaded Physical Standby (DB4, DB5)
1 Physical Standby (DB6)
All the standbys except DB2 receive redo using the LGWR ASYNC method. Only the transport between DB1 and DB2 is in LGWR SYNC mode.
The topology is as follows;
DB1(primary)--->DB2--->DB3--->DB4
 |                      |
 |                      +--->DB5
 +--->DB6
The target topology in case of a switchover scenario which may be implemented for planned downtime is as follows;
DB3(primary)--->DB1--->DB2
 |               |
 |               +--->DB6
 |--->DB4
 +--->DB5
I am sharing this scenario because the 6-node standby environment will be configured according to it; in other words, the goal is to require minimum effort in case of a planned switchover.
Here are the init.ora parameters used for building this Data Guard environment. This is actually the most important part, as creating the standby environment by cloning the primary and configuring the tnsnames.ora files and the listener are already well-known tasks.
"""""""""""""""""""""""PRIMARY --DB1""""""""""""""""""""""""""""
*.compatible='11.2.0.0.0'
#The Compatible parameter is used to control the formats of oracle data blocks and redo streams. It is basically controlling what is written to disk.
#COMPATIBLE initialization parameter should be set to the same value on both the primary and standby databases.
*.db_name='DB1'
#DATABASE NAME, THIS IS SAME(DB1) ON ALL THE STANDBYs AS WELL.
*.db_unique_name='DB1'
#DATABASE UNIQUE NAME, THIS CHANGES ACCORDING TO THE SID OF THE STANDBY DATABASES
*.log_archive_dest_1='LOCATION=/u01/ERMAN/db1_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB1'
#LOCAL ARCHIVE DEST; it specifies the local archive destination.
#"VALID_FOR" is an optional argument
#ALL_LOGFILES— This destination is valid when archiving either online redo log files or standby redo log files.
#ALL_ROLES—This destination is valid when the database is running in either the primary or the standby role.
*.log_archive_dest_2='SERVICE=DB2 LGWR SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB2'
#IT IS THE REMOTE ARCHIVE DEST, REACHED VIA THE TNS ENTRY CALLED DB2; so it is for archiving to the standby database named DB2
#"VALID_FOR" is an optional argument
#ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.
#PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.
*.log_archive_dest_6='SERVICE=DB6 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB6'
#IT IS THE REMOTE ARCHIVE DEST, REACHED VIA THE TNS ENTRY CALLED DB6; so it is for archiving to the standby database named DB6
#"VALID_FOR" is an optional argument
#ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.
#PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.
*.log_archive_dest_state_1='ENABLE'
#DEFAULT is ENABLE; this parameter specifies that a valid log archive destination can be used for a subsequent archiving operation (automatic or manual)
#Other values are;
#defer: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
#alternate: Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.
*.log_archive_dest_state_2='ENABLE'
#DEFAULT is ENABLE,this parameter specifies that a valid log archive destination can be used for a subsequent archiving operation (automatic or manual)
#Other values are;
#defer: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
#alternate:Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.
*.log_archive_dest_state_6='ENABLE'
#DEFAULT is ENABLE,this parameter specifies that a valid log archive destination can be used for a subsequent archiving operation (automatic or manual)
#Other values are;
#defer: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
#alternate:Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.
*.remote_login_passwordfile='EXCLUSIVE'
#This parameter must be set to make it possible to connect to the database remotely as the SYS user.
#The SYS password should be the same on both the primary and the standby databases. The recommended setting is either EXCLUSIVE or SHARED.
#EXCLUSIVE means: The password file can be used by only one database. The password file can contain SYS as well as non-SYS users
#SHARED means:One or more databases can use the password file. The password file can contain SYS as well as non-SYS users.
#none(not setting at all) means: Oracle ignores any password file. Therefore, privileged users must be authenticated by the operating system.
#Note that, a password file must be present for this to be active, else you will end up with the OS authentication.
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
#This parameter is used for enabling or disabling sending of redo logs to remote destinations and the receipt of remote redo logs.
#DG_CONFIG is used to specify a list of unique database names (DB_UNIQUE_NAME) for all of the databases in the Data Guard configuration.
#This parameter is basically saying: I allow connections between the databases that are on my list.
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
#This parameter specifies archive log naming format.
#%s log sequence number
#%S log sequence number, zero filled
#%t thread number
#%T thread number, zero filled
#%a activation ID
#%d database ID
#%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
#So , the archives are created in the full path of LOG_ARCHIVE_DEST/LOG_ARCHIVE_FORMAT
#Example:/u01/ERMAN/db1_archive/1_49_906569100.arc
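The expansion of this format mask can be sketched with a tiny simulation (illustrative only; archive_name is a hypothetical helper, not an Oracle API):

```python
# Illustrative only: mimic how Oracle expands a LOG_ARCHIVE_FORMAT mask
# such as '%t_%s_%r.arc' (thread, sequence, resetlogs ID).
# The mask and values below match the example path from this post.

def archive_name(mask: str, thread: int, sequence: int, resetlogs_id: int) -> str:
    """Expand the %t/%s/%r placeholders of a LOG_ARCHIVE_FORMAT-style mask."""
    return (mask.replace("%t", str(thread))
                .replace("%s", str(sequence))
                .replace("%r", str(resetlogs_id)))

# Reproduces the example path shown above:
print("/u01/ERMAN/db1_archive/" + archive_name("%t_%s_%r.arc", 1, 49, 906569100))
# /u01/ERMAN/db1_archive/1_49_906569100.arc
```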
*.LOG_ARCHIVE_MAX_PROCESSES=30
#specifies the number of archiver background processes
#min value 1, max value: 30, default value :2
####THE PARAMETERS BELOW ARE USED WHEN THE PRIMARY DATABASE BECOMES A STANDBY, SO WE SET THEM IN PREPARATION FOR A SWITCHOVER. SETTING THESE ON THE PRIMARY IS RECOMMENDED, BUT NOT ACTUALLY REQUIRED ###
*.FAL_SERVER=DB3
#FAL_SERVER points to the database from which this database fetches missing archived logs (normally the primary), so this parameter determines the primary database after a switchover. When a switchover happens, DB1 will be the new standby and will fetch the archivelogs from the new primary, DB3 (in case DB3 cannot send the archivelogs itself).
*.FAL_CLIENT=DB1
# This parameter identifies the standby database (the FAL client) after a switchover. So when a gap occurs, DB1 will be the new standby and DB3, the FAL server, will send the missing log files using this info.
*.DB_FILE_NAME_CONVERT='/u01/ERMAN/db2_data/DB2/','/u01/ERMAN/db1_data/DB1/'
##This parameter is used when a new datafile is created in the primary database, It basically converts the filename of a new datafile on the primary database to a filename on the standby database.
*.LOG_FILE_NAME_CONVERT='/u01/ERMAN/db2_data/DB2/','/u01/ERMAN/db1_data/DB1/'
# this parameter used when a new redolog file is created in the primary. It basically converts the filename of a new log file on the primary database to the filename of a log file on the standby database.
*.STANDBY_FILE_MANAGEMENT=AUTO
#this parameter makes Oracle automatically create files on the standby database when a file is created on the primary, and automatically drop files on the standby when they are dropped from the primary.
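The prefix substitution that DB_FILE_NAME_CONVERT performs can be sketched as follows (a minimal illustration; convert_name and its behavior are a hypothetical model of the DB1 setting above, not Oracle internals):

```python
# Illustrative only: DB_FILE_NAME_CONVERT is essentially a prefix substitution
# applied to datafile paths. This hypothetical helper mimics the DB1 parameter
# above ('/u01/ERMAN/db2_data/DB2/' -> '/u01/ERMAN/db1_data/DB1/').

def convert_name(path: str, pairs: list) -> str:
    """Replace the first matching source prefix with its target prefix."""
    for src, dst in pairs:
        if path.startswith(src):
            return dst + path[len(src):]
    return path  # no pattern matched: path is left unchanged

pairs = [("/u01/ERMAN/db2_data/DB2/", "/u01/ERMAN/db1_data/DB1/")]
print(convert_name("/u01/ERMAN/db2_data/DB2/system01.dbf", pairs))
# /u01/ERMAN/db1_data/DB1/system01.dbf
```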
"""""""""""""""""""""""CASCADING STANDBY --DB2""""""""""""""""""""
*.compatible='11.2.0.0.0'
#The Compatible parameter is used to control the formats of oracle data blocks and redo streams. It is basically controlling what is written to disk.
#COMPATIBLE initialization parameter should be set to the same value on both the primary and standby databases.
*.db_name='DB1'
#DATABASE NAME, THIS IS SAME(DB1) ON ALL THE STANDBYs AS WELL.
*.CONTROL_FILES=/tmp/DB2.ctl
#This is STANDBY CONTROLFILE.
#Standby Controlfiles can be created after taking database backups used for creating the standby.
#The control file must be created after the latest timestamp for the backup datafiles.
#It is the type of controlfile used in physical standby databases.
*.DB_UNIQUE_NAME='DB2'
#SEE, THE UNIQUE NAME IS DIFFERENT FROM THE PRIMARY'S.
*.DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db2_data/DB2/'
#This parameter is used when a new datafile is created in the primary database, It basically converts the filename of a new datafile on the primary database to a filename on the standby database.
*.LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db2_data/DB2/'
# this parameter used when a new redolog file is created in the primary. It basically converts the filename of a new log file on the primary database to the filename of a log file on the standby database.
*.LOG_ARCHIVE_DEST_3= 'SERVICE=DB3 VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=DB3'
#This parameter makes the standby database DB2 cascade the redo data received from DB1 to the cascaded database DB3. This destination is only active while DB2 is in the standby role.
*.LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
#This parameter is used for enabling or disabling sending of redo logs to remote destinations and the receipt of remote redo logs.
#DG_CONFIG is used to specify a list of unique database names (DB_UNIQUE_NAME) for all of the databases in the Data Guard configuration.
#This parameter is basically saying: I allow connections between the databases that are on my list.
*.FAL_SERVER=DB1
#FAL Server means the primary database, so this parameter is used to determine the primary database .
*.FAL_CLIENT=DB2
# This parameter is used to determine the standby database.
*.log_archive_format='%t_%s_%r.dbf'
#This parameter specifies archive log naming format.
#%s log sequence number
#%S log sequence number, zero filled
#%t thread number
#%T thread number, zero filled
#%a activation ID
#%d database ID
#%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
#So , the archives are created in the full path of LOG_ARCHIVE_DEST/LOG_ARCHIVE_FORMAT
#Example:/u01/ERMAN/db2_archive/1_49_906569100.dbf
*.remote_login_passwordfile='EXCLUSIVE'
#This parameter must be set to make it possible to connect to the database remotely as the SYS user.
#The SYS password should be the same on both the primary and the standby databases. The recommended setting is either EXCLUSIVE or SHARED.
#EXCLUSIVE means: The password file can be used by only one database. The password file can contain SYS as well as non-SYS users
#SHARED means:One or more databases can use the password file. The password file can contain SYS as well as non-SYS users.
#none(not setting at all) means: Oracle ignores any password file. Therefore, privileged users must be authenticated by the operating system.
#Note that, a password file must be present for this to be active, else you will end up with the OS authentication.
""""""""""""""CASCADED and CASCADING STANDBY -- DB3""""""""""""""""
#Note: This database is a cascaded standby which also cascades the received redo to other standby databases. The cascading database is DB3, which transports the redo data to DB4 and DB5 (the cascaded databases).
#I will not share all the standby-related parameters, only the ones that are important in this case.
*.CONTROL_FILES=/tmp/DB3.ctl
#Every standby database must have a unique standby controlfile, which should be created after taking the backup of primary.
*.DB_UNIQUE_NAME='DB3'
#unique database name as expected.
*.DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db3_data/DB3/'
#Already explained, the conversion should be based on the primary file locations.
*.LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db3_data/DB3/'
#Already explained, the conversion should be based on the primary file locations.
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db3_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB3'
LOG_ARCHIVE_DEST_4= 'SERVICE=DB4 VALID_FOR=(STANDBY_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB4' #for primary and standby roles
LOG_ARCHIVE_DEST_5= 'SERVICE=DB5 VALID_FOR=(STANDBY_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB5' #for primary and standby roles
LOG_ARCHIVE_DEST_2='SERVICE=DB1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB1' #for primary role
#This parameter is set but currently ignored. It is activated when DB3 becomes the primary (as it is set with PRIMARY_ROLE). So when DB3 becomes primary, it will start transmitting redo to DB1 without any modification.
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB2
FAL_CLIENT=DB3
*.STANDBY_FILE_MANAGEMENT=AUTO
#this parameter makes Oracle automatically create files on the standby database when a file is created on the primary, and automatically drop files on the standby when they are dropped from the primary.
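The role-based gating that VALID_FOR applies to DB3's destinations can be modeled with a small sketch (illustrative only; dest_is_active is a hypothetical function approximating the documented rule, not Oracle behavior verbatim):

```python
# Illustrative only: a toy model of how VALID_FOR gates an archive destination.
# A destination is used only when BOTH the redo source and the current database
# role match its VALID_FOR pair, which is why DB3's DEST_2 (ONLINE_LOGFILES,
# PRIMARY_ROLE) stays dormant until DB3 becomes the primary.

def dest_is_active(valid_for: tuple, logfile: str, role: str) -> bool:
    """Return True when the (logfile_type, role) pair permits this destination."""
    log_ok = valid_for[0] in ("ALL_LOGFILES", logfile)
    role_ok = valid_for[1] in ("ALL_ROLES", role)
    return log_ok and role_ok

# DB3 while it is a standby: the cascading dests fire, the DB1 dest does not.
print(dest_is_active(("STANDBY_LOGFILES", "ALL_ROLES"), "STANDBY_LOGFILES", "STANDBY_ROLE"))   # True  (DEST_4/DEST_5)
print(dest_is_active(("ONLINE_LOGFILES", "PRIMARY_ROLE"), "STANDBY_LOGFILES", "STANDBY_ROLE")) # False (DEST_2)
# After a switchover, DB3 is primary and archives its own online logs:
print(dest_is_active(("ONLINE_LOGFILES", "PRIMARY_ROLE"), "ONLINE_LOGFILES", "PRIMARY_ROLE"))  # True
```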
"""""""""""""""""""""""CASCADED STANDBY -- DB4""""""""""""""""""""
#Note: This database is a cascaded database.
*.db_name='DB1'
CONTROL_FILES=/tmp/DB4.ctl
DB_UNIQUE_NAME='DB4'
DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db4_data/DB4/'
LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db4_data/DB4/'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db4_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB4'
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB3
FAL_CLIENT=DB4
"""""""""""""""""""""""CASCADED STANDBY -- DB5""""""""""""""""""""
*.db_name='DB1'
CONTROL_FILES=/tmp/DB5.ctl
DB_UNIQUE_NAME='DB5'
DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db5_data/DB5/'
LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db5_data/DB5/'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db5_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB5'
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB3
FAL_CLIENT=DB5
"""""STANDBY DATABASE (NOT CASCADED OR CASCADING) --DB6""""""""
*.db_name='DB1'
CONTROL_FILES=/tmp/DB6.ctl
DB_UNIQUE_NAME='DB6'
DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db6_data/DB6/'
LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db6_data/DB6/'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db6_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB6'
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB1
FAL_CLIENT=DB6
*.STANDBY_FILE_MANAGEMENT=AUTO
Proof Of Concept:
In order to show that Data Guard works properly and as planned, we perform 3 tests.
1) We create a table named ERMANNEW in the primary and switch the logfiles. Then we check the standby environments and ensure that the latest redo data is transferred and applied.
2) We create a datafile named systemFileDemo2.dbf in the primary and ensure that it is also created in the standby environments. With this test, we also verify that the STANDBY_FILE_MANAGEMENT and DB_FILE_NAME_CONVERT parameters are working properly.
3) We stop managed recovery on DB4, which is a cascaded standby, and create a datafile named systemFileDemo3.dbf in the primary. The purpose of this test is to show that the cascaded standby configuration works properly; in other words, that a redo apply problem on DB4 affects only DB4, because DB5 is a cascaded standby of DB3, not of DB4.
[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> create table ERMANNEW as select * from dba_objects;
Table created.
SQL> alter system switch logfile;
System altered.
SQL> archive log all;
ORA-00271: there are no logs that need archiving
SQL> SELECT THREAD#, MAX(SEQUENCE#)
FROM V$LOG_HISTORY
WHERE RESETLOGS_CHANGE# =
(SELECT RESETLOGS_CHANGE#
FROM V$DATABASE_INCARNATION
WHERE STATUS = 'CURRENT')
GROUP BY THREAD#;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 44
[oracle@demoorcl ~]$ . setDB2.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 11:27:39 2016
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
[oracle@demoorcl ~]$ . setDB3.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 11:29:08 2016
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
[oracle@demoorcl ~]$ . setDB4.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 11:29:45 2016
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
[oracle@demoorcl ~]$ . setDB5.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 12:04:39 2016
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
[oracle@demoorcl ~]$ . setDB6.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
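The received-vs-applied comparison we just ran on each standby lends itself to automation; here is a minimal sketch (illustrative only: the results values are copied from the outputs above, and lagging is a hypothetical helper, not part of any monitoring tool):

```python
# Illustrative only: the per-database query above returns the last sequence
# received and the last sequence applied. A monitoring script could flag any
# standby whose applied sequence trails the received one.

results = {
    "DB2": (44, 44), "DB3": (44, 44), "DB4": (44, 44),
    "DB5": (44, 44), "DB6": (44, 44),
}

def lagging(results: dict) -> list:
    """Return names of standbys whose applied sequence trails the received one."""
    return [db for db, (received, applied) in sorted(results.items())
            if applied < received]

print(lagging(results))
# [] -> every standby is in sync at sequence 44
```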
[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SELECT DEST_ID "ID",
STATUS "DB_status",
DESTINATION "Archive_dest",
ERROR "Error"
FROM V$ARCHIVE_DEST
where status!='INACTIVE';
ID DB_status Archive_dest Error
---------- --------- ----------------------------
1 VALID /u01/ERMAN/db1_archive
2 VALID DB2
6 VALID DB6
SQL> ALTER TABLESPACE SYSTEM
ADD DATAFILE '/u01/ERMAN/db1_data/DB1/systemFileDemo2.dbf' SIZE 19M;
Tablespace altered.
[oracle@demoorcl ~]$. setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> !ls -al /u01/ERMAN/db1_data/DB1/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:39 /u01/ERMAN/db1_data/DB1/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db2_data/DB2/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:42 /u01/ERMAN/db2_data/DB2/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db3_data/DB3/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:41 /u01/ERMAN/db3_data/DB3/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db4_data/DB4/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:42 /u01/ERMAN/db4_data/DB4/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db5_data/DB5/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:42 /u01/ERMAN/db5_data/DB5/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db6_data/DB6/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:41 /u01/ERMAN/db6_data/DB6/systemFileDemo2.dbf
[oracle@demoorcl ~]$. setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> ALTER TABLESPACE SYSTEM
ADD DATAFILE '/u01/ERMAN/db1_data/DB1/systemFileDemo3.dbf' SIZE 19M;
Tablespace altered.
We stop recovery on DB4 and look at what happens... Only DB4 is affected, because DB5 is a cascaded standby of DB3, not of DB4.
[oracle@demoorcl ~]$ . setDB4.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> exit
[oracle@demoorcl ~]$. setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> !ls -al /u01/ERMAN/db1_data/DB1/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:25 /u01/ERMAN/db1_data/DB1/systemFileDemo3.dbf
SQL> !ls -al /u01/ERMAN/db2_data/DB2/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:27 /u01/ERMAN/db2_data/DB2/systemFileDemo3.dbf
SQL> !ls -al /u01/ERMAN/db3_data/DB3/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:26 /u01/ERMAN/db3_data/DB3/systemFileDemo3.dbf
SQL> !ls -al /u01/ERMAN/db4_data/DB4/systemFileDemo3.dbf
ls: cannot access /u01/ERMAN/db4_data/DB4/systemFileDemo3.dbf: No such file or directory
SQL> !ls -al /u01/ERMAN/db5_data/DB5/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:27 /u01/ERMAN/db5_data/DB5/systemFileDemo3.dbf
SQL> !ls -al /u01/ERMAN/db6_data/DB6/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:26 /u01/ERMAN/db6_data/DB6/systemFileDemo3.dbf
Well, let's continue with our next task, the switchover...
In order to create this demo environment, an enterprise edition database(primary) was created using the dbca (11gR2) and 5 standby databases are created by cloning this enterprise edition primary database.
Nodes and Roles:
1 Primary (DB1)
1 Cascading Physical Standby (DB2)
1 Cascading and Cascaded Physical Standby (DB3)
2 Cascaded Physical Standby (DB4, DB5)
1 Phsyical Standby (DB6)
All the Standby except DB2 is operating using LGWR ASYNC method. Only the transport between DB1 and DB2 is in LGWR SYNC mode.
The topology is as follows;
DB1(primary)--->DB2--->DB3---->DB4
| | ----->DB5
|
|--->DB6
The target topology in case of a switchover scenario which may be implemented for planned downtime is as follows;
DB5
|
|
|->DB3(primary)------->DB1----->DB2
| |
| ->DB4 |------->DB6
So, I m sharing the scenario, because this 6 node standby environment will be configured according to this switchover scenario, in other words to give minimum effort in case of a planned switchover scenario.
Here is the init.ora parameters used for building this dataguard environment. This is the most important part actually, as creating standby environment by cloning the primary, configured tnsnames.ora files and the listener are well-known things, already.
"""""""""""""""""""""""PRIMARY --DB1""""""""""""""""""""""""""""
*.compatible='11.2.0.0.0'
#The Compatible parameter is used to control the formats of oracle data blocks and redo streams. It is basically controlling what is written to disk.
#COMPATIBLE initialization parameter should be set to the same value on both the primary and standby databases.
*.db_name='DB1'
#DATABASE NAME, THIS IS SAME(DB1) ON ALL THE STANDBYs AS WELL.
*.db_unique_name='DB1'
#DATABASE UNIQUE NAME, THIS CHANGES ACCORDING TO THE SID OF THE STANDBY DATABASES
*.log_archive_dest_1='LOCATION=/u01/ERMAN/db1_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB1'
#LOCAL ARCHIVAL DEST, it is used for specifying the local archive dest.
#"VALID_FOR" is an optional argument
#ALL_LOGFILES— This destination is valid when archiving either online redo log files or standby redo log files.
#ALL_ROLES—This destination is valid when the database is running in either the primary or the standby role.
*.log_archive_dest_2='SERVICE=DB2 LGWR SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB2'
#IT S THE REMOTE ARCHIVE DEST, IT IS REACHED VIA TNS entry CALLED DB2, so it is for archiving to the standby database named DB2
#"VALID_FOR" is an optional argument
#ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.
#PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.
*.log_archive_dest_6='SERVICE=DB6 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB6'
#IT S THE REMOTE ARCHIVE DEST, IT IS REACHED VIA TNS entry CALLED DB6, so it is for archiving to the standby database named DB6
#"VALID_FOR" is an optional argument
#ONLINE_LOGFILE—This destination is valid only when archiving online redo log files.
#PRIMARY_ROLE—This destination is valid only when the database is running in the primary role.
*.log_archive_dest_state_1='ENABLE'
#DEFAULT is ENABLE,this parameter specifies that a valid log archive destination can be used for a subsequent archiving operation (automatic or manual)
Other values are;
defer: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
alternate:Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.
*.log_archive_dest_state_2='ENABLE'
#DEFAULT is ENABLE,this parameter specifies that a valid log archive destination can be used for a subsequent archiving operation (automatic or manual)
#Other values are;
#defer: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
#alternate:Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.
*.log_archive_dest_state_6='ENABLE'
#DEFAULT is ENABLE,this parameter specifies that a valid log archive destination can be used for a subsequent archiving operation (automatic or manual)
#Other values are;
#defer: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
#alternate:Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.
*.remote_login_passwordfile='EXCLUSIVE'
#This parameter must be set in order to make it possible to connect to the database remotely using SYS user.
#It should be set the same password for SYS on both the primary and standby databases. The recommended setting is either EXCLUSIVE or SHARED
#EXCLUSIVE means: The password file can be used by only one database. The password file can contain SYS as well as non-SYS users
#SHARED means:One or more databases can use the password file. The password file can contain SYS as well as non-SYS users.
#none(not setting at all) means: Oracle ignores any password file. Therefore, privileged users must be authenticated by the operating system.
#Note that, a password file must be present for this to be active, else you will end up with the OS authentication.
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
#This parameter is used for enabling or disabling sending of redo logs to remote destinations and the receipt of remote redo logs.
#DG_CONFIG is used to specify a list of unique database names (DB_UNIQUE_NAME) for all of the databases in the Data Guard configuration.
#This parameter is basically saying: I allow connections between the databases that are on my list.
*.LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
#This parameter specifies archive log naming format.
#%s log sequence number
#%S log sequence number, zero filled
#%t thread number
#%T thread number, zero filled
#%a activation ID
#%d database ID
#%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
#So , the archives are created in the full path of LOG_ARCHIVE_DEST/LOG_ARCHIVE_FORMAT
#Example:/u01/ERMAN/db1_archive/1_49_906569100.arc
*.LOG_ARCHIVE_MAX_PROCESSES=30
#specifies the number of archiver background processes
#min value 1, max value: 30, default value :2
####PARAMETER BELOW is USED WHEN THE PRIMARY DATABASE BECOME A STANDBY, SO WE SET THEM FOR THE PREPARATION OF SWITCHOVER.SO SETTING THESE FOR PRIMARY ARE RECOMMENDED, BUT NOT REQURIED ACTUALLY ###
*.FAL_SERVER=DB3
#FAL Server means the primary database, so this parameter is used to determine the primary database after a switch over operation. So, when a switchover happens, DB1 will be the new standby and DB1 will fetch the archivelogs from the new primary, DB3.(in case DB3 cant send the archivelogs itself)
*.FAL_CLIENT=DB1
# This parameter is used to determine the standby database after a switch over operation. So, when log switch happens, DB1 will be the new standby and DB2 will send the log files using this info.
*.DB_FILE_NAME_CONVERT='/u01/ERMAN/db2_data/DB2/','/u01/ERMAN/db1_data/DB1/
##This parameter is used when a new datafile is created in the primary database, It basically converts the filename of a new datafile on the primary database to a filename on the standby database.
*.LOG_FILE_NAME_CONVERT= '/u01/ERMAN/db2_data/DB2/,'/u01/ERMAN/db1_data/DB1/'
# this parameter used when a new redolog file is created in the primary. It basically converts the filename of a new log file on the primary database to the filename of a log file on the standby database.
*.STANDBY_FILE_MANAGEMENT=AUTO
#this parameter makes Oracle to automatically create files on the standby database when a file is created on the primary and automatically drop files on the standby when dropped from primary.
"""""""""""""""""""""""CASCADING STANDBY --DB2""""""""""""""""""""
*.compatible='11.2.0.0.0'
#The Compatible parameter is used to control the formats of oracle data blocks and redo streams. It is basically controlling what is written to disk.
#COMPATIBLE initialization parameter should be set to the same value on both the primary and standby databases.
*.db_name='DB1'
#DATABASE NAME, THIS IS SAME(DB1) ON ALL THE STANDBYs AS WELL.
*.CONTROL_FILES=/tmp/DB2.ctl
#This is STANDBY CONTROLFILE.
#Standby Controlfiles can be created after taking database backups used for creating the standby.
#The control file must be created after the latest timestamp for the backup datafiles.
#It is the type of controlfile used in physical standby databases.
*.DB_UNIQUE_NAME='DB2'
#SEE, IT IS A UNIQUE NAME, DIFFERENT FROM THE PRIMARY'S.
*.DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db2_data/DB2/'
#This parameter is used when a new datafile is created on the primary database. It basically converts the filename of a new datafile on the primary database to a filename on the standby database.
*.LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db2_data/DB2/'
# This parameter is used when a new redo log file is created on the primary. It basically converts the filename of a new log file on the primary database to the filename of a log file on the standby database.
*.LOG_ARCHIVE_DEST_3= 'SERVICE=DB3 VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=DB3'
#This parameter makes the standby database DB2 cascade the redo data received from DB1 to a cascaded database named DB3. This destination is only active while DB2 is in the standby role.
*.LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
#This parameter is used for enabling or disabling sending of redo logs to remote destinations and the receipt of remote redo logs.
#DG_CONFIG is used to specify a list of unique database names (DB_UNIQUE_NAME) for all of the databases in the Data Guard configuration.
#This parameter is basically saying: I allow connections between the databases that are on my list.
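Note that LOG_ARCHIVE_CONFIG is a dynamic parameter, so the DG_CONFIG list can be adjusted online if a database is added to or removed from the configuration; a minimal sketch (the names must match each database's DB_UNIQUE_NAME):

```sql
-- LOG_ARCHIVE_CONFIG is dynamic; the DG_CONFIG list can be changed
-- without a restart when the topology changes
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)' SCOPE=BOTH;
```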
*.FAL_SERVER=DB1
#FAL server means the database that serves missing archived logs; so this parameter points to the primary, DB1.
*.FAL_CLIENT=DB2
# This parameter identifies this standby database (DB2) for gap resolution.
*.log_archive_format='%t_%s_%r.dbf'
#This parameter specifies archive log naming format.
#%s log sequence number
#%S log sequence number, zero filled
#%t thread number
#%T thread number, zero filled
#%a activation ID
#%d database ID
#%r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
#So, the archives are created in the full path of LOG_ARCHIVE_DEST/LOG_ARCHIVE_FORMAT
#Example:/u01/ERMAN/db2_archive/1_49_906569100.arc
*.remote_login_passwordfile='EXCLUSIVE'
#This parameter must be set in order to make it possible to connect to the database remotely as the SYS user.
#The SYS password should be the same on both the primary and standby databases. The recommended setting is either EXCLUSIVE or SHARED.
#EXCLUSIVE means: The password file can be used by only one database. The password file can contain SYS as well as non-SYS users
#SHARED means:One or more databases can use the password file. The password file can contain SYS as well as non-SYS users.
#NONE (not setting it at all) means: Oracle ignores any password file. Therefore, privileged users must be authenticated by the operating system.
#Note that a password file must be present for this to take effect; otherwise you will end up with OS authentication.
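As a sketch, the password file for a standby can be created with the orapwd utility (the SYS password must match the primary's; the password shown below is a hypothetical placeholder):

```shell
# Create a password file for DB2; the SYS password must match the primary's.
# The password value is hypothetical.
orapwd file=$ORACLE_HOME/dbs/orapwDB2 password=MySysPassword entries=5
```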
""""""""""""""CASCADED and CASCADING STANDBY -- DB3""""""""""""""""
#Note: This database is a cascaded standby which also cascades the received redo to other standby databases. The cascading database is DB3, which transports the redo data to DB4 and DB5 (the cascaded databases).
#I will not share all the standby-related parameters, only the ones that are important in this case.
*.CONTROL_FILES=/tmp/DB3.ctl
#Every standby database must have a unique standby controlfile, which should be created after taking the backup of primary.
*.DB_UNIQUE_NAME='DB3'
#unique database name as expected.
*.DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db3_data/DB3/'
#Already explained, the conversion should be based on the primary file locations.
*.LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db3_data/DB3/'
#Already explained, the conversion should be based on the primary file locations.
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db3_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB3'
LOG_ARCHIVE_DEST_4= 'SERVICE=DB4 VALID_FOR=(STANDBY_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB4' #for primary and standby roles
LOG_ARCHIVE_DEST_5= 'SERVICE=DB5 VALID_FOR=(STANDBY_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB5' #for primary and standby roles
LOG_ARCHIVE_DEST_2='SERVICE=DB1 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB1' #for primary role
#This parameter is set but currently ignored. It is activated when DB3 becomes the primary (as it is set with PRIMARY_ROLE). So when DB3 becomes primary, it will start to transmit redo to DB1 without any modification.
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB2
FAL_CLIENT=DB3
*.STANDBY_FILE_MANAGEMENT=AUTO
#this parameter makes Oracle automatically create files on the standby database when a file is created on the primary, and automatically drop files on the standby when they are dropped from the primary.
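Once DB3 is mounted as a standby and receiving redo, the cascaded destinations (DB4 and DB5) can be verified with the same V$ARCHIVE_DEST query used later in the proof of concept; an empty Error column indicates healthy transport:

```sql
-- On DB3: check the local and cascaded archive destinations
SELECT dest_id, status, destination, error
FROM v$archive_dest
WHERE status != 'INACTIVE';
```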
"""""""""""""""""""""""CASCADED STANDBY -- DB4""""""""""""""""""""
#Note: This database is a cascaded database.
*.db_name='DB1'
CONTROL_FILES=/tmp/DB4.ctl
DB_UNIQUE_NAME='DB4'
DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db4_data/DB4/'
LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db4_data/DB4/'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db4_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB4'
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB3
FAL_CLIENT=DB4
"""""""""""""""""""""""CASCADED STANDBY -- DB5""""""""""""""""""""
*.db_name='DB1'
CONTROL_FILES=/tmp/DB5.ctl
DB_UNIQUE_NAME='DB5'
DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db5_data/DB5/'
LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db5_data/DB5/'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db5_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB5'
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB3
FAL_CLIENT=DB5
"""""STANDBY DATABASE (NOT CASCADED OR CASCADING) --DB6""""""""
*.db_name='DB1'
CONTROL_FILES=/tmp/DB6.ctl
DB_UNIQUE_NAME='DB6'
DB_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db6_data/DB6/'
LOG_FILE_NAME_CONVERT='/u01/ERMAN/db1_data/DB1/','/u01/ERMAN/db6_data/DB6/'
LOG_ARCHIVE_DEST_1='LOCATION=/u01/ERMAN/db6_archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DB6'
LOG_ARCHIVE_CONFIG= 'DG_CONFIG=(DB1,DB2,DB3,DB4,DB5,DB6)'
FAL_SERVER=DB1
FAL_CLIENT=DB6
*.STANDBY_FILE_MANAGEMENT=AUTO
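For completeness, each node resolves the others through tnsnames.ora entries shaped like the sketch below (one entry per DB_UNIQUE_NAME); the host matches the single demo machine used here, and the port is the default, so treat both as placeholders for your environment:

```
DB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = demoorcl)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DB2)
    )
  )
```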
Proof Of Concept:
In order to show that Data Guard works properly and as planned, we perform 3 tests.
1) We create a table named ERMANNEW on the primary and switch the logfiles. Then we check the standby environments and ensure that the latest redo data is transferred and applied.
2) We create a datafile named systemFileDemo2.dbf on the primary and ensure that it is also created in the standby environments. In this test, we also ensure that the standby_file_management and file name convert parameters are working properly.
3) We stop managed recovery on DB4, which is a cascaded standby, and create a datafile named systemFileDemo3.dbf on the primary. The purpose of this test is to show that the cascaded standby configuration works properly. In other words, in case of a redo apply problem on DB4, only DB4 is affected, because DB5 is not a cascaded standby of DB4; DB5 is a cascaded standby of DB3.
[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> create table ERMANNEW as select * from dba_objects;
Table created.
SQL> alter system switch logfile;
System altered.
SQL> archive log all;
ORA-00271: there are no logs that need archiving
SQL> SELECT THREAD#, MAX(SEQUENCE#)
FROM V$LOG_HISTORY
WHERE RESETLOGS_CHANGE# =
(SELECT RESETLOGS_CHANGE#
FROM V$DATABASE_INCARNATION
WHERE STATUS = 'CURRENT')
GROUP BY THREAD#;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 44
[oracle@demoorcl ~]$ . setDB2.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 11:27:39 2016
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
[oracle@demoorcl ~]$ . setDB3.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 11:29:08 2016
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
[oracle@demoorcl ~]$ . setDB4.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 11:29:45 2016
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
[oracle@demoorcl ~]$ . setDB5.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 21 12:04:39 2016
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
[oracle@demoorcl ~]$ . setDB6.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied"
from (select thread# thrd, max(sequence#) almax
from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database)
group by thread#) al,
(select thread# thrd, max(sequence#) lhmax
from v$log_history
where first_time=(select max(first_time) from v$log_history)
group by thread#) lh
where al.thrd = lh.thrd;
Thread Last Seq Received Last Seq Applied
---------- ----------------- ----------------
1 44 44
[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SELECT DEST_ID "ID",
STATUS "DB_status",
DESTINATION "Archive_dest",
ERROR "Error"
FROM V$ARCHIVE_DEST
where status!='INACTIVE';
ID DB_status Archive_dest Error
---------- --------- ----------------------------
1 VALID /u01/ERMAN/db1_archive
2 VALID DB2
6 VALID DB6
SQL> ALTER TABLESPACE SYSTEM
ADD DATAFILE '/u01/ERMAN/db1_data/DB1/systemFileDemo2.dbf' SIZE 19M;
Tablespace altered.
[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> !ls -al /u01/ERMAN/db1_data/DB1/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:39 /u01/ERMAN/db1_data/DB1/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db2_data/DB2/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:42 /u01/ERMAN/db2_data/DB2/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db3_data/DB3/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:41 /u01/ERMAN/db3_data/DB3/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db4_data/DB4/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:42 /u01/ERMAN/db4_data/DB4/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db5_data/DB5/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:42 /u01/ERMAN/db5_data/DB5/systemFileDemo2.dbf
SQL> !ls -al /u01/ERMAN/db6_data/DB6/systemFileDemo2.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 14:41 /u01/ERMAN/db6_data/DB6/systemFileDemo2.dbf
[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> ALTER TABLESPACE SYSTEM
ADD DATAFILE '/u01/ERMAN/db1_data/DB1/systemFileDemo3.dbf' SIZE 19M;
Tablespace altered.
WE STOP RECOVERY ON DB4 and look what happens... Only DB4 is affected, because DB5 is not a cascaded standby of DB4; DB5 is a cascaded standby of DB3.
[oracle@demoorcl ~]$ . setDB4.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> exit
[oracle@demoorcl ~]$ . setDB1.env
[oracle@demoorcl ~]$ sqlplus "/as sysdba"
SQL> !ls -al /u01/ERMAN/db1_data/DB1/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:25 /u01/ERMAN/db1_data/DB1/systemFileDemo3.dbf
SQL> !ls -al /u01/ERMAN/db2_data/DB2/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:27 /u01/ERMAN/db2_data/DB2/systemFileDemo3.dbf
SQL> !ls -al /u01/ERMAN/db3_data/DB3/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:26 /u01/ERMAN/db3_data/DB3/systemFileDemo3.dbf
SQL> !ls -al /u01/ERMAN/db4_data/DB4/systemFileDemo3.dbf
ls: cannot access /u01/ERMAN/db4_data/DB4/systemFileDemo3.dbf: No such file or directory
SQL> !ls -al /u01/ERMAN/db5_data/DB5/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:27 /u01/ERMAN/db5_data/DB5/systemFileDemo3.dbf
SQL> !ls -al /u01/ERMAN/db6_data/DB6/systemFileDemo3.dbf
-rw-r----- 1 oracle oinstall 19931136 Mar 22 15:26 /u01/ERMAN/db6_data/DB6/systemFileDemo3.dbf
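To finish the test, managed recovery on DB4 can be resumed with the standard 11g real-time apply statement below; once recovery catches up, the missing systemFileDemo3.dbf should appear on DB4 as well:

```sql
-- On DB4: restart managed recovery with real-time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```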
Well, let's continue with our next task, the switchover...