In this post, I will share some Active/Passive EBS 12.2 Architectures extended with simple DR solutions.
These configurations can support new EBS 12.2 environments with up to 100 users and are sized to keep hardware and software license fees to a minimum.
Note that this configuration can be modified, extended, and scaled up or down, horizontally or vertically, according to environmental needs. Sizing may also vary according to the expectations, the requirements, and the EBS modules that are planned to be implemented.
As for the sizing:
First of all, our hardware resources can be sized as follows.
Note that these figures are based on a standard EBS installation that includes HR and the standard Financials modules.
In the paragraphs below, I describe an active/passive + DR architecture, as this post is aimed at this type of configuration (cheap, redundant, reasonably stable, and fast enough).
Source site:
Two servers in the source site: one active, one passive.
Both source servers are connected to shared storage (a fiber/HBA connection is preferred).
Failover operations can be done manually or by using clustering software (Oracle Restart may be a solution; on IBM platforms, HACMP will work). A minimal manual-failover sketch is given at the end of this section.
The EBS application and database tiers are placed on the same server, although this is not recommended (we are aiming to be minimal).
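For illustration, a manual failover to the passive node could look like the following. This is only a sketch; the volume group, mount point, and path names below are assumptions, not values from this post.

# Run on the passive node, once the active node is confirmed down.
vgchange -ay vg_ebs                    # activate the shared-storage volume group (hypothetical name)
mount /dev/vg_ebs/lv_u01 /u01          # mount the shared EBS filesystem
su - oracle -c 'lsnrctl start'                        # start the listener
su - oracle -c 'echo startup | sqlplus / as sysdba'   # start the database
su - applmgr -c '$ADMIN_SCRIPTS_HOME/adstrtal.sh apps/apps'   # start the app tier (12.2 also prompts for the WebLogic password)

Clustering software automates exactly these steps, plus relocating the virtual IP.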
DRC site:
The DRC consists of a single server.
The replication can be storage-based, Data Guard-based, or VM-based, depending on the architecture.
If Data Guard is used, the target/DR server should be licensed as well.
Application filesystem synchronization can be done using a utility like rsync or similar tools that do the same job (a minimal example follows these notes).
Storage replication, on the other hand, does not require any Oracle licenses or a utility for filesystem replication.
However, storage replication/snapshot/SnapMirror utilities require their own licenses, so this should be kept in mind.
Also, storage-based replication works at the block level, so it does not understand Oracle's block format. From this perspective, Data Guard is the best solution for creating and feeding Oracle replicas/standby/DR servers. On the other hand, storage vendors nowadays offer Oracle-integrated solutions, and when these are implemented, the storage replication is aware that it is dealing with Oracle databases while replicating block-level data.
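For example, the application filesystem can be pushed to the DR server with a scheduled rsync like the one below. This is a minimal sketch; the hostname drserver and the path /u01/install/APPS are assumptions.

# One-shot sync of the EBS application filesystem to the DR server:
rsync -az --delete /u01/install/APPS/ applmgr@drserver:/u01/install/APPS/

# The same job as a daily cron entry (crontab -e as applmgr):
0 2 * * * rsync -az --delete /u01/install/APPS/ applmgr@drserver:/u01/install/APPS/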
Production Site:
For the Active node:
OS: Oracle Linux 6 64-bit, disks from storage, local disks can be minimal (300-500 GB), 64 GB memory, 16 cores, 1 HBA (2 HBAs preferred for multipath).
For the Passive node:
OS: Oracle Linux 6 64-bit, 64 GB memory, disks from storage, local disks can be minimal (300-500 GB), 16 cores, 1 HBA (2 HBAs preferred for multipath).
Storage:
A storage system with 2 controllers and 15k disks (1.2 TB of usable space, i.e., what remains after RAID configuration). The disk size depends on the expected yearly growth percentage.
Optionally: for automatic failover, clustering software; and for an additional layer of replication, a storage system in the DR site that can do replication/SnapMirror (NetApp, EMC, Oracle ZFS, etc.).
DRC Site:
OS: Oracle Linux 6 64-bit, 1.2 TB disk space (local), 36 GB memory, 8 cores.
Optionally a storage system, in case storage replication is required in addition to rsync and Data Guard. If that's the case, a storage system with 1.2 TB can do the job (of course, it should be capable of doing replication/SnapMirror kinds of work).
In our first configuration, we use an ODA machine for the source site. We use the ODA in a way that reduces the RAC license needs; that is, we use a virtualized ODA to place our application tiers and databases.
We utilize the ODA_BASE domain for placing our EBS databases, but we use only one node of ODA_BASE to keep our license fees minimal.
This configuration gives us the ability to run our EBS databases or applications on the second ODA node in case of a failure.
In case of a disaster, we use Data Guard (async); a minimal sketch of the async transport setting follows these notes. We choose our DR server to be a single node that has the power to handle the workload (at least for one day), in case it must be used during planned maintenance on the source servers or a disaster in the source site.
By using the ODA's virtualization, we use the production machine as a consolidated environment on which the PROD, TEST, and DEV instances of EBS (app + db) run.
Note that Data Guard only replicates the database files, so the Oracle binaries, Oracle Homes, and the application filesystem should be replicated in addition to that (using rsync or a similar tool).
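As a sketch, enabling asynchronous redo transport on the primary could look like the following. The names EBSPROD_DR (TNS alias) and EBSPRODDR (DB_UNIQUE_NAME) are hypothetical, and the full Data Guard setup (standby creation, standby redo logs, FAL settings) is out of scope here.

# Run as the oracle user on the primary database node.
sqlplus / as sysdba <<EOF
ALTER SYSTEM SET log_archive_dest_2='SERVICE=EBSPROD_DR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=EBSPRODDR' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2=ENABLE SCOPE=BOTH;
EOF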
Another alternative may be using storage-based replication directly, without Data Guard. This configuration can be preferred in case one doesn't want to license the target/DRC environment.
In this case, the topology can be like the following:
Another configuration can be based on traditional servers in the production site.
In this configuration, we still use Data Guard and rsync for the replications, so the only difference is that this time we use traditional servers (HP, Dell, or Sun) and a storage system in the production site to provide the active/passive clustering.
As for the DR site, we still have only one server, which can even be a VM.
In this case, the topology can be like the following:
The network between the production site and the DRC is also important; low latency will improve recoverability.
As for the servers, here are some notes:
Production servers are preferred to be physical (for example, HP or Sun servers).
The DRC server can be a VM (Oracle VM Server, ESX, etc.).
Test environments can be placed on the passive production node, or they can be placed on a virtual server outside this configuration. For each TEST environment, 32 GB memory, 750 GB disk space, and 8 CPU cores are enough.
The CPU cores specified in this note are Intel; the latest-generation Intel CPUs are preferred.
There can also be an additional storage system in the production site to increase storage-level fault tolerance.
Lastly, as for the "U" sizes, here are some notes:
The ODA is a 4U rack-mountable system.
The traditional servers supporting these resources can be between 1U and 4U as well, and if VMs are preferred, they don't even count toward the U sizes, as they are virtual. Storage for this capacity can be handled with a 4U architecture.
With a quick calculation, 8U of free space is okay for the production site. The DR site can be somewhere between 1U and 4U, depending on the configuration.
Important note: The values used for sizing are completely dependent on the environment, so they may vary accordingly. All this sizing information is given for a new EBS 12.2 environment that is expected to support 70 users, 10% of whom are expected to access the EBS applications concurrently. The environment is expected to grow 10% per year, and the configuration is sized to support 3 years. This is for an EBS environment in which 60,000 invoices are issued and 40,000 bills (supplier invoices) are received in a single month.
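As a quick illustration of how the 1.2 TB usable-space figure can be sanity-checked (the 900 GB initial footprint here is a hypothetical value, not one taken from this post): 900 GB x 1.10 x 1.10 x 1.10 ≈ 1,198 GB, so roughly 1.2 TB of usable space covers three years of 10% yearly growth.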
Hello Erman,
Thanks for your interesting post. My questions are:
From the apps perspective: should we install apps using the physical hostname, or will it work with the cluster name?
If the cluster name doesn't work, should we execute autoconfig each time we enable the passive server?
Hi Omar, considering the cluster name as a virtual host, my answer is yes.
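(For reference, assuming a standard 12.2 layout: autoconfig is run on the application tier with $ADMIN_SCRIPTS_HOME/adautocfg.sh and on the database tier with $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh.)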