Last month, I wrote an article on Active-Active datacenters (http://ermanarslan.blogspot.com.tr/2016/09/rdbms-active-data-center-from-oracle.html).
In that article, I covered the concepts for building active-active datacenters. The most exciting concept given there was RAC Extended Clusters, also known as Stretch Clusters.
A RAC Extended Cluster is a RAC configuration in which the nodes can be in different sites, and it is applicable when the distance between the sites is not more than 25 km (for non-Exadata environments).
So, I introduced RAC Extended Clustering very briefly there, and now I want to discuss building RAC Extended Clusters on multiple Exadata machines located in different data centers. (Supposing the distance between these data centers is <= 100 meters, since the InfiniBand network currently has a limitation of 100 meters.)
First of all, Oracle Real Application Clusters on Extended Distance Clusters is not supported with Exadata. So, basically, it is not supported to build RAC Extended Clusters on multiple Exadata machines.
I want to write on this topic because a realistic design like the one depicted below actually can't be built due to this support issue.
Although it seems realistic and efficient, an architecture like the one above can't be actualized, because RAC Extended Clusters are not supported with Exadata.
What I actually want to discuss are the reasons behind this lack of support.
First of all, let's review the most crucial requirements for building a stable RAC Extended Cluster.
1) A fast and dedicated connectivity between the nodes and the sites is required for the Oracle RAC inter-instance communication. (This is okay, supposing the distance between the data centers is within the IB network limitation.)
2) A tie-breaking voting disk needs to be placed in a third site. (This can be established as well.)
3) A host-based mirroring solution should be used to host the data on site A and site B and to make sure that the data between the two sites is kept in sync. The storage at each site must be set up as a separate failure group, and ASM mirroring must be used to ensure at least one copy of the data at each site. (See the sketch after this list.) However, in Exadata, the disks in each cell form a separate failure group; it is not the case that all the disks in each storage tier (all cells in each Exadata) form a single failure group.
4) The fast interconnect is very important. So when you have an Exadata, you can't tolerate any slowness there... (At least, if I were Oracle, I wouldn't want to take that risk.) Any bottleneck in the interconnect between two different sites would go against the paradigm of Engineered Systems and would make Exadata look bad. Look at the Oracle whitepaper http://www.oracle.com/technetwork/products/clustering/overview/extendedracversion11-435972.pdf ; it actually explains "Oracle RAC One Node", which doesn't require a very fast interconnect... You see the point.
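To make requirements 2 and 3 concrete, here is a minimal sketch of how a site-aware disk group is created on a conventional (non-Exadata) stretch cluster. The disk paths and names below are hypothetical; the point is that the administrator explicitly places each site's storage in its own failure group and puts a quorum disk (holding only a voting file) in the third site.

-- Minimal sketch for a non-Exadata extended cluster; disk paths are hypothetical.
-- Each site's storage is its own failure group, so NORMAL redundancy keeps
-- one mirror copy per site. The QUORUM failure group in the third site can
-- hold only the tie-breaking voting file, not data.
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP site_a DISK '/dev/asm/site_a_disk1', '/dev/asm/site_a_disk2'
  FAILGROUP site_b DISK '/dev/asm/site_b_disk1', '/dev/asm/site_b_disk2'
  QUORUM FAILGROUP site_c DISK '/dev/asm/site_c_quorum1'
  ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

On Exadata, you never write such a statement for site placement yourself; the failure groups are predefined per cell, which is exactly the limitation discussed below.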
So, you see? Actually, the 3rd requirement given above is what causes Extended Clusters to be unsupported with Exadata.
That is, we have to use ASM with Exadata, and with ASM we can have only Normal (2 mirrors) or High (3 mirrors) redundancy. Also, each storage cell in an Exadata is a failure group, so ASM places mirrored data in different cells; however, it is not guaranteed that ASM will place the mirrored copies in cells that are located in different Exadata machines.
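You can see this cell-based layout on any Exadata by querying the ASM views. A query like the one below (run on the ASM instance; group number 1 is just an example) lists the failure groups of a disk group. On Exadata, each failure group it returns corresponds to one storage cell, and nothing marks which cells belong to which machine.

-- Each FAILGROUP returned here corresponds to one storage cell on Exadata.
SELECT failgroup, COUNT(*) AS disk_count
FROM   v$asm_disk
WHERE  group_number = 1
GROUP  BY failgroup
ORDER  BY failgroup;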
So, in other words, we don't have full control over the distribution of the mirrored data.
The question that comes to mind here is: can't we extend the failure group definition so that each Exadata machine's failure group includes all the disks in that machine? Well, this is also not supported.
The reasoning behind this lack of support is as follows:
Failure groups define ASM disks that share a common potential failure mechanism.
Do two different disks on two different cells have many potential failures in common? Actually no, when compared with two different disks located in the same cell...
A cell has its own controller, power, OS, patch level and dependencies.
So extending the ASM failure group definition to include multiple cells together is just not aligned with the concept. (Remember, an Exadata is not an ASM failure group; it is a group of failure groups.)
If it were possible to define all the disks in one Exadata to be in a single failure group, then it would also break the engineered systems paradigm, right?
The concept to be used here should actually be groups of failure groups, but this concept does not exist.
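Just to illustrate what is missing (this is purely hypothetical syntax; no such clause exists in ASM as of this writing), a "group of failure groups" declaration might conceptually look like the following, telling ASM that the cell-level failure groups of one machine belong together so that mirror copies always land on the other machine:

-- HYPOTHETICAL syntax, for illustration only; ASM does not support this today.
-- Cell and disk names below are made up.
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  SITE exadata_a FAILGROUP cell01 DISK 'o/cell01/DATA_CD_00', 'o/cell01/DATA_CD_01'
                 FAILGROUP cell02 DISK 'o/cell02/DATA_CD_00', 'o/cell02/DATA_CD_01'
  SITE exadata_b FAILGROUP cell03 DISK 'o/cell03/DATA_CD_00', 'o/cell03/DATA_CD_01'
                 FAILGROUP cell04 DISK 'o/cell04/DATA_CD_00', 'o/cell04/DATA_CD_01';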
The good news is that RAC Extended Clusters will probably be supported with Exadata in Oracle Database 12.2 (12c Release 2)... It is not certain yet, but it is expected. So we will see.
By the way, I'm still discussing this support issue with Oracle Support. As you may guess, I will revisit this blog post if I get some additional info.
If you have a question, please don't comment here.
Instead, please create an issue in my forum.
Forum Link: http://ermanarslan.blogspot.com.tr/p/forum.html
Register and create an issue in the related category.
I will support you from there.