Monday, February 3, 2020

Oracle Linux / An Important Part of the Red Stack! - Support Subscription Types - Clear and Simple

I just wrote a blog post about Oracle Linux and its advantages for Oracle customers. That post was actually inspired by Oracle's own documents. However, I still felt the need to write about this subject: Oracle Linux for Oracle Database, an important part of the red stack!

I wrote that because I think there may still be some people out there who don't know the advantages of 'Oracle on Oracle'. So, as I have already given the info about Oracle Linux and its advantages, it is now time to take a look at the Oracle Linux support types in this complementary blog post.

Before going further, I want to share a list of Oracle Linux documents that may help you get more information about Oracle Linux and the hot topics in the Oracle Linux world (Oracle Linux upgrades, for instance):

1. Upgrading an Oracle Linux System
https://docs.oracle.com/en/operating-systems/oracle-linux/7/install/ol7-upgrade-cond.html
2. Oracle Linux FAQ
http://www.oracle.com/us/technologies/027617.pdf
3. Why Oracle Database runs best on Oracle Linux
http://www.oracle.com/us/technologies/linux/linux-for-oracle-database-wp-2068570.pdf
4. Oracle Linux Virtualization Manager (New Virtualization solution based on KVM)
https://www.oracle.com/a/ocom/docs/linux/oracle-linux-virtualization-manager.pdf

Anyways, let's get to some clear definitions of the Oracle Linux support subscriptions and some relevant information for choosing the correct Oracle Linux support type according to your needs.


First of all, Oracle Linux support subscriptions are purchased per physical server.
The important metrics for sizing these subscriptions are the number of physical servers and the number of CPU sockets. By the term CPU socket, we actually mean physical CPUs.

Core count is not important; multiple cores and hyperthreading still count as a single physical CPU.
Even if your Oracle Linux runs in a virtualized environment like Oracle VM Server, Hyper-V, or VMware, the subscriptions are still counted on a physical server and CPU socket basis.
That is, the number of guest VMs is not important.
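
If you want to verify the socket count of a server before sizing your subscriptions, a quick check like the following should do (example output; your counts will differ):

$ lscpu | grep "Socket(s)"
Socket(s):             2
$ grep "physical id" /proc/cpuinfo | sort -u | wc -l
2

Both outputs above indicate a 2-socket server, which would qualify for the 'Limited' subscription types mentioned below.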

We have 5 subscription types. These are as follows:

Oracle Linux Network: This is the entry level. It doesn't actually provide any support services; only access to the Unbreakable Linux Network and full indemnification for Oracle Linux.

Oracle Linux Basic Limited: This one is the same as 'Oracle Linux Basic'; the same support services that we have in Basic support are provided with the Limited one as well. If your physical servers have 1 or 2 sockets, you can go with this one.

Oracle Linux Basic: Basic support offers the following support services:

  • 24x7 telephone and online support
  • Around-the-clock access to enhancements, updates, and errata
  • Oracle Enterprise Manager for Linux Management
  • Spacewalk support
  • High availability with Oracle Clusterware
  • Comprehensive tracing with DTrace
  • Oracle Linux load balancer
  • Comprehensive indemnification
  • Oracle Container runtime for Docker

Oracle Linux Premier Limited: This one is the same as 'Oracle Linux Premier'; the same support services that we have in Premier support are provided with the Limited one as well. If your physical servers have 1 or 2 sockets, you can go with this one.

Oracle Linux Premier: Premier support is the all-inclusive option. It offers the following support services:

  • 24x7 telephone and online support
  • Around-the-clock access to enhancements, updates, and errata
  • Oracle Enterprise Manager for Linux Management
  • Spacewalk support
  • High availability with Oracle Clusterware
  • Comprehensive tracing with DTrace
  • Oracle Linux load balancer
  • Comprehensive indemnification
  • Zero-downtime patching with Ksplice
  • Oracle Linux Virtualization Manager
  • Oracle Linux Cloud Native Environment, Oracle Container runtime for Docker, Oracle Container Services for use with Kubernetes
  • Gluster Storage for Oracle Linux
  • Ceph storage for Oracle Linux
  • Oracle Linux software collections
  • Oracle Linux high availability services support (Corosync and Pacemaker)
  • Premier backports
  • Lifetime sustaining support

As you see, very clear and simple.

Basically, if you have mission-critical production environments that must be up all the time, and if you want to use the cool features like zero-downtime patching, Gluster storage, and so on, then you should go with Premier support.

On the other hand, if you don't need those extras, but you still want to be safe and almost fully supported in the core Oracle Linux domain, then you can go with Basic support.

You don't have to count your cores, threads, guest VMs, and so on. You just need to count your physical CPUs and decide between the Basic and Premier support levels. That's it :)


Read for more ->  https://docs.oracle.com/en/operating-systems/oracle-linux/7/licenses/

Friday, January 31, 2020

Oracle Linux / Linux for Oracle Database / Why?

Today's blog post will be about Oracle Linux, a Linux distribution packaged and freely distributed by Oracle, available partially under the GNU General Public License since late 2006.

We deal with this operating system every day, as most of the critical databases are running on it.
We work with it while managing Oracle databases on commodity systems, but not only that...
We also come into contact with Oracle Linux as it is embedded in Oracle's leading engineered systems, such as Oracle Database Appliance, Exadata, ZDLRA, BDA, and so on.



Besides, nowadays we see it more often, especially in environments where 19C database upgrade projects are underway. That is, Oracle Linux has become the first choice for Oracle customers who are planning to upgrade their Oracle databases to 19C. Remember, Linux 7 is a prereq for those customers. So while upgrading their Linux operating systems, some customers also decide to change their Linux distribution and continue on their way with Oracle Linux.

According to the industry analyst firm IDC, Oracle Linux's market growth is significant:  
 “Oracle Linux has been consistently one of the fastest growing enterprise Linux distributions in the past few years. Much of this growth comes from customers moving to Oracle Linux in order to take advantage of ‘Oracle on Oracle’ i.e., Oracle’s OS optimization for its own solution stacks, running on-premises and in the cloud” - Ashish Nadkarni, IDC 

Today, I want to shed some light on the reasons behind this decision.
The reasons I summarize below are also what make Oracle itself use Oracle Linux in its own systems and leading engineered systems.

Oracle's own database, middleware, and application software engineering projects run on Oracle Linux. Each day, Oracle Linux receives hundreds of hours of database and application testing. So, in a way, if we are using Oracle products, then we are safer on Oracle Linux.

Having Oracle databases or applications running on Oracle Linux means an end-to-end Oracle stack. This provides administration and support efficiency (single-vendor support, no need for cross-platform skill sets, etc.).

Oracle engineering puts lots of effort, including stress tests, performance tests, and system verification, into certifying Oracle applications on Oracle Linux. Again, we are safer.

With Oracle Linux, we have the opportunity to use Oracle's own kernel, UEK (Unbreakable Enterprise Kernel), which is optimized for the best performance. Yes! The UEK kernel has performance enhancements: optimized system calls and C library interfaces, plus enhancements and optimizations to the process scheduler, memory management, file systems, and the networking stack.
This means you have an advantage in performance (in application performance and in query processing times), as well as in transaction capacity and scalability.
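
By the way, checking whether a server is actually running UEK is easy; the "uek" suffix in the kernel release string tells the story (example output from an Oracle Linux 7 box; your version will differ):

$ uname -r
4.14.35-1902.3.2.el7uek.x86_64

If you see "el7" without the "uek" suffix, you are running the Red Hat Compatible Kernel instead.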

When both the application stack (including the database layer) and the OS stack come from the same vendor, there is an opportunity for deep integration between these layers. When you add the industry collaborations on top of this, you get a significant advantage. Oracle Linux distinguishes itself from the other OS platforms with these abilities (in addition to UEK, of course).

Oracle Linux engineers are focused on Linux development. They improve their abilities by dealing with this operating system across a very big installed base. Things they developed have become parts of the Linux kernel, such as RDS, which was developed by Oracle as a low-latency connectionless protocol that improves database performance on Linux. Oracle engineers also tuned the InfiniBand stack. At the same time, the collaboration between Intel and Oracle made it possible to accelerate columnar compression/decompression and encryption operations and to improve NUMA scalability.
This means Oracle Linux is a distribution developed in a tightly integrated environment, with Oracle products as the primary focus. This is an advantage if we are using Oracle products, especially the Oracle database.

So far so good. As we are specifying the reasons that make customers choose Oracle Linux, we must also list the features of Oracle Linux, because these features are important factors in that decision as well.

Resource management: the ability to perform instance caging by binding instances to specific CPUs.
The ability to pin processes to the same processor and the same memory nodes on a NUMA architecture. (This NUMA binding increases performance, as the CPUs used by the relevant processes access local memory rather than non-local memory.)
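
As a minimal illustration of these two capabilities (the values below are examples, not recommendations): instance caging is enabled by setting cpu_count together with a resource manager plan, and NUMA binding at the OS level can be done with numactl.

SQL> alter system set cpu_count=4 scope=both sid='*';
SQL> alter system set resource_manager_plan='DEFAULT_PLAN' scope=both sid='*';

$ numactl --cpunodebind=0 --membind=0 <command>   # run a process using only NUMA node 0's CPUs and memory

With the two alter system commands above, the instance is caged to 4 CPUs, and the resource manager enforces that limit.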

The ability to use a smart flash cache, by expanding the database buffer cache to a second-level, flash-drive-based cache (flash access is much faster than disk access). This is directly relevant to database customers. To expand the buffer cache, all we need to do is attach the flash drive and make our database use it by setting the relevant database parameters and restarting the database:
SQL>  alter system set db_flash_cache_file='/dev/sdb' scope=spfile;
System altered.
SQL> alter system set db_flash_cache_size=1G scope=spfile;
System altered.

By using Ksplice, we can have zero-downtime updates for the kernel and key user-space libraries, with no reboots or interruption. This simplifies maintenance and increases continuity and availability for our mission-critical database applications.
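
For illustration, a Ksplice update session typically looks like the following from the command line (assuming the Ksplice Uptrack client is installed and registered):

# uptrack-upgrade -y     # download and apply all available rebootless updates
# uptrack-show           # list the updates currently applied in memory

The updates take effect immediately in memory, while "uname -r" keeps reporting the originally booted kernel version.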

With the MCE (Machine Check Exception) daemon running on Oracle Linux, we can trigger events based on certain error thresholds in the CPU or memory. The daemon can also take actions based on those thresholds. So we have a tool that presents information like hardware errors, parity errors, and cache errors to the OS.

With the integrity-check mechanism (following T10 PI -- Protection Information), we have the opportunity to perform integrity checks from the application to the OS, through the switch and host bus adapter, and down to the disk storage device itself.
Oracle has an open-source interface to be used for this task.
Read the following whitepaper for more info on this ->
http://www.oracle.com/us/technologies/linux/prevent-silent-data-corruption-1852761.pdf

Oracle Real Application Clusters, which provides redundancy in the event of a hardware or software failure, is free of charge for Oracle Linux Basic and Premier Support customers. This is definitely a good reason.

Oracle Linux has built-in security mechanisms like IP filtering for firewall capabilities, strong encryption, and military-grade SELinux mechanisms. In addition to these OS-level security mechanisms, Oracle Linux is tested and recommended for hosting the Oracle Database security options (TDE, Data Redaction, Audit Vault and Database Firewall).

Virtualization, cloud readiness, and manageability are 3 other important reasons for using Oracle Linux. We have OVM, KVM, and ready-to-use, easy-to-deploy VM templates for having fully configured Oracle software environments and OS images in virtual machines. Today, even an EBS or CRM application can be directly provisioned using these templates. On the cloud side, we get Oracle Linux support at no additional cost when we subscribe to OCI. With this support, we can use Ksplice, 24/7 Linux support services, the MOS Linux knowledge base, and Oracle Enterprise Manager (for Linux management). By using Enterprise Manager for Linux management, we reduce infrastructure management cost and TCO.

Oracle's enterprise engineered systems are running Oracle Linux. This is one of the most important reasons for choosing Oracle Linux actually.

Engineered systems running Oracle Linux:


  • Exadata
  • ODA
  • Exalytics
  • BDA
  • Private Cloud Appliance
  • ZDLRA


Finally, we have validated configurations, which can be used as references for easier, faster, and lower-cost deployments of Oracle Linux and Oracle VM solutions.
These validated configurations are published on Oracle Technology Network:
https://www.oracle.com/technetwork/topics/linux/validated-configurations-085828.html

Oracle Linux is free to download. All errata are freely available from the Oracle Linux yum server.

Oracle supplies preinstall packages (preinstall RPMs) for Oracle Database, and even for EBS. Moreover, Oracle Linux provides packages for developers, scripting languages, and database connectors via yum.
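
For example, on an Oracle Linux 7 server, preparing the OS for a 19C database installation is typically a single yum command (the 12c/18c equivalents follow the same naming pattern):

# yum install -y oracle-database-preinstall-19c

This single RPM sets the kernel parameters, resource limits, and OS users/groups that the database installer expects.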

Well.. I tried to list the reasons that make Oracle Linux get chosen, and these reasons are actually what make the Oracle Database run best on Oracle Linux.

There may be more reasons to list and things to discuss, of course, but it is obvious that you can deploy your Oracle databases or Oracle applications on Oracle Linux with peace of mind.
So let's download a copy of Oracle Linux and get started :)
See you in my next blog post, which will be a short and clean one -- about Oracle Linux licensing :)
I almost forgot, here is my tech review of Oracle Linux. (I wrote it in 2017 actually, so it is an old one, but still.. have a look..)

https://www.itcentralstation.com/product_reviews/oracle-linux-review-42176-by-it_user600741

Wednesday, January 29, 2020

Twitter Account! Erman Arslan's Oracle Blog ♠💻📖🖋 @EAOracleAce

This week, Erman Arslan's Oracle Blog starts sailing on Twitter.

Stay tuned, follow the latest tweets, read the blog, keep up to date without a clog :)

Tuesday, January 28, 2020

You have a technical question? Are you asking for technical advice? // Selections from solved issues (Erman Arslan's Oracle Forum)

In addition to 7 years of blogging (ermanarslan.blogspot.com), it has now been 6 years since I started this voluntary remote support project.

With this project, I have tried to help my followers find solutions that will work for them. (https://ermanarslan.blogspot.com/p/forum.html)

Although it is hard to maintain this kind of continuous support in parallel with my work, I am pleased to see the increasing count of solved issues.

We have more than 1350 tips and/or solutions at the moment.

In this blog post, I want to give some selections from the wide range of categories covering different types of topics.


Selections From Solved Issues

Hyperion / question on full TLS implementation -- TLS 1.2, LDAPS and EPM 11.1.2.4

EBS / Configuring a concurrent program to be run by only 1 user at a time.

EBS / Running EBS custom code using sqlplus (setting apps context, setting the language -- using mo_global and fnd_global)

Linux / Script to get the WWID of disks (using correct arguments with scsi_id)

EBS / Question on OAM/OID integration with r12.2

EBS / How to disable JOC

Goldengate / REPERROR (-1, DISCARD)

EBS / PERWSVAC FRM 40735: On insert trigger  raised unhandled exception ORA-01400

Database / Error ORA-03137 & ORA-12012

Oracle Database Appliance / Questions about ODA

Do you have a technical question? Are you asking for technical advice?

Wednesday, January 15, 2020

EBS R12 -- ERROR -- "Cannot open your concurrent request's log file" / Concurrent Managers / NFS shares with 64 bit Inodes / FileHandles / nfs.enable_ino64=0 / LD_PRELOAD / uint64_t / EOVERFLOW and more.

Let's recall the facts about the EBS R12 (12.0 and 12.1) apps tier. Is it 32-bit or 64-bit?
Actually, we don't have to go too far to find the answer, as I already wrote a blog post about it back in 2016 ->

https://ermanarslan.blogspot.com/2016/01/ebs-r12-apps-tier-32-bit-or-64-bit.html

In short, when we talk about a 64-bit EBS 12.1 or 12.0 system, we actually talk about a 64-bit Oracle database plus an application tier that has both 32-bit and 64-bit components.
That means, even if our EBS environment is 64-bit, we still have 32-bit components deployed and 32-bit code running in our EBS environment.

Except for some 64-bit executables, such as those in the Advanced Planning product line, the EBS apps tier is 32-bit. That's why we apply the 32-bit versions of patches to the 10.1.2 and 10.1.3 Oracle Homes.

Well.. After this quick but crucial refresher, let's get started with our actual topic, the problem that made me write this blog post.

Two days ago, I dealt with a problem in an EBS 12.1 environment;
an environment in which we had
2 apps nodes, 1 OAM/OID node, and 2 database nodes.
It was an advanced configuration, but the problem was in one specific area.

Basically, our problem was with the concurrent managers.
That is, the concurrent managers could not be started.

Actually, they were started by the Internal Concurrent Manager (ICM), but they were then going into the "defunct" state. So they were becoming zombies just after they were started, and when we checked the syslog, we saw that the processes were getting segmentation faults.

This cycle was repeated every minute. I mean, the managers were started by ICM and then went into the defunct state. ICM recognized that they were dead, so it killed the remaining processes and then restarted them, again and again.

We took one of the Standard Manager processes as a sample and checked its log file.
The problem was very clear.

The process was complaining about being unable to do I/O on its log file (the manager's log file).

The error recorded in the Standard Manager's log file was "Concurrent Manager cannot open your concurrent request's log file."

All the standard managers had the same error in their log files, and all the associated FNDLIBR processes were going into the defunct state just after they were started by ICM.

When we analyzed the OS architecture on the apps nodes, we saw that there were NFS shares present.
The NFS shares were mounted, and there were also symbolic links through these NFS shares to the directories hosting the concurrent managers' out and log files.

When we "cd" into these directories, we could list those log files and actually they were there.. We could even edit (write) and read the problematic log files without any problems .. The permissions were okay and everything looked good.

However, it seemed that the code, I mean the FNDLIBR processes, couldn't do I/O on these files.

With this acquired knowledge, we analyzed the storage architecture, the storage configuration itself.

It was a NetApp, a newly acquired one, and those NFS shares had been migrated to this new storage 2 days earlier. So it was winking at us. Something in the storage layer had to be the real cause.

We knew that these FNDLIBR processes were 32-bit, and they could fail while dealing with a 64-bit object. The code was probably getting EOVERFLOW. (EOVERFLOW: value too large to be stored in data type.)

So we told the storage admin to check whether there was any configuration that might cause this to happen. Especially the inode configuration should have been checked on this new storage; using 64-bit inodes might cause this.

Actually, we had solutions for this kind of inode problem in the EBS application and Linux OS layers as well.

In the EBS layer, we could apply the patch -> Patch 19287293: CP CONSOLIDATED BUG FOR 12.1.3.2.
At this point, I asked myself a question... How can Oracle fix this by applying a patch to its own code?

Without analyzing the patch, my answer was: probably by delivering a wrapper that intercepts I/O functions like readdir(), stat(), etc., and returns 32-bit inode numbers that the calling application/process can handle. Maybe they used LD_PRELOAD, which can be used to intercept these calls and do the trick. They may even have used uint64_t for storing the 64-bit inodes in the first place.

Anyways, my answer satisfied my curiosity :), let's continue..

In the OS layer, we could use the boot parameter nfs.enable_ino64=0. (Note that this makes the NFS client fake up a 32-bit inode number for readdir() and stat() calls instead of returning a full 64-bit number.)
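
For illustration, on an Oracle Linux / RHEL system, such a kernel boot parameter can be added with grubby (a sketch; check your own boot loader configuration before applying):

# grubby --update-kernel=ALL --args="nfs.enable_ino64=0"
# reboot

After the reboot, the NFS client squashes the 64-bit inode numbers into 32-bit values for readdir() and stat(), which 32-bit binaries like FNDLIBR can handle.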

However, we didn't want to take any actions or risks while there was an unannounced change in question.

At the end of the day, it was just as we thought.

The new NetApp was configured to use 64-bit file handles :) As for the solution, the storage admin disabled this feature and everything went back to normal :)

Again, we still had our own solutions in hand, even if the storage admin couldn't have disabled the feature.

Some extra info:

*
To check whether a problematic file has a 64-bit inode, you can use the ls -li command.
For ex:

$ ls -li erman
12929830884 -rw-r--r-- 1 oracle dba 0 Aug 19 11:43 erman
The inode number is the first column in the output above.

So if you see a number bigger than 4294967295, it means the inode number is a 64-bit number.
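
If you want to hunt for such files on a mount point, find can filter by inode number directly (the path below is just an example):

$ find /mnt/nfs_share -inum +4294967295 -ls

Any file listed by this command has an inode number that does not fit into 32 bits.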

*
To check whether a binary is 32-bit or 64-bit, you can use the file command.
For ex:

$ file /opt/erman
/opt/erman: ELF 32-bit LSB executable

That is it for today :) See you in the next article ...

Thursday, January 9, 2020

Hyperion / EPM -- Enabling TLS 1.2 and LDAPS in Hyperion/EPM 11.1.2.4

I recently enabled TLS 1.2 (HTTPS) and LDAPS for Microsoft Active Directory (MSAD) connections in a mission-critical Hyperion environment. The work was actually quite challenging, so I decided it was worth sharing with you.


In this project, we needed to enable TLS 1.2 on the Hyperion web application side and turn off all the SSL/TLS versions except TLS 1.2 for security reasons. In addition to that, we needed to enable LDAPS for the Hyperion-MSAD connections.

Let's start with the TLS/HTTPS side of the work;

We knew that Hyperion 11.1.2.4 doesn't support TLS 1.2, so the web server that comes built in with Hyperion was unable to speak TLS 1.2.

That's why we needed to add/install a separate web server in front of Hyperion's web server (which was OHS 11.1.1.7).

We also needed to configure this new web server to play the middleman role between the Hyperion clients and the Hyperion web applications (Hyperion's web server, actually).

This new web server had to be an OHS 11.1.1.9.
Note that OHS 11.1.1.9 supports TLS 1.2.

After this configuration, the flow would be: clients -> HTTPS to OHS 11.1.1.9 -> HTTPS to OHS 11.1.1.7.

We had already enabled TLS/SSL in Hyperion. That is, we had enabled TLS/SSL in the current OHS, but it wasn't able to speak TLS 1.2.

References for enabling TLS/SSL in EPM (SSL terminated at OHS):

Steps to Setting Up SSL Offloading with OHS Webserver From EPM 11.1.2.x (Doc ID 1530169.1)
http://learninghyperion.blogspot.com/2015/04/secure-epm-environment-ssl-terminated.html

We ensured that the current OHS couldn't communicate over TLS 1.2, and we tested that by disabling all the other protocols but TLS 1.2 in our browser.


We tried to reach the application login page, but couldn't.

As mentioned, we added a new OHS 11.1.1.9 into the picture and let it do the TLS 1.2 work. That new OHS also assumed the duty of being a bridge between the clients and the old OHS 11.1.1.7.
A reverse proxy...

The final picture became as follows;


Actually, we applied the workaround documented in the MOS note "Does EPM Support TLS 1.2 Communication via OHS? (Doc ID 2179810.1)", and it worked perfectly well.

The operations to implement this workaround were as follows:

1) Download OHS 11.1.1.9 via Patch 20995453.


2) Install OHS by running the OHS 11.1.1.9 installer.

Choose "Install and Configure".
Enter a new Oracle Middleware home location (DO NOT use an existing Oracle Middleware home; do not overwrite anything!).
Choose "Oracle HTTP Server" only, to configure only OHS and OPMN.
Be careful on the "Configure Ports" page, as all OHS and OPMN ports have to be unique.


3) After the installation, first check that you can access the new OHS using HTTP.

4) Then enable SSL in this new OHS by making changes to the httpd.conf and ssl.conf files.

See Configuring Oracle HTTP Server to Use SSL in Fusion Middleware 11g (11.1.1.X) (Doc ID 1226933.1). Note that if the EPM OHS is already SSL-enabled, you should be able to copy the SSL-related changes made to the EPM OHS over to the 11.1.1.9 version of OHS. You can also use the same certificate (wallet), since the new OHS is on the same machine.

5) Configure the new OHS 11.1.1.9 web server as a reverse proxy to OHS 11.1.1.7 by editing the OHS 11.1.1.9 ssl.conf. Just add the ProxyPass and ProxyPassReverse directives as shown below.

Note that in the following example we assume OHS 11.1.1.7 is already SSL-enabled. If not, you can just make the necessary modifications accordingly.

<IfModule mod_proxy.c>
SSLProxyEngine On
# Normal reverse-proxy requirements
ProxyPass / https://<ohs_11.1.1.7_host.domain.com>:<ohs_11.1.1.7_ssl_port>/
ProxyPassReverse / https://<ohs_11.1.1.7_host.domain.com>:<ohs_11.1.1.7_ssl_port>/
ProxyPreserveHost On
ProxyRequests off
# SSL specific reverse-proxy requirements
SSLProxyProtocol ALL -TLSv1.1 -TLSv1.2
SSLProxyCipherSuite HIGH:MEDIUM:!LOW:!NULL:!aNULL:!eNULL:+SHA1:+MD5:+HIGH:+MEDIUM
SSLProxyWallet </oracle/wallet_location>
</IfModule>


6) Start/restart OHS 11.1.1.9 and try to access the EPM login page through OHS 11.1.1.9 using HTTPS.

Example url: https://NEWOHS:443/workspace/index.jsp.

7) Disable all the protocols except TLS 1.2 and try reaching the EPM login page again to ensure that TLS 1.2 is used for the communication between the user's browser and your OHS 11.1.1.9.

Actually, it could be tested by disabling all the protocols but TLS 1.2 in our browsers, as mentioned earlier. But this time we needed to disable all the protocols (except TLS 1.2) at the OHS level, so that no one could reach EPM using lower SSL/TLS versions.

In order to disable all the protocols but TLS 1.2 in OHS, we just edited the ssl.conf of our new OHS server, modified the related line as follows, and restarted OHS:

SSLProtocol -ALL +TLSv1.2
(This directive basically says: accept TLSv1.2 and deny all other protocols in SSL communication.)
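
Besides the browser test mentioned above, the protocol restriction can also be verified from the command line with openssl (the hostname is a placeholder):

$ openssl s_client -connect NEWOHS:443 -tls1_2   # should complete the handshake
$ openssl s_client -connect NEWOHS:443 -tls1     # should now fail with a handshake error

The first connection should succeed and the second one should be rejected, confirming that only TLS 1.2 is accepted.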

So far so good.. 

Let's take a look at the LDAPS side of the work.

You can think of LDAPS as the SSL/TLS-based version of LDAP.

We mostly use LDAP for Active Directory authentication in Oracle environments.

Generally speaking, we gather the credential information from the user, or from the user's operating system, and use it with ldapbind; if we see a successful return from our bind operation, we let the user into our application.

So we authenticate the user against a centralized LDAP directory, and this LDAP directory is mostly Microsoft Active Directory (MSAD).

Most of the time, we have OAM (Oracle Access Manager) and OID (Oracle Internet Directory) between Active Directory and our application, but that is another story and not in the scope of this blog post.

Anyways, we were already authenticating Hyperion users against MSAD.
However, our connection between Hyperion and MSAD was based on LDAP rather than LDAPS, and that was a security vulnerability.

So we had to establish secure communication between our Hyperion EPM servers and the user directory (MSAD LDAP).


We already knew that whenever Hyperion tries to communicate with an SSL-enabled MSAD/LDAP, it uses the trust certificate from its Java keystore (cacerts).

So, in order to enable LDAPS, we imported all the MSAD SSL certificates (by following "How to Import SSL Certificates for Hyperion EPM to Use a SSL Connection to Corporate Directory (Doc ID 1599610.1)").

We imported the MSAD root certificates both into the cacerts keystore file residing in the JRockit directory and into the cacerts keystore file residing in the JDK directory.

We used commands like the following for doing this job ->

keytool -import -alias ldapca -keystore C:\bea\jrockit_160_37\jre\lib\security\cacerts -trustcacerts -file C:\ldapca.cer
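
To verify the import, you can list the certificate back from the same keystore (the default keystore password is "changeit", unless it was changed):

keytool -list -v -keystore C:\bea\jrockit_160_37\jre\lib\security\cacerts -alias ldapca

If the certificate details are printed, the trust certificate is in place.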

After these moves, we restarted all the EPM services and used EPM Shared Services to configure the MSAD connection. We configured it to use LDAPS on port 636 (the default LDAPS port of MSAD), but it failed!

Basically, when we tried to configure the MSAD connection information, we got the following error:

"EPMCSS-05139: Failed to retrieve base DNs. Error communicating to server. Simple Bind Failed. Invalid host, port value."


At this moment, we thought that maybe this was because the MSAD we were trying to talk to was configured for TLS 1.2 only.

Actually, the MSAD admin said that his MSAD configuration allows TLS 1.0 and 1.1 as well, but we still suspected that kind of configuration problem and decided to upgrade our JDK and JRockit on the Hyperion side.

We knew that TLS 1.2 was only supported with JDK 8. On the other hand, our JDK and JRockit versions were 1.6, and a JDK 8 upgrade was not supported with Hyperion 11.1.2.4.

With this in mind, we did some research and reviewed the knowledge base and bug records.
The following MOS document seemed promising;
however, it was recommending a JDK upgrade.

Essbase: Unable to connect to SSL Enabled Windows 2016 MSAD External Directory from Essbase Server. ERROR: JAVAX.NAMING.COMMUNICATIONEXCEPTION: SIMPLE BIND FAILED: LDAPSERVER.COM:636 [ROOT EXCEPTION IS JAVA.NET.SOCKETEXCEPTION: CONNECTION RESET ( Doc ID 2482392.1 )

Yes.. The above document was suggesting a JDK upgrade:


To Upgrade JDK 1.6 to JDK 1.6 Update 181 or higher, apply the steps outlined in Doc ID 2390603.1

To Upgrade JDK 1.6 to JDK 1.7, apply the steps outlined in Doc ID 2351499.1


So we decided to give it a try.

We had our up-to-date backups, and the operation was easy.

We chose to upgrade our JDK to JDK 1.6 Update 181, and this operation was just a matter of downloading the new version, unzipping it, and switching the names of the currently installed JDK directories with the newly downloaded ones.

We upgraded the JDK to 1.6.0_181 and JRockit to 28.3.17 by following Doc ID 2390603.1 and Doc ID 2482392.1, but we were still getting the same error!

At this point, we did the following to troubleshoot the error:
  • Worked with the LDAP admin to make sure that the LDAP host was using SSLv3 or TLS 1.0 and not SSLv2Hello.
  • Made sure there was no firewall issue; the LDAPS port (636) was open in the firewall between the EPM server and the LDAP server.
  • Made sure that the UserBaseDN, principal, and credentials were configured properly. Also made sure that the server hosting LDAP was reachable and that the port (636) was open in both directions.
  • Installed an external LDAP browser (http://www.ldapbrowser.com/download.htm) on the server where Shared Services was installed and tested the connection to MSAD. (Note that the external LDAP browser connected to MSAD successfully.)
  • Tried re-importing CA certificates to the keystores used by EPM's Weblogic.
We couldn't get a solution from the things I just listed above, but during this troubleshooting work, we discovered something quite interesting.

Actually, the hostname that we were using for connecting to MSAD didn't belong to MSAD itself. So there was something between EPM and MSAD, and that thing was a load balancer!

The customer was using a load balancer in front of its MSAD! Yes... But not only that... The customer was using that load balancer for offloading the SSL work between MSAD and the rest of the network.

This made everything clear.

We needed to import the load balancer's root certificate in order to speak LDAPS with MSAD.

Actually, we needed both the load balancer's root certificate and MSAD's root certificate. This was because, in case of a failure in the load balancer, we would still need to reach MSAD and be able to speak TLS with it.

As for the solution, we gathered the root certificate of the load balancer and imported it into our cacerts files, restarted the EPM services, and it was done! LDAPS was enabled! :) Interesting, right? :)
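
By the way, a quick way to catch this kind of situation is to ask the endpoint itself which certificate chain it presents (the hostname below is a placeholder for the LDAPS address your application actually connects to):

$ openssl s_client -connect msad-vip.customer.com:636 -showcerts

In our case, this output would have immediately revealed the load balancer's certificate instead of the MSAD certificate.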

Once again, we realized how important the communication between IT employees is!

So if you are working on a new environment, always ask for a detailed infrastructure architecture diagram that shows the connections between your servers and the other servers in the network.
If you can't get that info from the customer, then try to find it out in your own way.

Almost forgot! :) One more thing for the TLS 1.2 enablement!
We had the firewall admin close all the ports between the clients and OHS 11.1.1.7. We don't want a naughty client reaching the application over TLS 1.0 or SSLv3 by bypassing our new OHS 11.1.1.9 :)

Monday, December 23, 2019

ZDLRA -- Zero Data Loss Recovery Appliance "Fast. Integrated. Zero Data Loss" Engineered for Data Protection !

We can't underestimate the importance of backup and recovery in our business lives. Actually, not only in business life but in personal life as well.

In this blog post, I will concentrate on the business side, of course :) I will introduce an end-to-end solution for the business side, specifically for Oracle database administrators.

As you know, backup and recovery are among the most important things for DBAs.
As a DBA, you can't just say, "I don't have any backups to restore or to use for recovery."

If backup and recovery are our responsibility, and there's a problem with the backups of a production environment just when a restore is needed, then we are in real trouble.

That is, if it is our responsibility to manage Oracle database backup and recovery, then we don't have luxuries like not having yesterday's backup, or having a corrupted backup when it is required.

This is also very important for IT managers and directors, as backup and recovery are their responsibility in the first place.

Recovery is just as important as having backups. It must be ensured that our backups can be restored and used for recovery quickly enough to comply with the SLAs.

Backups should be taken continuously, and checks should also be continuous to ensure recoverability at all times.

In order to fulfill these important requirements, we use backup and recovery tools. So we need to have them, and we need to use them properly and efficiently.

We need to schedule our backups, store them on disk, move them to tape later, and restore them from there when necessary. We should be able to integrate several 3rd-party backup solutions, and we need to make them work with RMAN scripts, database storage policies, retention policies, and so on.

We should install the tools, configure them, integrate them, use them, and manage them, etc. :) Lots of things to do and to manage, right? :)

From this perspective, we need to be both responsible and well skilled.

However, even though we do all we can, we may still have hard times when there is an urgent need to restore our backups. As we can't back up continuously, we may have gaps in our backups, and we live with that risk from time to time.

Even if everything is in place, we may see a performance impact on our production database systems while following these strictly defined backup policies.

In these backup tasks, and in this data lifecycle management mechanism, there are also areas invisible to DBAs. As DBAs and system admins, most of the time we can't even answer basic questions like: Where is your backup stored? Which tape do you need for recovering last week's backup? And so on.

There are many more things to consider, but I will stop now :)

I know I am a little pessimistic about these subjects, but these are the facts when using traditional backup solutions.

Okay.. Here comes the solution to all these problems: Oracle Zero Data Loss Recovery Appliance,
ZDLRA for short.


ZDLRA is engineered for data protection! It is based on Exadata X8! Think about Exadata's flash, Exadata's I/O throughput, and Exadata's internal network bandwidth (we even have RDMA over Converged Ethernet now).

Almost all the database versions are supported: 11G, 12C, 18C, and 19C.
It eliminates data loss by providing real-time protection. That is, it acts like a Data Guard destination and continuously backs up the redo changes. (This is called Continuous Redo Transport.)

ZDLRA eliminates the production impact and reduces both backup/restore times and complexity. It backs up only the changed data and stores it in an intelligent way, so it restores efficiently. We can also offload our tape backup processes to this machine (we can even deploy backup agents on it).

ZDLRA delivers cloud-scale protection. That is, ZDLRA has the ability to serve data protection to thousands of databases, and it has the ability to scale without downtime.
We use policy-based data protection in ZDLRA, and we have end-to-end visibility and control over the protected databases.

ZDLRA protects from disasters. It can replicate data in real time to remote ZDLRA environments and to the cloud. This feature is in addition to tape archival. These replications are done transparently and in the background.

There are lots of things to mention when it comes to ZDLRA: technologies like delta push and delta store, the incremental-forever strategy, and more.

Well. So far so good... I think I have given an adequate introduction, and now I leave you alone with my presentation about ZDLRA :)












Friday, December 6, 2019

EBS -- EBS 12.1.3 / Oracle Database 19C Upgrade

It is time to write about EBS. Here we are, talking about upgrading EBS databases to 19C.

These days, we are mostly dealing with 11.2.0.4/12C databases in the EBS world, so it is time to upgrade those databases.

Since a big majority of EBS customers are still using EBS 12.1.3, this blog post will be about upgrading an EBS 12.1.3 database to 19C.

In this blog post, I will try to give you the facts about these kinds of projects, and then a consolidated action plan for a quick overview.

Facts :
  • EBS 12.1.3 is certified with Oracle Database 19C (as of September 2019).
  • EBS 12.1.3 is certified with multitenancy: 1 CDB and 1 PDB. So if you are considering a 19C upgrade in an EBS environment, you should know that your 19C EBS database will be multitenant. Oracle certifies EBS with 19C only under this condition, and Oracle won't certify EBS with a non-CDB 19C database even in the future.
  • With the 19C upgrade, UTL_FILE_DIR becomes obsolete, so UTL_FILE_DIR-based file access mechanisms should be replaced with database directory-based file operations.
  • The upgraded environment should be tested very carefully during the test iterations (custom code should be reviewed). Performance problems may appear, and if they do, they should be resolved, especially for ISG (Integrated SOA Gateway).
  • For Linux customers (most of our customers are on Linux), 19C requires Oracle Linux 7 or Red Hat Linux 7. So you need to consider an OS upgrade if you are on a lower version. Note that the OEL 7 upgrade can be done in place if you are on a certain OS software level, but I find it risky. Anyway, this item must be considered carefully.
  • The EBS 12.1.3 apps tier is also certified with OEL & RHEL 7. A Linux upgrade for the apps nodes is not a must, but it is a nice-to-have. Besides, this allows single-node environments (both apps and DB on the same server) to be upgraded without the need for splitting the services across multiple nodes.
  • I can't give a precise downtime requirement for such an upgrade. That's why I recommend measuring it during the test iterations.
  • As EBS 19C upgrades require a CDB-based DB tier, we need a high level of understanding of CDB/PDB and multitenant Oracle databases. Actually, we need to learn it by heart, or as the Germans say, "wir (APPS DBAs) alle müssen es auswendig lernen" :)
  • It seems Oracle made our job easy again. I mean, things like converting a non-CDB to a PDB are done using perl scripts :)
Okay, let's take a look at the action plan from the surface:

Preparation:
  • Applying prerequisite patches for the upgrade (19C interop, AD/TXK delta patches, etc.).
  • Applying Warehouse Builder patches (optional)
  • Applying Autoconfig patches.
  • Applying   Patch 6400501: APPSST11G:1203:NOT ABLE TO COMPILE FORMS LIBRARRY WITH 11G DB (For Linux)
  • Applying Patch 12964564:R12.FND.B - Enabling the Oracle Database 11g Case-Sensitive Password Feature for Oracle E-Business Suite Release 12.1.1+ (optional, for enabling case sensitive passwords)
  • Creating the initialization parameter setup files (running txkOnPremPrePDBCreationTasks.pl)
  • Install 19C RDBMS Software (Software only)
  • Create Nls9i data (running $ORACLE_HOME/nls/data/old/cr9idata.pl)
  • Applying 19C RDBMS Home Patches (almost 15 DB patches)
  • Creating a new appsutil.zip and copying it to the required folders/servers.
  • Copying orai18n.jar file to the required folders.
  • Create a CDB (without any PDBs)
  • Patching CDB, synching it with the 19C home (running datapatch)
  • Creating the CDB MGDSYS schema (running catmgd.sql)
  • Creating the CDB TNS files (running txkGenCDBTnsAdmin.pl)
  • Configure Transparent Data Encryption for CDB (conditional/optional)
  • Shutdown CDB
  • Handling UTL_FILE_DIR (for the required UTL_FILE migration)
  • Shutting down Application services and application tier listener in source
  • Drop SYS.ENABLED$INDEXES (conditional)
  • Disabling Database Vault (conditional)
  • Exporting OLAP analytical workspaces (conditional)
  • Removing the MGDSYS schema (conditional -- running catnomgdidcode.sql)
Upgrade + Conversion :
  • Upgrading the DATABASE (11.2.0.4 to 19C)
    • Backing up database
    • Upgrading database using DBUA
    •  Performing post-upgrade tasks
  • Running patch post-install instructions (for the patches applied in earlier steps)
  • Compiling PL/SQL code natively (optional)
  • Importing OLAP analytical workspaces (conditional)
  • Running adgrants.sql
  • Granting create procedure privilege on CTXSYS (adctxprv.sql)
  • Compiling invalids
  • Granting data store access
  • Validating the WF rulesets (wfaqupfix.sql)
  • Gathering SYS stats
  • Creating the new MGDSYS schema (conditional) -- running catmgd.sql
  • Creating Demantra privileges (conditional)
  • Exporting Master Encryption Key (conditional)
  • Converting the upgraded database to Multitenant
    • Creating the PDB descriptor
    • Disabling the ENCRYPTION_WALLET_LOCATION sqlnet.ora entry (conditional)
    • Updating the CDB initialization parameters
    • Checking for PDB violations (review and resolve the errors, if any)
    • Creating the PDB (running txkCreatePDB.pl; see the sketch after this list)
    • Running the post PDB script (txkPostPDBCreationTasks.pl)
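
To give a feeling of the PDB creation step referenced above, here is a rough sketch of how those scripts are invoked (they ship with the EBS appsutil and prompt for the environment-specific values, so treat this as illustrative rather than exact syntax):

$ perl $ORACLE_HOME/appsutil/bin/txkCreatePDB.pl
$ perl $ORACLE_HOME/appsutil/bin/txkPostPDBCreationTasks.pl

The interoperability note referenced at the end of this post (Doc ID 2580629.1) documents the exact parameters and the order of execution.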
Post-Upgrade tasks :
  • Modify initialization parameters (according to the EBS-Database initialization parameters MOS document)
  • Run Apps Tier autoconfig (with some additional context file modifications)
  • Apply post-upgrade WMS patch (Patch 18039691)
  • Recreate custom database links
  • Apply latest RDBMS Release Update (Database Release Update 19.5.0.0.0, OJVM Release Update Patch 19.5.0.0.0 and other 19.5.0.0.0 patches)
  • Restart application services
Post-Upgrade Support :
  • Babysitting
  • Error correction & Throubleshooting
Okay, we are at the end of this blog post. Lastly, I am sharing the MOS document that should be followed line by line to do such an upgrade. Follow it closely; it may redirect you to other documents when necessary, but you will come back to it after taking the actions in those documents. So your main document is:

"Interoperability Notes: Oracle E-Business Suite Release 12.1 with Oracle Database 19c (Doc ID 2580629.1)"

Friday, November 29, 2019

Exadata/RDBMS -- Database Release Schedule + Support Dates - 19C + Exadata Image Upgrade (to the latest version // 19.3.1)

In order to get long-term support, customers have started to upgrade their Oracle databases to 19C.

"Oracle Database 19C is the most current long-term support release".


Read -> "Release Schedule of Current Database Releases (Doc ID 742060.1)" for more info about the release schedule and support dates.

Exadata customers are also planning to upgrade, but they have an extra prereq: their image version should support Oracle Database 19C.

A 19C database upgrade actually brings the need for 19C Grid upgrades and an Exadata image version upgrade. So this year, there will be lots of upgrade operations in the red stack :)

11GR2 works perfectly well, and 18C is an option, but customers prefer long-term support and staying up to date, so 19C is currently the best upgrade option. Especially for the customers who closely follow the technological improvements in the Oracle database and/or who try to catch up, as the developments in areas outside the database (their integration targets, their application stack, and so on) are moving very fast.

The premier support windows for 20C and 21C seem to be short, like 18C's. However, 22C will be the next long-term support release after 19C.

Anyways, let's go back to our original topic: upgrading an Exadata X3-2 image version to 19.3.1.


Currently, we don't have an image version that supports Oracle Database 19C on Exadata Cloud at Customer (ECC), but we do have an image version that supports 19C databases on Exadata on-prem.
This post is based on an image upgrade that was recently done on an Exadata X3 environment.

So, as you may guess, I recently upgraded an Exadata X3's image version to 19.3.1 (currently the latest release).

My customer needed an Exadata environment that supports 19C databases, and of course 19C Grid as well.

So, before upgrading the Grid and database versions, we needed to upgrade the Exadata image version. The machine was a Gen-3 (X3), and that's why I had some doubts about this upgrade. Fortunately, it went perfectly well :)

I already documented the operations required for upgrading the Exadata image version in the following earlier blog posts ->

https://ermanarslan.blogspot.com/2018/03/exadata-upgrading-exadata-software.html
https://ermanarslan.blogspot.com/2018/07/exadata-image-grid-122-upgrade.html

So, in this blog post, I will try to give you some additional information about these kinds of operations.

In order to upgrade the image version of an Exadata, we must check the documentation and download the correct patches.

For 19.3.1, the documentation is as follows:

Latest Image Support Note : Exadata 19.3.1.0.0 release and patch (30441371) (Doc ID 2592151.1)

Exadata 19.3.1 supports the following Grid and Database versions:

Oracle Grid Infrastructure:
19.4.0.0.0.190716 *
18.7.0.0.0.190716 *
12.2.0.1.0.190716 *
12.1.0.2.0.190716 *
Oracle Database:
19.4.0.0.0.190716 *
18.7.0.0.0.190716 *
12.2.0.1.0.181016
12.1.0.2.0.180831
11.2.0.4.0.180717

Well.. In order to upgrade our image versions, we download the cell, database server, and InfiniBand network software and image files, as shown in the picture below.
Note that this is a bare-metal Exadata and it is an X3, so it doesn't have an RDMA over Converged Ethernet (RoCE) network switch, but it has an InfiniBand network switch.


We use the patch tool (patchmgr) that comes with the cell server patch to patch the cell nodes. However, we download and use a separate patch tool/patchmgr for patching the compute/DB nodes (Patch 21634633 in this case).


We upgrade the Exadata image version by executing the 3 main phases given below:
  1. Analysis and gathering info about the environment.
  2. Pre-check
  3. Upgrading the Images 
So, we execute the 3 main phases above, and while executing these phases, we actually take the following 7 actions:

1) Gathering info about and checking the current environment:
image info, DB home & Grid home patch levels, opatch lsinventory outputs, SSH equivalency check, ASM disk group repair time check, NFS shares, crontab outputs, .bash_profile contents, spfile/pfile backups, and controlfile traces.

2) Running exachk:
downloading the up-to-date exachk and running it with the -a argument.
After running exachk, analyzing its output and taking the necessary actions, if there are any.

3) Creating the necessary group files for patchmgr (cell_group, dbs_group, ibswitches.lst).

4) Running the patchmgr precheck. After analyzing its output, taking the necessary actions (if there are any). For example, if there are 3rd-party RPMs, we may decide to remove them manually before the upgrade.

"-Note that, this time I had to upgrade the infiniband switch release to an intermediate version.."
"-Also, I had to change the Exadata machine's NTP server :) -- full stack.. "

5) Running patchmgr and upgrading the images (we do the upgrade in rolling mode).

Before running patchmgr, we kill all the ILOM sessions (active ILOM sessions may increase the duration of the upgrade).
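
For reference, the cell patching commands typically look like the following (a sketch; always take the exact syntax from the patch README of your target release):

# ./patchmgr -cells cell_group -patch_check_prereq -rolling
# ./patchmgr -cells cell_group -patch -rolling

The first command is the precheck mentioned in step 4, and the second one performs the actual rolling image upgrade on the cells listed in the cell_group file.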

6) As post-upgrade actions: reconfiguring NFS & crontabs, and reinstalling the 3rd-party RPMs (if removed before the upgrade).

7) Post-check: checking the databases, their connectivity, and their alert log files.
Note that we also run exachk once again and analyze its output to ensure that everything is fine after the image upgrade.

After upgrading the image version to 19.3.1, you will see something like the following when you check the versions:

[root@dbadm01 image_patches]# dcli -g all_group -l root imageinfo| grep -E 'Image version|Active image version'
dbadm01: Image version: 19.3.1.0.0.191018
dbadm02: Image version: 19.3.1.0.0.191018
celadm01: Active image version: 19.3.1.0.0.191018
celadm02: Active image version: 19.3.1.0.0.191018
celadm03: Active image version: 19.3.1.0.0.191018
[root@dbadm01 image_patches]# dcli -g ibswitches.lst -l root version |grep version |grep "SUN DCS"
csw-ibb01: SUN DCS 36p version: 2.2.13-2
csw-iba01: SUN DCS 36p version: 2.2.13-2

Note that the versioning of InfiniBand images is different from the versioning of Exadata images.
So 2.2.13-2 is the latest InfiniBand image version, and it is the one compatible with the 19.3.1 Exadata image version.

Well.. That is it for now :) for Exadata :)

Before coming to the end, one more thing...
After upgrading the Exadata image version, we may want to upgrade our Grid Infrastructure as well.

In order to upgrade our Grid Infrastructure after upgrading our Exadata image version, we follow the MOS note below:


"19c Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running on Oracle Linux (Doc ID 2542082.1)"

---

That's it for today :) I hope this has been useful to you :)

See you in my next blog post (which will probably be about 19C database upgrades :)

Monday, November 18, 2019

RAC & Exadata // Some Tips & Tricks for the migration ... for the performance

I am currently dealing with a RAC/Exadata migration project.
This time it is a mission-critical single-node environment. The environment is fully CPU-bound, but it also has very fast all-flash arrays in its storage layer.
I am dealing with a source environment that has lots of connections doing lots of things and using lots of CPU cycles.
The application is not RAC-aware either. With all the things given above, you can guess that the application cannot scale out in the database layer very easily.
In order to migrate this kind of database to RAC/Exadata, we must take some extra actions; some improvements, actually. Some changes for RAC, for scaling out properly.
Yes, we must think about all 3 layers: application, database, and storage.
Especially if the target is an Exadata, we must concentrate intensively on the application layer.
In this blog post, I will quickly give you some tips and tricks that you may use before migrating a database environment similar to the one I just described.

NOCACHE sequences: ordered sequences do not scale well.
Use non-ordered, cached sequences if sequences are used to generate the primary keys:
ALTER SEQUENCE ERMAN_SEQ1 ... CACHE 10000;
If you don't cache them, you may see EQ or SQ enqueue contention.
However, know that if you use non-ordered, cached sequences, you may get out-of-order values in the table columns that are fed by these sequences.
So, if you can't use cached, non-ordered sequences with your application, then consider an active-passive configuration. You should consider running that part of your application (the code that uses the sequences) on only one instance of your RAC.

Missing cluster interconnect parameter: making an Oracle database running on Exadata use static InfiniBand interconnect IP addresses relies on setting the cluster_interconnects parameter.
If it is not set, the Oracle database by default chooses the HAIP InfiniBand addresses for the cluster interconnect, and that is not recommended.

This recommendation can also be seen in an exachk report.
That is, if we don't set the cluster_interconnects parameter in the database and leave the Oracle database to use the default HAIP interconnects, then exachk will report a failure saying "Database parameter CLUSTER_INTERCONNECTS is NOT set to the recommended value".

The CLUSTER_INTERCONNECTS database parameter should be a colon-delimited string of the IP addresses returned from /sbin/ifconfig for each cluster_interconnect interface returned by oifcfg. In the case of an X2-2, it is expected that there would only be one interface and therefore one IP address; this is used to avoid the Clusterware HAIP addresses. For an X2-8, the 4 IP addresses should be colon-delimited.

So, use ifconfig to determine the IP addresses assigned to the ib0 and ib1 interfaces (not ib0:1 or ib1:1) on all the RAC nodes, set these IP addresses in a colon-delimited string for all the instances, and restart the database.
For example:

alter system set cluster_interconnects='ib_ipaddress1ofnode1:ib_ipaddress2ofnode1' scope=spfile sid='SIDinst1';
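
After the restart, the interconnect IPs actually in use can be verified from the database side (gv$cluster_interconnects is a standard view):

SQL> select inst_id, name, ip_address, source from gv$cluster_interconnects;

The SOURCE column should show that the addresses come from the cluster_interconnects parameter rather than from HAIP.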

Hard parses: use soft parsing to save and reuse the parse structure and execution plan; with soft parsing, metadata processing is not required.
When soft parsing, you don't parse and describe; you only execute and fetch. Yes, we are talking about eliminating cost here.

Avoid hard parsing by using prepared statements and bind variables.

Instead of:
String query = "SELECT LAST_NAME FROM ERMANS WHERE ERMAN_ID = " + generateNumber(MIN_ERMAN_ID, MAX_ERMAN_ID);
PreparedStatement prepStmt = connection.prepareStatement(query);
ResultSet resultSet = prepStmt.executeQuery();
Change to:
String query = "SELECT LAST_NAME FROM ERMANS WHERE ERMAN_ID = ?";
PreparedStatement prepStmt = connection.prepareStatement(query);
int n = generateNumber(MIN_ERMAN_ID, MAX_ERMAN_ID);
prepStmt.setInt(1, n);
ResultSet resultSet = prepStmt.executeQuery();

Caching soft parses: although soft parsing is not expensive, it can still take some time; consider using statement caching.
For example: oracleDataSource.setImplicitCachingEnabled(true) together with connection.setStatementCacheSize(10);

High number of concurrent sessions: in order to control concurrent sessions, consider using connection pooling. Consider limiting the pool and the processes to avoid connection storms, and ensure that the load is balanced properly over the RAC nodes.

High number of database sessions: more processes mean higher memory consumption, bigger page tables, a higher risk of paging, and higher system CPU time. Consider using connection pools, and consider releasing connections when your work is finished.

Small log buffer size: consider making the log_buffer parameter bigger if you see log buffer space waits even on your current platform.

High interconnect usage: using the interconnect is expensive even if you use InfiniBand for the interconnect communication. It is expensive because it depends on lots of things.
For example, even the things that LGWR does are important while we are using the interconnect. That is, when blocks with pending changes are pinged by other instances, the related redo must be written to the log before the block can be transferred. So in such an environment, where you have chatty processes that manipulate and read the same data blocks, you may even consider having the sessions that frequently manipulate the same data connect to the same RAC node all the time.

Performance tuning: it just never ends. Like peeling an onion... there's always another layer. Consider using partitioning, compression, and indexes (you may even consider dropping some of them), and so on. Consider implementing the RAC best practices.
Global hash-partitioned indexes and locally partitioned indexes both help you achieve better cache locality.

Re-considering current underscore parameters: any reasons for them? For example: _disk_sector_size_override=TRUE (YOU DON'T NEED TO SET IT -- see MOS note 1447931.1).