Wednesday, September 17, 2014

Exadata-- TNS-12170 error diagnostics -- a detailed approach

It has been a while since my last article about Exadata.
This one is about diagnosing an ORA-12170/TNS-12170 error on Exadata.. I find it quite interesting, as the cause of the problem turned out to be an unexpected one..
This post will read like a short story, but I will try to give detailed information along the way as well.
Anyways.. The story is based on a problem reported by one of my customers..
The customer has a quarter-rack Exadata X3 machine, configured to host an Oracle EBS 11i production database..
We configured this Exadata machine and implemented EBS on it. We migrated the EBS database from Solaris to this Exadata machine, and migrated the EBS Apps Tier from Solaris to a new Oracle Linux server as well.
It was a successful project, as we did hard and complicated work during the migration (the source systems were Solaris, the database was 10gR2, and so on).
As you may guess, the customer was also using Discoverer for reporting purposes.
Discoverer Desktop edition had been running all over the place, and no issues had been reported about this product until yesterday.

The problematic situation arose when a group of employees tried to connect to the mentioned EBS database located on Exadata..
They were using the Discoverer tool to connect to the database, and were encountering ORA-12170/TNS-12170 errors during their connection attempts..

I was already familiar with the TNS-12170 error, and knew it was related to the client-server network of the corresponding database environment..
Even so, I made a quick search about this error, and found the following;

Cause: The server shut down because connection establishment with a client failed to complete within the allotted time interval. This may be a result of network/system delays; or this may indicate that a malicious client is trying to cause a Denial of Service attack on the server. 

Action: If the error occurred because of a slow network/system, reconfigure the SQLNET.INBOUND_CONNECT_TIMEOUT parameter in sqlnet.ora to a larger value. If a malicious client is suspected, use the address in sqlnet.log to identify the source and restrict access. Note that logged addresses may not be reliable as they can be forged (e.g. in TCP/IP). 

So, as you see above, the search provided the familiar information about the error, and a generic action plan that can be checked and implemented accordingly..
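For the slow-network case, the generic fix suggested above would be raising SQLNET.INBOUND_CONNECT_TIMEOUT on the server side. A minimal sketch of the relevant sqlnet.ora entry follows; the 120-second value is an arbitrary example, not a recommendation:

```
# $ORACLE_HOME/network/admin/sqlnet.ora (server side)
# Give clients up to 120 seconds to complete connection establishment
SQLNET.INBOUND_CONNECT_TIMEOUT = 120
```

This parameter is usually tuned together with INBOUND_CONNECT_TIMEOUT_<listener_name> in listener.ora. In this particular case, though, the timeout was not the real cause, as we will see.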

So far so good...
On the other hand, this was a quite new Exadata... It had been configured by professionals and EBS was running without a problem.. In fact, all the other Discoverer clients were connecting to the database without any problems..
These facts made me think that there had to be a specific network problem between these problematic clients and the Exadata Database Machine..

Here is what I did for the analysis;

First, I checked the firewall on the db nodes and did not see any deny rule there.
Then I thought that using Wireshark to capture the client-server network communication and see where it broke might be useful. But I quickly gave up on that thought when I saw the InfiniBand network device configuration..
The InfiniBand bonding configuration was as follows;

BONDING_OPTS="mode=active-backup miimon=100 downdelay=5000 updelay=5000 num_grat_arp=100"

So the InfiniBand/private interconnect had its own network..
Then I realized that these problematic clients were on the same network.. I mean, their IP addresses fell into that same range..

Afterwards, I took a look at the route configuration of the db nodes in this Exadata machine.

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref Use Iface
                *                               U     0      0   0   eth0
                *                               U     0      0   0   bondeth0
                *                               U     0      0   0   bondib0
                *                               U     0      0   0   bondib0
default                                         UG    0      0   0   bondeth0

And yes! The cause was there.. The route for all the IP addresses in that network was directed through the bondib0 device..
bondib0 is the bond device of the internal network in Exadata..
So it was normal for these clients not to be able to connect to the databases on this Exadata, as the InfiniBand network is an internal network...

It is needless to say that there was bad luck or a misconfiguration in the network layer of this environment..
I mean that the InfiniBand network should never have been used for clients; bondib0, being a private network, should have been non-routable and unique..

On the other hand, as I was the one facing this situation, I needed to solve it :)

I knew that the network address is derived from the IP address and the subnet mask.

For example: if you take a private interconnect IP address and AND it with the subnet mask, you get your network address, and this affects your route configuration..

Exactly as follows;

010 = 00001010
252 = 11111100
--------------- AND
008 = 00001000  --> so this octet of the network address is 8
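The octet arithmetic above can be checked directly in the shell; this is just a sketch of the AND operation on a single octet, using the values from the example (the full calculation would be done per octet of the address and mask):

```shell
# AND one address octet with the matching netmask octet to get
# the corresponding octet of the network address.
octet=10    # address octet:  00001010
mask=252    # netmask octet:  11111100
network=$(( octet & mask ))
echo "$network"    # prints 8 (00001000)
```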

Related MOS notes:
The Basics of IPv4 Subnet and Oracle Clusterware ( Doc ID 1386709.1 ) 
How to calculate the correct subnet for an interface ( Doc ID 1059759.1 ) 
How to Validate Network and Name Resolution Setup for the Clusterware and RAC ( Doc ID 1054902.1 ) 

So, in order to fix this, there were three possible ways/methods;


1) Changing Exadata's InfiniBand network.
2) Changing the network of the problematic clients.
3) Adding host routes on the Exadata db nodes to override the general route configuration for those clients.


1) Changing the Exadata InfiniBand network was a risky thing, and it wasn't worth changing it to make 15-20 Discoverer clients happy.
Additional info:
The steps are documented in the Oracle Exadata Database Machine Owner's Guide 11g Release 2, page 7-60,
and in Changing IP addresses on Exadata Database Machine ( Doc ID 1317159.1 )

2) In my opinion, this was the best way, but the customer didn't prefer it.

3) This seemed like the second best way in this situation, so I applied this method as follows..

To add the specific routes, I used the route command.
I executed the following commands on the db nodes, one by one.

route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
The above IP addresses were those of the problematic Discoverer clients.
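After adding a host route, the kernel's routing decision can be verified with ip route get. A hedged example; 127.0.0.1 is used here only so the command runs anywhere, whereas on the db nodes you would query one of the actual client IPs:

```shell
# Ask the kernel which route/interface it would choose for a destination.
ip route get 127.0.0.1
```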

Also, to make these routes permanent, I added them to the /etc/rc.local files of the db nodes as follows;

#BEGIN DO NOT REMOVE Added by Oracle Exadata
if [ -x /etc/rc.d/rc.Oracle.Exadata ]; then
. /etc/rc.d/rc.Oracle.Exadata
fi
#END DO NOT REMOVE Added by Oracle Exadata ###########

#DO NOT REMOVE Added by Erman  :)##
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0
route add -host dev bondeth0

I had to add these routes in rc.local because the route command has no -p (persistent) option on Linux.. The -p option is only available on Windows :)
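As a side note, on Oracle Linux / RHEL there is also a network-scripts alternative to rc.local: a route-<interface> file that is re-applied whenever the interface comes up. A sketch, assuming the bondeth0 interface and a hypothetical client IP (the real client addresses are not shown in this post):

```
# /etc/sysconfig/network-scripts/route-bondeth0
# One "ip route" style argument list per line; 10.10.10.21 is a placeholder.
10.10.10.21/32 dev bondeth0
```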

So that was it. The TNS-12170 errors disappeared immediately after adding those routes..

