Sunday, January 31, 2021

EBS -- Using Python to catch and terminate runaway forms runtime processes / Runaway Forms Runtime Handler

Whenever I need to develop some tools to ease dev ops or administration, I usually use Perl and some Bash scripts to do the job.. Sometimes, when I need to go a little deeper, I use C, and when I need some GUI in client-server mode, I use.. (when I need to write a TUI, I use the dialog utility in Linux as well).. It's been a long time since I used Python. As far as I remember, the last time I used Python was in my senior year at university.

So, today I wanted to make a change and wrote a Python program to detect runaway forms processes (frmweb processes that don't have a database session but have high cumulative CPU time) and terminate them.. 

Actually, I could do almost the same thing with 5-10 lines of Perl, Python or Bash code. But! My goal was to prepare something to help friends in the Oracle world who are thinking of programming a system and have an eye on Python. So, I wrote this program a little longer and in tutorial mode. In this context, I tried to use as many features as possible. (functions, logging, 2d array-like objects, iterations on arrays, Oracle db connection, environment variable handling inside the code and so on)

Reports bugs to -> rman.arslan@gmail.com :)

I shared the code/script in the next paragraphs of this article (under the heading RUNAWAY FORMS RUNTIME HANDLER), but first let's see an example output that the code generates when we execute it.. Note that you can also see the logfile that the code produces below;


We have some features and I listed them here -> 
  • Handle LD_LIBRARY_PATH environment variable
  • Oracle DB Connectivity 
  • Get Apps password as an input without showing it in the command line.
  • Batch and interactive modes.
  • Connect to DB once and use that connection while iterating our 2d array.
  • Identifying runaway forms processes (having a process id, but not having a db session & having a high cumulative cpu time > 2h)
  • Parsing ps output, storing it in a 2d array. 
  • Logging -- date + findings/actions

RUNAWAY FORMS RUNTIME HANDLER:

#!/usr/bin/python
#First things first; we tell our shell -> use python to interpret and run this script.. This way, we don't need to write python in front of the script name every time we execute this script
#usage : ./runaway_frmweb_handler.py
#we run the script with the forms server OS user / EBS application owner OS user.. ex: applmgr

"""
There may be some cases where we have forms processes left over and running for a long time without doing anything.
In most of the cases, we see them spinning on the CPU and we identify them by their huge CPU usage -- time.
This script is used to identify and kill those types of forms processes in order to decrease the unnecessary load on the application servers.
Tested on EBS R12 , but it needs to be tested more..
Script Creation Date : 24.01.2021
Last Modification Date : 27.01.2021
Lang: python
Author: Erman Arslan

Features:
Handle LD_LIBRARY_PATH environment variable
Oracle DB Connectivity 
Get Apps password as an input without showing it in the command line.
Batch and interactive modes.
Connect to DB once and use that connection while iterating our 2d array.
Identifying runaway forms processes (having a process id, but not having a db session & having a high cumulative cpu time > 2h)
Parsing ps output, storing it in a 2d array. 
Logging -- date + findings/actions
"""

######################################################
############## IMPORTING MODULES ##################
######################################################

"""
We import the modules that we need to use in our code. The import command is similar to #include in C/C++ ..
We could also have imported only the required objects (rather than importing the whole module) - using from <module_name> import <names>
Note that we also import numpy. Numpy provides a high-performance multidimensional array object, and tools for working with these arrays.
In order to have numpy on our server, we installed pip and then, using pip, we installed numpy.
By using the as keyword, we give numpy an alternate name and we use that alternate name in our code. (just to make it easier to write the name, we use np as an alias here..)
 
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
pip install numpy
"""

import subprocess
import os
import sys
import cx_Oracle
import numpy as np
from datetime import datetime
from datetime import date
import logging
import getpass

####note that, we set the LD_LIBRARY_PATH to the instant client directory. We handle it.. We set it to the directory where the oracle client libraries reside.

if ('LD_LIBRARY_PATH' not in os.environ or '/oracle_client/instantclient_12_1' not in os.environ['LD_LIBRARY_PATH']):
   os.environ['LD_LIBRARY_PATH'] = '/oracle_client/instantclient_12_1'
   try:
    os.execv(sys.argv[0], sys.argv)
   except Exception as exc:
    print 'Failed re-exec:', exc
    sys.exit(1)

"""
We install cx_Oracle module, as well.. pip install cx_Oracle==7.3
for EBS 12.1, we install the 64 bit oracle client, as we have 32 bit oracle homes on the apps nodes.. If we don't have a 64 bit oracle client in place, we get the following error:
cx_Oracle.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: wrong ELF class: ELFCLASS32". See https://oracle.github.io/odpi/doc/installation.html#linux for help

instantclient-basic-linux.x64-12.1.0.2.0.zip
[root@ebstestdb oracle_client]# unzip instantclient-basic-linux.x64-12.1.0.2.0.zip
Archive:  instantclient-basic-linux.x64-12.1.0.2.0.zip
  inflating: instantclient_12_1/adrci
  inflating: instantclient_12_1/BASIC_README
  inflating: instantclient_12_1/genezi
  inflating: instantclient_12_1/libclntshcore.so.12.1
  inflating: instantclient_12_1/libclntsh.so.12.1
  inflating: instantclient_12_1/libipc1.so
  inflating: instantclient_12_1/libmql1.so
  inflating: instantclient_12_1/libnnz12.so
  inflating: instantclient_12_1/libocci.so.12.1
  inflating: instantclient_12_1/libociei.so
  inflating: instantclient_12_1/libocijdbc12.so
  inflating: instantclient_12_1/libons.so
  inflating: instantclient_12_1/liboramysql12.so
  inflating: instantclient_12_1/ojdbc6.jar
  inflating: instantclient_12_1/ojdbc7.jar
  inflating: instantclient_12_1/uidrvci
  inflating: instantclient_12_1/xstreams.jar

We also soft link the library: cd /oracle_client/instantclient_12_1; ln -s libclntsh.so.12.1 libclntsh.so
Note that we already imported the cx_Oracle module above..
"""

######################################################
############## FUNCTION DEFINITIONS ################
######################################################

# We define our kill, db_conn and a db_check functions just to use some functions in python :)
# Note that, we create a single connection and use that connection while iterating our 2d array..

def SIGKILL_func(forms_pid):
 kill_cmd='kill -9 ' + forms_pid
 os.system(kill_cmd)

def db_conn(apps_pass):
 EBS_tns = cx_Oracle.makedsn('ebstestdb', '1555', service_name='TEST') # if needed, place an 'r' before any parameter in order to address special characters such as '\'.
 global conn
 conn = cx_Oracle.connect(user=r'APPS', password=apps_pass, dsn=EBS_tns)

def db_check(forms_db_pid):
 process_check_query = "select PROCESS from v$session where PROCESS = :pid" #we use a bind variable instead of string concatenation -- safer and avoids unnecessary hard parsing
 c = conn.cursor()
 c.execute(process_check_query, pid=forms_db_pid)
 row = c.fetchone() #we fetch one record to populate c.rowcount properly..
 if (c.rowcount == 0):
  print "This process has no db session, so it is ok to be killed"
  logging.info('This process has no db session, so it is ok to be killed')
  return "killable"
 else:
  if (row[0] == forms_db_pid):
   logging.info('This process has a db session, so we should not kill it.')
   print "This process has a db session, so we should not kill it."
   return "Not killable"
  else:
   logging.info('This is weird')
   print "This is weird.."
   return "Not killable"

######################################################
############## WE START HERE  #######################
######################################################

#We first check our command line arguments and exit if we don't like the command line..

if (len(sys.argv) > 2 ):
 print "Wrong argument given..\n Usage : runaway_frmweb_handler.py or runaway_frmweb_handler.py batch"
 quit()
elif ( len(sys.argv) == 2 ):
  if (sys.argv[1]!="batch"):
   print "Wrong argument given..\n Usage : runaway_frmweb_handler.py or runaway_frmweb_handler.py batch"
   quit()
  else:
   print "Running in batch mode."
else:
 print "Running in interactive mode."


#We start logging to file here... We will log our findings and actions in /tmp/runaway_frmweb_handler.log.

logging.basicConfig(filename='/tmp/runaway_frmweb_handler.log', filemode='w', format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.INFO) #logging was already imported at the top; note that basicConfig's encoding parameter requires Python 3.9+, so we don't use it here
logging.info('Script started')


#We also get our apps password here. We get apps password  without displaying it in the shell..
apps_password= getpass.getpass("Enter your APPS password: ")

#We connect to the database
db_conn(apps_password)

"""
We build our command to check for Linux processes which have a high cumulative cpu time..
Cumulative CPU time is in "[DD-]hh:mm:ss" format. (alias: time)
The cumulative CPU time we get from the command corresponds to the TIME+ value that we see in top command output..
Of course we get the process ids as well.. We use the process ids to kill those runaway processes..
Note that we execute our command using a subprocess call, we get the output and we manipulate the output array with numpy.reshape.
"""

cmd = 'ps -eo pid,cputime,euser,ucmd | grep `whoami` | grep -v grep | grep frmweb | awk \'{print $1" "$2}\'' #a plain string is enough when shell=True
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
process_to_kill_counter=0
o,e = proc.communicate()
ps_array=o.decode('ascii')
ps_array=ps_array.split()
ps_array_2d = np.reshape(ps_array, (len(ps_array)/2, 2)) #numpy was already imported as np at the top
array_length=len(ps_array)/2 #integer division in Python 2; use // if you run this with Python 3

"""
We check our array and get the process ids of the runaway forms processes.. -if there are any...
We ask if it is okay to  kill those processes that we identify.
That is it.
"""

"""
Note that if we see a 4th field, I mean if we see a value in the day field of the cumulative cpu time, then we directly consider that process runaway.
If we don't have a day field, we check -> if ( cputime_hour_count>1 ), and decide.. 
We treat a forms process with a cumulative CPU time of more than 2 hours as a runaway. We still get the user's confirmation before doing anything..
"""
if (array_length <= 0):
 print "There are no forms processes running"
 logging.info('There are no forms processes running, I quit.')
 quit()
for x in range(array_length):
 process_id = ps_array_2d[x][0]
 try:
  cputime = datetime.strptime(ps_array_2d[x][1], '%d-%H:%M:%S')
  print "Found a runaway process with a cumulative cpu time > 1 day ->", process_id
  logging.info('Found a runaway process with a cumulative cpu time > 1 day -> %s', process_id)
  if (db_check(process_id)=="killable"):
   process_to_kill_counter += 1
   if (len(sys.argv) != 2):  # we already control the cmd line arguments in the beginning, so if we are here then it means all arguments are correct, so it is sufficient to check the length..
    answer_input= raw_input("Do you want me to kill it? Y or N :")
    if (answer_input=="Y"):
      logging.info ('Approved! Killing Process id: %s',process_id)
      print "Killing Process id : ", process_id
      SIGKILL_func(process_id)
    else :
      logging.info('Disapproved! I will leave it running')
      print "Okay.. I will leave it running"
   else :
    logging.info ('I am in batch mode, so approved! Killing Process id: %s',process_id)
    print "Killing Process id : ", process_id
    SIGKILL_func(process_id)
 except ValueError:
  cputime = datetime.strptime(ps_array_2d[x][1], '%H:%M:%S')
  cputime_hour_count=cputime.hour
  if ( cputime_hour_count>1 ):
   print "Found a runaway process with a cumulative cpu time > 2h ->", process_id
   logging.info('Found a runaway process with a cumulative cpu time > 2h -> %s',process_id)
   if (db_check(process_id)=="killable"):
    process_to_kill_counter += 1
    if (len(sys.argv) != 2): # we already control the cmd line arguments in the beginning, so if we are here then it means all arguments are correct, so it is sufficient to check the length.. 
     answer_input= raw_input("Do you want me to kill it? Y or N :")
     if (answer_input=="Y"):
      logging.info('Approved! Killing Process id : %s', process_id)
      print "Killing Process id : ", process_id
      SIGKILL_func(process_id)
     else :
      logging.info('Disapproved! I will leave it running')
      print "Okay.. I will leave it running"
    else :
     logging.info ('I am in batch mode, so approved! Killing Process id: %s',process_id)
     print "Killing Process id : ", process_id
     SIGKILL_func(process_id)
if ( process_to_kill_counter==0 ):
 logging.info('No runaway form processes to kill')
 print "No runaway form processes to kill"
conn.close() #we close our database connection at the end
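One side note on the "[DD-]hh:mm:ss" parsing above: the strptime trick relies on %d, which only accepts day values up to 31, so a process with, say, 45 days of cumulative CPU time would make both strptime branches fail. Below is a small, standalone sketch (the function names are mine, not part of the script above) that parses the ps cputime string by splitting instead, and applies the same 2-hour rule:

```python
def cputime_seconds(cputime_str):
    """Convert a ps cputime string ('[DD-]HH:MM:SS') to total seconds.
    Splitting on '-' avoids strptime's %d limit of 31 days."""
    days = 0
    if '-' in cputime_str:
        day_part, cputime_str = cputime_str.split('-', 1)
        days = int(day_part)
    h, m, s = [int(p) for p in cputime_str.split(':')]
    return days * 86400 + h * 3600 + m * 60 + s

def is_runaway(cputime_str, threshold_seconds=2 * 60 * 60):
    """Same idea as the handler's check: flag anything at or above 2 hours."""
    return cputime_seconds(cputime_str) >= threshold_seconds
```

For example, cputime_seconds('1-02:00:00') returns 93600, so is_runaway() flags it, while a plain '01:30:00' stays below the 2-hour threshold.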

RDBMS -- Parsing the listener log with a straightforward in-database method / by taking that jdbc_url anomaly into consideration :)

Remember; years ago, I shared a method for reading the listener log files in a scripted way.. This type of method can be used during security checks, or during a migration project (to see the connections, and to know by which applications the connections to the database are made..) 

I want to remind you again; in migration projects, this type of analysis makes our job easier, especially when bundling. (when deciding the migration bundles)

You can review that blog post -> https://ermanarslan.blogspot.com/2016/03/rdbms-listener-logs-jdbc-parsing.html

Today, I give you another method for reading and parsing the listener log files. This one is purely an in-database method, and it is pretty straightforward.

I want to mention that we handle the listener log lines which include the jdbc_url correctly.. As you may already know; the HOST parameter in the CONNECT_STRING shows __jdbc__ when the client connects to the database using the JDBC thin driver. So, in order to determine the real host name of these jdbc clients, we should use the info recorded in PROTOCOL_INFO.

Let's see;

First, we create a database directory to reach the listener trace/log file from the database..

create or replace directory LISTENER_LOG as '/u01/dbebs/PROD/db/tech_st/11.2.0/admin/PROD_ebstestdb/diag/tnslsnr/ebstestdb/prod/trace';

We create an external table to read the listener log file using the database directory we just created. (we use our listener log file name in the location clause..)

create table listener_log_ea (line varchar2(4000))
organization external
(type oracle_loader
 default directory LISTENER_LOG
 access parameters
 (records delimited by newline
  nobadfile nologfile nodiscardfile
  fields ldrtrim
  missing field values are null
  reject rows with all null fields
  (line char (40000)))
 location ('prod.log'))
reject limit unlimited;

Lastly, we query the external table, taking that jdbc anomaly into consideration :)

with details as
(
select last_value(tstamp ignore nulls) over ( order by tstamp ) as tstamp,
substr(host,1,instr(host,')')-1) host,
substr("USER",1,instr("USER",')')-1) "USER"
from
( select
case when line like '__-___-____ __:__:__ %' then to_date(substr(line,1,20),'DD-MON-YYYY HH24:MI:SS') end tstamp,
case when line like '%HOST=__jdbc__%' then substr(line,instr(line,'(ADDRESS=(PROTOCOL=tcp)(HOST=')+29) else
case when line like '%HOST=%' then substr(line,instr(line,'HOST=')+5) end
end host,
case when line like '%USER=%' then substr(line,instr(line,'USER=')+5) end "USER"
from listener_log_ea
)
)
select *
from details where host is not null

That's it.. That "with" query can be modified according to the needs and the characteristics of the environment.. After all, you got the point..
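If you ever need the same host/user extraction outside the database, the query's logic can be mirrored with a few lines of Python. This is just a sketch; the sample log line below is illustrative, and the function only implements the HOST=__jdbc__ fallback described above:

```python
import re

def parse_listener_line(line):
    """Extract (host, user) from a listener log connect line.
    When CONNECT_DATA reports HOST=__jdbc__, fall back to the
    client host recorded in PROTOCOL_INFO (the next HOST= field)."""
    hosts = re.findall(r'HOST=([^)]+)', line)
    user_match = re.search(r'USER=([^)]+)', line)
    if not hosts:
        return None
    host = hosts[0]
    if host == '__jdbc__' and len(hosts) > 1:
        host = hosts[1]  # real client host from PROTOCOL_INFO
    return host, user_match.group(1) if user_match else None

# illustrative line, in the usual listener.log connect format
sample = ('21-JAN-2021 10:00:00 * (CONNECT_DATA=(SERVICE_NAME=PROD)'
          '(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=appuser)))'
          ' * (ADDRESS=(PROTOCOL=tcp)(HOST=10.1.1.5)(PORT=51234))'
          ' * establish * PROD * 0')
print(parse_listener_line(sample))  # -> ('10.1.1.5', 'appuser')
```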

Wednesday, January 27, 2021

RDBMS -- DBMS_IJOB.CHANGE_ENV - Change Log_user, Priv_user, Schema_user and NLS_ENV of a DBMS_JOB -- without the need to recreate.

Here is a quick tip for the ones using DBMS_JOB.

Using DBMS_IJOB (undocumented), we can manipulate dbms jobs.
Of course, DBMS_JOB is the best known interface for manipulating jobs, but DBMS_IJOB lets us even change the LOG_USER, PRIV_USER and SCHEMA_USER of a dbms job. This way, we don't need to recreate the job for such a change. 

Here is a demo;

SQL> DECLARE
X NUMBER;
BEGIN
begin
SYS.DBMS_JOB.SUBMIT
(
job => X
,what => 'null;'
,next_date => to_date('01.27.2021 16:59:35','mm/dd/yyyy hh24:mi:ss')
,interval => 'TRUNC(SYSDATE+1)'
,no_parse => FALSE);
exception
when others then
begin
raise;
end;
end;
END;
/
PL/SQL procedure successfully completed.


SQL> select job,log_user,priv_user,schema_user,interval from dba_jobs where what='null;';

JOB  LOG_USER  PRIV_USER  SCHEMA_USER  INTERVAL
286  SYS       SYS        SYS          TRUNC(SYSDATE+1)

SQL> begin
for j in (select * from dba_jobs where job = 286)
loop
dbms_ijob.change_env(j.job, 'APPS', 'APPS', 'APPS', j.nls_env);
end loop;
commit;
end;
/
PL/SQL procedure successfully completed.

SQL> select job,log_user,priv_user,schema_user,interval from dba_jobs where what='null;';

JOB  LOG_USER  PRIV_USER  SCHEMA_USER  INTERVAL
286  APPS      APPS       APPS         TRUNC(SYSDATE+1)

Thursday, January 21, 2021

EBS 12.2 19C DB environments -- Suggestions for some issues (including performance related ones) / based on a real life story

In my previous blog posts, I made some suggestions for dealing with problematic Discoverer reports, which may have performance issues especially after 19C database upgrades. In fact, we recently worked in a production EBS environment where there were some serious performance problems. (performance problems after the upgrade) 

I was something like a post-upgrade consultant in that project, and with the intense work we did, we made the environment work with acceptable performance in a short time.

I continue to share some of the problems we encountered in this type of consulting work and the solutions or workarounds that were implemented (in a short time and under pressure) to make the system acceptable in terms of functionality, continuity and performance.

Today I want to touch on 3 specific issues .. (which I also posted on my twitter account recently)

*Issue: We also saw occasional instance terminations in EBS 12.2 19C environments, especially under high load.. Solution -> Patch 29838337: XDB STRESS TEST - HIT ORA 600 [KJBLREPLAY: DUP] (the latest RU can also be considered)

*Issue: We saw continuous updates on AR_TRX_BAL_SUMMARY... These updates were increasing the load on the database nodes dramatically.. (even though the database was running on Exadata)

UPDATE AR_TRX_BAL_SUMMARY MAIN_SUM SET RECEIPTS_AT_RISK ...
Those looped updates place a serious load on the system.

So here is the recommendation; if you don't use the Credit Management and Advanced Collections applications, these updates may be disabled.. See MOS note 2445550.1

*Issue: Reverse Journals were taking a lot of time to complete.. This was related to Reverse Journals/GLPREV performance, and we created a custom index on GL_JE_HEADERS for that..

Index ON GL_JE_HEADERS (ACCRUAL_REV_JE_HEADER_ID)

We have seen the benefit... An index on (ACCRUAL_REV_PERIOD_NAME, LEDGER_ID) can also increase the performance of automatic reversal.. (also see Patch 29464680) 

-----------------------------------------------

Below, I share the solution methods for different problems experienced in this environment, as well.

*In case you need to manage the database load manually ; read the following blog post -> 

http://ermanarslan.blogspot.com/2021/01/ebs-122-configuring-application.html

*Read the following blog post for Discoverer - related quick win that we have implemented in this environment-> 

http://ermanarslan.blogspot.com/2020/12/ebs-122-rdbms-optimizerfeaturesenable.html

That's it for today :) I hope this will be helpful to you.

Saturday, January 9, 2021

Erman Arslan's Oracle Forum -- December 1, 2020 - January 6, 2021 - "Questions and Answers Series"

Question: How much time do you spend/lose?

Answer: Well, how much time I gain? :) 

Remember, you can ask questions and get remote support using my forum.
Just click on the link named "Erman Arslan's Oracle Forum is available now.
Click here to ask a question", which is available on the main page of Erman Arslan's Oracle Blog -- or just use the direct link: 


30 Issues, 125 Replies and 785 thread views this month :)


Come on, let's see what we've been up to this month. (Do not forget to read the blog posts too :)

Erman Arslan's Oracle Forum - Issues this month:

adapcctl-sh exiting with status 204 in Ebs 12.2 env.

Error Instantiating the OHS Config. Executed in 117109 milliseconds, returning status 1

Post EBS 12.2.9 Upgrade, cutover, oafm -- manager not starting

Question on : guaranteed restore point for cutover /ADOP

Custom TOP & formservlet.ini

XMLTYPE/CLOB and BLOB datatypes - how to replicate these fields from Oracle to Kudu?

DB link access from Oracle to MSSQL

Another question about replicating data from Oracle to Kudu

EBS Business Continuity - synchronize application files between primary and standby site.

Enterprise Manager Hangs After Log In

Workflow Mailer conf.- Failure of server APACHE bridge.

Workflow Mailer - insufficient free space following gc

About Profile options - modification and etc

EBS Solaris to Oracle Exadata + 19C in single step , Steps for DB migration to Exadata.

Can't access conc output files with different responsibilities

EBS 12.1 RAC Migration / Türkçe

Fatal SSL Error - Bug 26040483 UTL_HTTP call to https site fails ORA-28750

EBS 12.2 - how to modify ssl.conf

Goldengate - OGG-06439 & OGG-00918

adpreclone - Large FMW_Home.jar file

Actualize_all is stuck and not proceeding

EBS Customers -- Microsoft's plan to drop support for Transport Layer Security (TLS) protocols 1.0 and 1.1 in its browsers at the end of 2020. Does it really mean that we can no more connect to it,after first of january 2021?

Question on Virtualization- Vmware, KVM, OVM

While proceeding with addnode we got error mentioning CORBA

Weblogic Patch on R12.2 and FS_CLONE

Correct method for applying Weblogic Patches on R12.2

Question / case about RAC node eviction

not able to import the certificates to the wallet

SSL/TLS in multi node EBS with DMZ

Tuesday, January 5, 2021

EBS 12.2 -- Configuring Application connections manually ( including jdbc connections) - on RAC / on Exadata

In some cases, we may want to configure application connections manually.. As you may already know, EBS 12.2 configures its Forms&Reports, Concurrent Processing and web (HTTP Server) connections through the tnsnames.ora file which resides on the application nodes.. (the tnsnames.ora file located in the directory pointed to by $TNS_ADMIN -- on the application nodes)

These TNS configurations are all autoconfig-managed and they are tied to the autoconfig variables named s_tools_twotask, s_cp_twotask and s_weboh_twotask..

In addition to that, the configuration for all the JDBC connections of EBS's apps tier is also automatically managed, and it is configured through s_apps_jdbc_connect_descriptor.

Normally, when we run autoconfig, or when we install the apps with an Oracle RAC database configuration, we end up with a connection configuration which is based on scan listeners.. This configuration supports both load balancing and failover..

This seems good, as it leverages the RAC and SCAN-based architecture.. It provides load balancing and failover for database connections.

However; as I mentioned in the beginning, we may want to change that.. Especially when we have an unbalanced environment where one node is more crowded and loaded than the others.. (this may be due to several reasons, like manual tns configuration for Discoverer clients or 3rd party applications that are configured to connect to only one of the db nodes..)

So in such a case, we may want to configure all the EBS apps tier connections to make the application tier services connect to only a single node (the least crowded one). 

In order to make such a configuration, we update the application tier context file and set the twotask autoconfig variables to the failover TNS entries (TNS entries which have the _FO suffix in our case) present in the tnsnames.ora file of the application node/nodes.

We also set s_apps_jdbc_connect_descriptor according to our needs. Of course, we don't forget to set s_jdbc_connect_descriptor_generation to FALSE, as well.. 

Note that the s_jdbc_connect_descriptor_generation setting is very important.. If we leave it as is (I mean, if it is set to TRUE), then the apps tier autoconfig will overwrite the jdbc url with the scan-based & load balanced one... In other words, autoconfig will revert the change that we make to the jdbc url.. Note that the jdbc url is used by all the jdbc connections in EBS + the dbc file is also created with that url..

The following are the twotask and jdbc url related settings that I made in an environment where the customer wanted me to make all the services (including Concurrent Processing, Forms and Weblogic) connect to a single RAC node (Exadata Node 1 in this case). As the TNS entries and jdbc url settings are based on failover mode (not load balance mode), the services will always connect to that RAC node (Exadata Node 1), unless that RAC node is down. So this supports failover as well.

Note that this is tested in an EBS 12.2.9 environment with an Oracle 19C Database (running on Exadata Cloud at Customer)

<TWO_TASK oa_var="s_tools_twotask" osd="unix">PROD_FO</TWO_TASK>

<CP_TWOTASK oa_var="s_cp_twotask">PROD_FO</CP_TWOTASK>

<TWO_TASK oa_var="s_weboh_twotask" osd="unix">PROD_FO</TWO_TASK>

<jdbc_url oa_var="s_apps_jdbc_connect_descriptor">jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=no)(FAILOVER=yes)(ADDRESS=(PROTOCOL=TCP)(HOST=exadata01-vip)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ebs_PROD))(ADDRESS=(PROTOCOL=TCP)(HOST=exadata02-vip)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ebs_PROD)))</jdbc_url>

<jdbc_url_generation_check oa_var="s_jdbc_connect_descriptor_generation">false</jdbc_url_generation_check>

After those settings, we run autoconfig on the apps tier and that's it! (of course, we need to shut down the apps tier before running autoconfig)

One more thing; you can check your jdbc url and ensure it's correct before writing it to s_apps_jdbc_connect_descriptor.. I know, there are many ways to do that.. But! You can also use SQLPLUS to do this job... Easy and clean :)

Here is an example for the syntax:

sqlplus 'apps/apps_password@(DESCRIPTION=(LOAD_BALANCE=no)(FAILOVER=yes)(ADDRESS=(PROTOCOL=TCP)(HOST=exadata01-vip)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ebs_PROD))(ADDRESS=(PROTOCOL=TCP)(HOST=exadata02-vip)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ebs_PROD)))'
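If you build such descriptors often, a tiny helper keeps the address list and the FAILOVER/LOAD_BALANCE flags consistent. The function below is my own illustration (not an EBS or autoconfig utility); its output can be pasted into the sqlplus test above, or prefixed with jdbc:oracle:thin:@ for the jdbc_url:

```python
def failover_descriptor(hosts, port, service_name):
    """Build a LOAD_BALANCE=no / FAILOVER=yes connect descriptor,
    one ADDRESS + CONNECT_DATA pair per host, like the jdbc_url above."""
    pairs = ''.join(
        '(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=%d))'
        '(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=%s))' % (host, port, service_name)
        for host in hosts)
    return '(DESCRIPTION=(LOAD_BALANCE=no)(FAILOVER=yes)%s)' % pairs

# example values from this post
dsn = failover_descriptor(['exadata01-vip', 'exadata02-vip'], 1521, 'ebs_PROD')
print(dsn)
```

Since the first ADDRESS is tried first and LOAD_BALANCE is off, the order of the hosts list decides which node gets all the connections.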

I hope it was useful. Take care of yourselves :)