Thursday, October 31, 2013

EBS -- Cancel Workflow Notification Mails to be Sent and Purge the Out Queue

Notification mails are created by the workflow engine to inform EBS users about ongoing activities. Depending on the workflow scenario, these mails can inform the related users about a Purchase Order incident, a newly hired employee, and so on.
Notification mails are sent by the Notification Mailer in EBS. The Notification Mailer is a seeded component that Oracle delivers with the EBS system.

After this general information, let's look at an issue with these notification mails that can be annoying.
When you clone an EBS environment, you usually deactivate the Notification Mailer unless told to do the opposite.
Consider the following scenario:
You cloned an EBS environment and did not activate the Notification Mailer. Time passed, and a large number of notifications were created by workflow activities. These notifications could not be sent to the related users because the Notification Mailer was deactivated.
One day, you are told to activate the Notification Mailer on this cloned environment. You set the SMTP address (assuming you don't use IMAP) and started the Notification Mailer.
Shortly after, you get a call from the business saying that users are receiving tons of notification mails, the mail servers are hanging, please stop the Notification Mailer.

So this is the issue and the worst case scenario...

To prevent this unwanted situation, you need to do the following in the cloned environment before activating the Notification Mailer. Note that this method applies to EBS 11.5 through R12.

Connect to the EBS database of the cloned environment as the APPS user and update the WF_NOTIFICATIONS table:

sqlplus apps/apps_passwd
update WF_NOTIFICATIONS set mail_status = 'SENT' where mail_status = 'MAIL';
commit;

The statement above updates the MAIL_STATUS column in the WF_NOTIFICATIONS table, so the Notification Mailer will not find any mails to send. Note that the Notification Mailer only sends notification mails for records with MAIL_STATUS = 'MAIL' in the WF_NOTIFICATIONS table.
Then run the following SQL script to rebuild the WF_NOTIFICATIONS_OUT queue.

Log in to the application server of the cloned environment as the application owner and execute wfntfqup.sql with the necessary arguments (apps apps_pwd applsys). This clears the mails that are already placed in WF_NOTIFICATIONS_OUT and waiting in this queue to be sent. After clearing those records, wfntfqup.sql recreates the WF_NOTIFICATIONS_OUT queue from the WF_NOTIFICATIONS table.

sqlplus usr/passwd@db @$FND_TOP/patch/115/sql/wfntfqup.sql APPSusr APPSpw FNDusr
Ex: sqlplus apps/apps@db @$FND_TOP/patch/115/sql/wfntfqup.sql apps apps applsys

After these actions you are safe; you can start the Notification Mailer in the cloned environment.

Wednesday, October 30, 2013

Oracle Exadata -- Infiniband -- OFED -- Delivering messages by bypassing the kernel: no interrupts, RDMA places data directly into memory

InfiniBand has been deployed in Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SPARC SuperCluster and more. It is used in these engineered systems mainly for high performance clustering: both for db-storage connections and for the RAC interconnect.

In Exadata X2, each Exadata server has one dual-port PCIe 2.0 HCA with two InfiniBand 4x QDR (40 Gbit/s) ports. With these two ports, each server can be connected to two different InfiniBand switches.

InfiniBand links come in several speed grades: single data rate (SDR), double data rate (DDR), quad data rate (QDR), fourteen data rate (FDR, plus the intermediate FDR-10), and enhanced data rate (EDR).

The following table shows the raw data rate for each link width and speed grade:

       SDR        DDR        QDR        FDR-10          FDR            EDR
1X     2 Gbit/s   4 Gbit/s   8 Gbit/s   10.3125 Gbit/s  13.64 Gbit/s   25 Gbit/s
4X     8 Gbit/s   16 Gbit/s  32 Gbit/s  41.25 Gbit/s    54.54 Gbit/s   100 Gbit/s
12X    24 Gbit/s  48 Gbit/s  96 Gbit/s  123.75 Gbit/s   163.64 Gbit/s  300 Gbit/s

The Oracle Exa family uses 40 Gbit/s InfiniBand QDR with Sun switches. The 40 Gbit/s raw rate yields a 32 Gbit/s effective data rate, because SDR, DDR and QDR links use 8b/10b encoding (every 10 bits sent carry 8 bits of data), making the effective data transmission rate four-fifths of the raw rate.
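The 8b/10b arithmetic above is simple enough to sketch in a few lines of Python (illustrative only):

```python
def effective_rate_gbps(raw_gbps, data_bits=8, total_bits=10):
    """Effective throughput after 8b/10b line encoding:
    every 10 bits on the wire carry 8 bits of payload."""
    return raw_gbps * data_bits / total_bits

print(effective_rate_gbps(40))  # QDR 4x: 32.0 Gbit/s effective
```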
The InfiniBand fabric on the Exa family is used to connect the compute nodes with the storage nodes.
InfiniBand provides transport protocols in hardware and a Direct Memory Access capability. Thus, the interconnected systems have more CPU available for processing. Actually, we are talking about Remote Direct Memory Access (RDMA), which provides direct and efficient access to host or client memory without involving the processor.

InfiniBand on the server side is provided by Host Channel Adapters (HCAs), which are connected through a PCI Express slot.
All InfiniBand functionality is supplied by the HCA hardware; the server's CPU is not used for InfiniBand transport.

Some notes about this kind of offloading:

Normally, when a network data transfer occurs, the network interface card receives the data packet and interrupts the server's CPU. The server's CPU extracts the data from the network packet and writes it to memory, where the relevant applications can reach it.
If a network packet needs to be sent, again, the server's CPU copies the data from memory to the network buffer.
With InfiniBand, data packets are moved directly into memory without any intervention from the host processor. Also, unlike traditional software-based transport protocol processing, InfiniBand provides hardware support for all of the services required to move data between hosts (RDMA).
Oracle employs the OpenFabrics driver stack for InfiniBand in Exadata. Of course, Oracle has made some improvements to the OFED stack to make it Exadata-ready.
About OpenFabrics: The OpenFabrics Alliance (OFA) develops, tests, licenses, supports and distributes OpenFabrics Enterprise Distribution (OFED™) open source software for high-performance networking applications that demand low latency and high scalability.
Let's take a look at the following diagram, which shows the layers of the InfiniBand stack.

When you look at the upper level protocols section, OpenFabrics supplies several protocols for different application needs. By using these upper level protocols, various application types can take advantage of InfiniBand-accelerated technology.

Open Fabrics Upper Level Protocols are:

IPoIB -> IP over InfiniBand
SDP -> Sockets Direct Protocol; bypasses the TCP stack for TCP sockets
EoIB -> Ethernet over InfiniBand; a network interface implementation over InfiniBand that enables routing of packets from the InfiniBand fabric to a single GbE or 10 GbE subnet
SRP -> SCSI RDMA Protocol, for tunneling SCSI request packets over InfiniBand
iSER -> iSCSI Extensions for RDMA; eliminates the traditional iSCSI and TCP bottlenecks by enabling zero-copy RDMA
NFS over RDMA -> extends NFS to take advantage of the RDMA features of InfiniBand and other RDMA-enabled fabrics
RDS -> Reliable Datagram Sockets; provides reliable transport services and can be used by applications such as Oracle RAC for both interprocess communication and storage access

On top of InfiniBand, Exadata uses the ZDP protocol (Zero-loss Zero-copy Datagram Protocol). ZDP is open source software developed by Oracle. It is like UDP, but more reliable; its full technical name is RDS (Reliable Datagram Sockets) V3. The ZDP protocol has very low CPU overhead, with tests showing only 2 percent CPU utilization while transferring 1 GB/sec of data.

Using RDS, Oracle's internal interconnect test tool shows:
50% less CPU than IP over IB (UDP)
Half the latency of UDP (no user-mode acks)
50% faster cache-to-cache Oracle block throughput

With IPoIB, on the other hand, the fabric looks like normal Ethernet to the host software, so all IP-based tools work transparently: TCP/IP, UDP, HTTP, SSH.

That is it for now, but I will write more about this topic when I have time for the research...

Linux -- Extend Logical Volume Size -- LVM

In this example, we will extend the size of a Linux Logical Volume.
Before starting, I want to give some general information about Linux LVM.
LVM is a  logical volume manager for the Linux kernel. It manages disk drives and similar mass-storage devices.
In LVM, disk partitions are assigned to physical volumes, physical volumes are grouped into volume groups, and logical volumes are carved out of volume groups. Filesystems are placed on top of the logical volumes.
Any type of filesystem can be created on a logical volume, and a logical volume can also be used as swap space.
Physical Volumes -> Volume Groups -> Logical Volumes -> Filesystem

Now, let's get to work;
We will extend the size of the logical volume that is mounted on root (/). We will do this online.

First, we check the disk sizes using the fdisk command.

[root@erman ERBANT]# fdisk -l

Disk /dev/sda: 429.4 GB, 429496729600 bytes                                                                        
255 heads, 63 sectors/track, 52216 cylinders                                                                      
Units = cylinders of 16065 * 512 = 8225280 bytes                                                                  

   Device Boot      Start         End      Blocks   Id  System                                                    
/dev/sda1   *           1          13      104391   83  Linux                                                      
/dev/sda2              14       33418   268325662+  8e  Linux LVM                                                  

You can see that /dev/sda is a 429 GB disk. It has two partitions, sda1 and sda2, and free space remains, which means another partition can be created on it.
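As a sanity check, the free space can be derived from the fdisk geometry above: the disk has 52216 cylinders, the last allocated cylinder is 33418, and each cylinder is 16065 * 512 = 8225280 bytes. A quick calculation using those printed numbers:

```python
CYLINDER_BYTES = 16065 * 512          # "Units" line from fdisk: 8225280 bytes
free_cylinders = 52216 - 33418        # total cylinders minus the last allocated one
free_gib = free_cylinders * CYLINDER_BYTES / 2**30

print(round(free_gib, 1))  # 144.0 -- matching the 144 GB physical volume we create below
```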

[root@erman ERBANT]# df
Filesystem                      1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 261932172 217235312  31391416  88% /
/dev/sda1                          101086     12746     83121  14% /boot
none                              2913928         0   2913928   0% /dev/shm

We first create a new partition (sda3) on the disk /dev/sda using fdisk. We will use the extra space that our storage admin has assigned to our Linux server.
If the OS cannot see the newly added space, you can rescan the SCSI bus to make the OS see the new disk or the new size of the disk:
echo "- - -" > /sys/class/scsi_host/host0/scan
[root@erman ~]# fdisk /dev/sda                                                              
The number of cylinders for this disk is set to 52216.                        
There is nothing wrong with that, but this is larger than 1024,                                                      
and could in certain setups cause problems with:                              
1) software that runs at boot time (e.g., old versions of LILO)                                                        
2) booting and partitioning software from other OSs                                                                                                                                          
   (e.g., DOS FDISK, OS/2 FDISK)                                                                                                        
Disk /dev/sda: 429.4 GB, 429496729600 bytes                                                        
255 heads, 63 sectors/track, 52216 cylinders                                                                      
Units = cylinders of 16065 * 512 = 8225280 bytes                                                                                  
   Device Boot      Start         End      Blocks   Id  System                                                                          
/dev/sda1   *           1          13      104391   83  Linux                                                                              
/dev/sda2              14       33418   268325662+  8e  Linux LVM                                                                                                                            
Command (m for help): n                                                                                  
Command action                                                                                                                            
   e   extended                                                                                                                    
   p   primary partition (1-4)                                                                        
Partition number (1-4): 3                                                                                                                                                                    
First cylinder (33419-52216, default 33419):                                                                                                                                                
Using default value 33419                                                                                                                                                                    
Last cylinder or +size or +sizeM or +sizeK (33419-52216, default 52216):                                                                                                                    
Using default value 52216                                
Command (m for help): w                                                                                                                          
The partition table has been altered!                                                                                                          
Calling ioctl() to re-read partition table.                                                                                                      
WARNING: Re-reading the partition table failed with error 16: Device or resource busy          
The kernel still uses the old table.                                          
The new table will be used at the next reboot.                                                                                
Syncing disks.    

So, we created a new partition (/dev/sda3) on the disk /dev/sda.

We use partprobe to make the OS recognize the newly created partition. partprobe is a program that informs the operating system kernel of partition table changes by requesting that the kernel re-read the partition table.

[root@erman ~]# partprobe  

Note that we do not need to create a filesystem on the new partition with mkfs; the partition will be handed to LVM as a physical volume, and the ext3 filesystem we are going to grow already lives on the logical volume. (Ideally, also set the partition type of /dev/sda3 to 8e, Linux LVM, with fdisk's t command.)

Let's look at our physical volumes:

[root@erman ~]# pvdisplay                                                                                                                                                                
  --- Physical volume ---                                                                                                                                                                    
  PV Name               /dev/sda2                                                                                            
  VG Name               VolGroup00                                                                                                  
  PV Size               255.88 GB / not usable 0                                                
  Allocatable           yes                                                                                            
  PE Size (KByte)       32768                                                                                                      
  Total PE              8188                                                                                  
  Free PE               5                                                                            
  Allocated PE          8183                                                              
  PV UUID               B0fUIm-FO21-toMI-drZo-XKhR-gS79-ZQ6JuF                                                    
So here we see that we have one physical volume, /dev/sda2, whose size is 255 GB. This physical volume is assigned to the volume group VolGroup00.

We add another physical volume using the newly created partition /dev/sda3 with the following command:

[root@erman ~]# pvcreate /dev/sda3                                                                                                                                                    
  Physical volume "/dev/sda3" successfully created  

So we created our new physical volume; let's look at the physical volumes again:

[root@erman ~]# pvdisplay                                                                                    
  --- Physical volume ---                                                                                                                              
  PV Name               /dev/sda2                                                                                            
  VG Name               VolGroup00                                                                                                            
  PV Size               255.88 GB / not usable 0                                                                                            
  Allocatable           yes                                                                                                            
  PE Size (KByte)       32768                                                                                                      
  Total PE              8188                                                                            
  Free PE               5                                                                                                                                
  Allocated PE          8183                                                                                                                        
  PV UUID               B0fUIm-FO21-toMI-drZo-XKhR-gS79-ZQ6JuF                                                    
  --- NEW Physical volume ---                                                                                                                                                                
  PV Name               /dev/sda3                                                                                                                  
  VG Name                                                                                                                                              
  PV Size               144.00 GB                                                                                                                
  Allocatable           NO                                                                                                                            
  PE Size (KByte)       0                                                                                                                            
  Total PE              0                                                                                                                                
  Free PE               0                                                                                                                                
  Allocated PE          0                                                                                                                              
  PV UUID               9xMM1S-Me5o-vcHo-WGOd-YbFn-DOuu-Msx0NL      

What we see above is that our new physical volume is created with a size of 144 GB, which is the size of the related partition (/dev/sda3).

So far so good.

So far, we created a new partition using the available space and created a physical volume on top of it.

Subsequently, we extend the volume group VolGroup00 with the new physical volume:

vgextend VolGroup00 /dev/sda3

Next, we check the logical volumes and extend the logical volume size by the space now available in the related volume group.

[root@erman ~]# lvdisplay
  --- Logical volume ---                                                                                                                            
  LV Name                /dev/VolGroup00/LogVol00      --> the LV that we will extend.                          
  VG Name                VolGroup00                                                        
  LV UUID                cILSJp-4ksS-mETv-vSDC-237v-Z0it-Vhw806      
  LV Write Access        read/write                                                                                                            
  LV Status              available                                                                                                                    
  # open                 1                                                                                                                                
  LV Size                253.78 GB                                                                              
  Current LE             8121                                                                                        
  Segments               1                                                                                                            
  Allocation             inherit                                                                                                        
  Read ahead sectors     0                                                                                          
  Block device           253:0                                                                                                                      
  --- Logical volume ---                                                                                                                            
  LV Name                /dev/VolGroup00/LogVol01                        
  VG Name                VolGroup00                                                      
  LV UUID                gJQCEa-D8ot-C19G-kGDo-lnpz-E1FH-b6gJQC                        
  LV Write Access        read/write                                                                                                            
  LV Status              available                                                                                                  
  # open                 1                                                                                                                                
  LV Size                1.94 GB                                                                                                                    
  Current LE             62                                                                                                                            
  Segments               1                                                                                                                          
  Allocation             inherit                                                                                                                      
  Read ahead sectors     0                                                                                                            
  Block device           253:1                                                                                                                      
  --- Volume group ---                                                                                                                              
  VG Name               VolGroup00                                                                
  System ID                                                                          
  Format                lvm2                                                                                                        
  Metadata Areas        2                                                                                        
  Metadata Sequence No  4                                                                                      
  VG Access             read/write                                                                                                                        
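As a side note, the LV and PV sizes in these listings can be re-derived from the extent counts: size = extents x PE size (PE size is shown in KB above). A quick check in Python, using the numbers printed in the listings:

```python
def size_gb(extents, pe_size_kb=32768):
    """Volume size in GB = extent count x physical extent size (32 MB PEs here)."""
    return extents * pe_size_kb / 1024 / 1024

print(round(size_gb(8121), 2))  # LogVol00: 253.78 GB (Current LE = 8121)
print(round(size_gb(62), 2))    # LogVol01: 1.94 GB  (Current LE = 62)
```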
Here, we extend the size of the logical volume LogVol00:

lvextend -L +144G /dev/VolGroup00/LogVol00
(I used +144G to extend the logical volume, as I know there is 144 GB more to extend. You can use other arguments with lvextend, for example -l +100%FREE to extend using all the available free space.)

Finally, we extend the filesystem on the logical volume we just extended, using resize2fs (this can be done online for ext3):

resize2fs /dev/VolGroup00/LogVol00

To extend an ext3/ext4 partition and filesystem without LVM, you can refer to the following link:
Linux -- How to extend a Linux partition and ext3/ext4 filesystem.

Monday, October 28, 2013

Database -- RMAN Multiplexing

The term multiplexing in RMAN basically refers to RMAN's ability to read multiple input files concurrently and write their blocks into a single output file (except for image copies, as multiplexing is against the logic of image copying).
Multiplexing works on a per-channel basis.

The level of multiplexing is determined by the following factors:

The level of multiplexing is the least of MAXOPENFILES (default 8), FILESPERSET (default 64) and the number of files read by the channel.

As an example, suppose you want to back up a database that has 100 datafiles with one channel, and FILESPERSET is set to 60. The minimum of 100 files, FILESPERSET (60) and MAXOPENFILES (default 8) is 8, so the multiplexing level is 8.
Thus, one channel simultaneously writes blocks from 8 datafiles into each backup piece.
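The rule above reduces to a one-line min(); a sketch:

```python
def multiplexing_level(files_per_channel, filesperset=64, maxopenfiles=8):
    """RMAN multiplexing level: the least of the number of files the
    channel reads, FILESPERSET and MAXOPENFILES (defaults shown)."""
    return min(files_per_channel, filesperset, maxopenfiles)

# the example above: 100 datafiles, one channel, FILESPERSET = 60
print(multiplexing_level(100, filesperset=60))  # 8
```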

Depending on the multiplexing level, RMAN's input buffer sizes differ:
When the level is less than or equal to 4, RMAN allocates buffers of 1 MB so that the total buffer size for all input files is 16 MB.
When the level is greater than 4 but less than or equal to 8, RMAN allocates disk buffers of 512 KB so that the total buffer size for all the input files is less than 16 MB.
When the level is greater than 8, RMAN allocates four disk buffers of 128 KB for each file, so each input file gets 512 KB.
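The three buffer-sizing rules above can be sketched as a small helper (a simplification of what RMAN actually allocates, for illustration):

```python
def input_buffer_plan(level):
    """Return (buffer_size_kb, total_mb) for a given multiplexing level,
    following the three rules described above (simplified)."""
    if level <= 4:
        return 1024, 16                    # 1 MB buffers, 16 MB total
    if level <= 8:
        return 512, 16                     # 512 KB buffers, total kept around 16 MB
    return 128, level * 4 * 128 / 1024     # 4 x 128 KB (512 KB) per input file

print(input_buffer_plan(8))   # (512, 16)
print(input_buffer_plan(10))  # (128, 5.0)
```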

Friday, October 25, 2013

EBS -- not a valid responsibility for the current user. Please contact your System Administrator.

This error can be encountered on EBS login, especially if you have set an end date on a responsibility and that date was reached recently.

In this situation, check the "Applications Start Page" profile option of the problematic user.
This option can be used to make a particular page open at login.
Check this profile option, because the signs point to it: you set an end date, and now some users can't log in to EBS, ending up with the "not a valid responsibility for the current user" error. It means the responsibility behind the start page is no longer valid; its end date has been reached.


Log on with the System Administrator responsibility, navigate to Profile > System, and search for the profile "Applications Start Page". Clear the value set at the problematic user's level.
Log in again with the problematic user and test.

Thursday, October 24, 2013

EBS R12-- and RAC database -- RAC to Single Cloning

In this post, I will try to explain how to clone an EBS environment running on a RAC database with 2 instances to another machine that will have a single instance database and the application services.

Source environment:

Two-instance RAC database (Exadata, Oracle Database Appliance, IBM servers -- the platform does not matter)
One Application Node (conc+forms+web+reports)

Target environment:

One Single instance database
One Application Node (conc+forms+web+reports)

Here is the diagrammatic explanation of the process.

As this is directly an Apps DBA operation, AD utilities are used for the cloning process.
The AD utilities generate the RMAN scripts for the database cloning and execute them automatically.
Of course, problems can sometimes be encountered, so even though it is an Apps DBA operation, Core DBA knowledge is needed, too.

Things to consider;

  • All the required rpm and OS packages should be installed on the servers belonging to the target environment.
  • Cron entries for backups should be commented out before backing up the source database using AD utilities (especially backups that use the 'delete input' syntax).

  • OS users should be created and granted properly on the target environment: a db owner (for ex: oratest) and an application owner (for ex: appltest).

         Example (database user):
         uid=500(oracle) gid=500(oracle) groups=500(oracle),501(oinstall),502(dba)
         Example (application user):
         uid=501(oracleapp) gid=501(oinstall) groups=501(oinstall),502(dba)

  • Hostnames of the servers belonging to the target environment should be at most 30 characters long.
  • The source environment should be in archivelog mode.
  • The source environment should be checked with RMAN's 'crosscheck archivelog all' to validate the archivelogs.

So here are the steps to clone an EBS R12 environment with a two-node RAC database to an EBS R12 environment with a single-node database.

  • Run preclone on the database tier in the source environment.

        cd $ORACLE_HOME/appsutil/scripts/[context_name]  --context_name in format => SID_hostname
        perl adpreclone.pl dbTier

  • Copy the Oracle Home from one of the source servers to the target, and relink the binaries with the $ORACLE_HOME/relink all command. This action will create a usable Oracle Home on the target system.
  • Take an RMAN backup using adclone.pl on the source. This action will create the stage in the stage area. The stage will contain everything necessary to clone the database with RMAN: datafiles, archivelogs, etc. After this step, copy the stage directory to the target node. If you use a shared filesystem, you do not need to copy anything.
        perl adclone.pl \
        java=/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/jdk \     --> jdk path
        mode=stage \  
        stage=/u02/stage \      --> stage area to store the necessary backup file
        component=database \
        method=RMAN \
       dbctx=/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/SID_hostname.xml   --> context file of the db 

  • Create a pairs file on target system. 

  • Example path: /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone
    The necessary inputs are below;
    s_db_oh=/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1 --> target oracle home path.

  • Run adclonectx.pl on the target environment to create the database context file.
           perl adclonectx.pl \
           contextfile=/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/TEST_oratest.xml \
           template=/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/template/adxdbctx.tmp \
           pairsfile=/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone/pairsfile.txt

    Example: adclonectx questions and answers

    Copyright (c) 2002 Oracle Corporation
                              Redwood Shores, California, USA

                              Oracle Applications Rapid Clone

                                       Version 12.0.0

                            adclonectx Version 120.23.12010000.1

      /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone/bin/../jre/bin/java -Xmx600M -classpath /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone/bin/../jlib/ojdbc5.jar:/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone/bin/../jlib/xmlparserv2.jar:/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone/bin/../jlib/java  -e /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/TEST_oratest.xml -tmpl /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/template/adxdbctx.tmp -pairsfile /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone/pairsfile.txt -initialnode
      Enter the APPS password : 

      Log file located at /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone/bin/CloneContext_0126133644.log

      Provide the values required for creation of the new Database Context file.

      Target System Hostname (virtual or normal) [oratest] : 

      It is recommended that your inputs are validated by the program.
      However you might choose not to validate them under following circumstances:

              -If cloning a context on source system for a remote system.
              -If cloning a context on a machine where the ports are taken and
               you do not want to shutdown the services at this point.
              -If cloning a context but the database it needs to connect is not available.

      Do you want the inputs to be validated (y/n) [n] ? : 

      Target Instance is RAC (y/n) [y] : n

      Target System Database SID : TEST

      Target System Base Directory : /oracle/u02/CLONE/u01/app/oracle

      Oracle OS User [oracle] : 

      Oracle OS Group [oracle] : dba

      Target System utl_file_dir Directory List : /tmp

      Number of DATA_TOP's on the Target System [3] : 1

      Target System DATA_TOP Directory 1 : /oracle/PR/data

      Target System Archive Log Directory [/oracle/PR/data/archive] : 

      Do you want to preserve the Display [0] (y/n)  : y

      Do you want the the target system to have the same port values as the source system (y/n) [y] ? : n

      Target System Port Pool [0-99] : 20
      Report file located at /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/temp/portpool.lst
      Complete port information available at /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/temp/portpool.lst

      New context path and file name [/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/TEST_oraetest.xml] : 

      Creating the new Database Context file from :

      The new database context file has been created :

      Log file located at /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone/bin/CloneContext_0126133644.log
      Check Clone Context logfile /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone/bin/CloneContext_0126133644.log for details.

  • Open the xml context file created in the previous step and make the following updates to downgrade the instance count to 1 (RAC to single).

           <cluster_database_instances oa_var="s_dbClusterInst">1</cluster_database_instances>
            <instance_name oa_var="s_instName">TEST</instance_name>
            <instance_number oa_var="s_instNumber">0</instance_number>
            <instance_thread oa_var="s_instThread">0</instance_thread>
               <ALTERNATE_SERVICE_INSTANCES oa_var="s_alt_service_instances"/>

  • Clone the database from source to target using adclone.pl. adclone.pl will use the stage, generate the rman scripts and execute them to make the clone ready. Note that you have to use unzip version 5.x, so make the changes in your PATH before executing the command, if necessary.
             perl adclone.pl \
             java=/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/jdk \
             component=dbTier \
             mode=apply \
             stage=/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/clone \
             method=CUSTOM \
             dbctxtg=/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/TEST_oratest.xml \
             rmanstage=/stage/data/stage \
             rmantgtloc=/oracle/PR/data \
             srcdbname=TEST \
             pwd=apps

  • After this step, the database should be recovered and opened.. On the other hand, there can be environment-specific problems and adclone may not be able to open the database.. In this case, you need to solve the problems and continue the process manually.
    • For example: if the database cannot be opened because of a parameter file problem.. Let's say your database needs a higher compatible setting, but in the init.ora that adclone is using, the compatible parameter is set to 11.2.0. In such a scenario, the database won't start.. So you need to edit the init.ora manually and start the database.. This is a constraint of adclone that I know and have seen..
  • After the database is opened, put it in noarchivelog mode (optional)
    shutdown immediate;
    startup mount;
    alter database noarchivelog;
    alter database open;
  • Check the global name of the database
  • SQLPLUS>select * from global_name; 
    If the global_name is wrong, change it with the following;
    alter database rename global_name to TEST;
  • Disable thread 2
         alter database disable thread 2; 
         Note that you cannot drop the redo logs belonging to thread 2 without disabling thread 2 first.

  • Drop the redo log files belonging to thread 2.
  • Drop the undo tablespace belonging to the 2nd node of the RAC.
  • Check the connectivity of the database through the listener.
  • Clean the environment tables and run autoconfig on db tier to populate them again.
  • sqlplus apps/apps_password
    SQL> exec fnd_conc_clone.setup_clean;
    cd $ORACLE_HOME/appsutil/bin
    sh adconfig.sh contextfile=/oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/TEST_oratest.xml
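The RAC-to-single cleanup steps above (disable thread 2, drop its redo logs and its undo tablespace) can be sketched in SQL as follows. The redo log group numbers and the undo tablespace name (APPS_UNDOTS2) are assumptions for illustration; check v$log and dba_tablespaces on your own system before running anything like this.

```sql
-- Find the redo log groups of thread 2 before dropping them:
-- SELECT group#, thread# FROM v$log WHERE thread# = 2;

ALTER DATABASE DISABLE THREAD 2;

-- Group numbers below are hypothetical; use the ones returned by the query above
ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE GROUP 4;

-- The undo tablespace name of the 2nd RAC node is an assumption; check dba_tablespaces
DROP TABLESPACE APPS_UNDOTS2 INCLUDING CONTENTS AND DATAFILES;
```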
After this point Database clone is finished, you can continue with cloning the application tier ...
  • Run the adpreclone on the apps tier of source system.
  • Copy the application filesystem from source to target.. I will not give any details on this step, as you can use scp to achieve this. You can choose not to copy the log and out files of the concurrent managers, etc. It's up to you.
  • Run the adcfgclone (post clone) on the target node with application os user.. Answer the questions below according to your environment and needs..
              perl /oracle/PROD/apps/apps_st/comn/clone/bin/adcfgclone.pl appsTier

                     Copyright (c) 2002 Oracle Corporation

                        Redwood Shores, California, USA

                        Oracle Applications Rapid Clone

                                 Version 12.0.0

                      adcfgclone Version 120.31.12010000.8

Enter the APPS password : 

Provide the values required for creation of the new APPL_TOP Context file.

Target System Hostname (virtual or normal) [oratest] : 

Target System Database SID : TEST

Target System Database Server Node [oratest] : 

Target System Database Domain Name [] : 

Target System Base Directory : /oracle/PR

Target System Tools ORACLE_HOME Directory [/oracle/PR/apps/tech_st/10.1.2] : 

Target System Web ORACLE_HOME Directory [/oracle/PR/apps/tech_st/10.1.3] : 

Target System APPL_TOP Directory [/oracle/PR/apps/apps_st/appl] : 

Target System COMMON_TOP Directory [/oracle/PR/apps/apps_st/comn] : 

Target System Instance Home Directory [/oracle/PR/inst] : 

Target System Root Service [enabled] : 

Target System Web Entry Point Services [enabled] : 

Target System Web Application Services [enabled] : 

Target System Batch Processing Services [enabled] : 

Target System Other Services [enabled] : 

Do you want to preserve the Display [y] (y/n)  :  

Do you want the the target system to have the same port values as the source system (y/n) [y] ? : n

Target System Port Pool [0-99] : 20

Checking the port pool 20
done: Port Pool 20 is free
Report file located at /oracle/PR/inst/apps/TEST_oratest/admin/out/portpool.lst
Complete port information available at /oracle/PR/inst/apps/TEST_oratest/admin/out/portpool.lst

UTL_FILE_DIR on database tier consists of the following directories.

1. /usr/tmp
2. /tmp
3. /oracle/u02/CLONE/u01/app/oracle/product/11.2.0/dbhome_1/appsutil/outbound/TEST_oratest
4. /usr/tmp
Choose a value which will be set as APPLPTMP value on the target node [1] : 

Creating the new APPL_TOP Context file from :
Completed Apply...

Do you want to startup the Application Services for TEST? (y/n) [y] : n

Do not start up the services immediately. Complete the post-clone actions below first to guarantee success..

  • Set the APPS environment using the env file generated by post clone.. It's located in $APPL_TOP with the name format APPS<SID>_<hostname>.env.. Source the environment file in your .bash_profile or .profile to set it automatically in every session of yours..
  • Use adadmin to do the following,
    • generate JAR files (adadmin screen navigation: 1>4) Do you wish to force regeneration of all jar files? [No] ? No
    • generate message files (adadmin screen navigation: 1>1) Accept the default answers for all the questions by pressing "Enter".
    • relink executables (adadmin screen navigation: 2>1)
    • copy files to destination (adadmin screen navigation: 2>2) Accept the default answers for all the questions by pressing "Enter".
After these operations, the application is ready to be opened.. (unless you have to change grants, update some alerts, modify some packages, or mask, delete, truncate or update some of the tables that contain site-specific sensitive information..)
  • Open the application services..
sh adstrtal.sh apps/apps_password

  • Log in as sysadmin and check the services through OAM, check the concurrent managers, the notification mailer, etc..
If you have any questions, feel free to contact me..

Tuesday, October 22, 2013

EBS R12--change oacore jvm count

In EBS projects --depending on the workload-- sometimes you need to increase the oacore process count. There is a detailed calculation to determine the needed oacore processes for an instance, but in general it's based on the user count.. Basically, concurrent user count / 100 will give you a rough estimate of the needed oacore process count..
Note that, the memory size that you will assign to these processes is also important for determining the right process count.
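As a back-of-the-envelope sketch of that rule of thumb (the user count below is a made-up example, not a value from any real instance):

```shell
# Rough rule of thumb from above: ~1 oacore JVM per 100 concurrent users.
users=450                            # hypothetical concurrent user count
oacore=$(( (users + 99) / 100 ))     # round up to the next whole process
echo "$oacore"
```

For 450 users this suggests 5 oacore processes; remember to also check that the host has enough memory for 5 JVM heaps.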

To change the oacore process count for an EBS instance;

Change opmn.xml in the $ORA_CONFIG_HOME/10.1.3/opmn/conf directory.

You need to change the following tag in opmn.xml;

<process-set id="default_group" numprocs="1">

In addition to that, 

Change the $CONTEXT_FILE to reflect the change and to protect it against future autoconfig runs.

You need to change the following tag in the context file;

<oacore_nprocs oa_var="s_oacore_nprocs">1</oacore_nprocs>

You need to be sure that the RMI and JMS port ranges are arranged according to the process count..
For example: if you change the process count to 4, then the RMI and JMS port ranges should supply 4 ports, too..
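As an illustration, an opmn.xml fragment for 4 processes might look like the following. The port ids and the number ranges here are made-up examples, not values from a real instance; the only point is that each range spans at least 4 ports:

```xml
<!-- hypothetical fragment: each range must cover at least numprocs ports -->
<process-set id="default_group" numprocs="4">
  <port id="default_group-rmi" range="20090-20093"/>
  <port id="default_group-jms" range="23090-23093"/>
</process-set>
```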

No autoconfig run is needed.
If your application services are already running, you just need to reload the oacore processes using the reload command.. This will propagate the change to the oacore processes running under the control of Oracle Process Manager..
If your application services are not running, then start them.. You will see the change in the oacore process count..

Friday, October 11, 2013

EBS R12--Create Accounting problem //CODE_COMBINATION_ID_INTERIM

After you have installed EBS R12 and finished your implementation of the finance modules, you can encounter a hang problem in your first accounting process...
The problem manifests as a hang, as it is caused by a logical contention between a financial concurrent program and its child program..

The details about the issue are as follows;

The Create Accounting concurrent program does some work and locks the table GL_INTERFACE.. After that, it runs the Journal Import concurrent program, which tries to lock the GL_INTERFACE table, too.
Journal Import tries to lock GL_INTERFACE because it is programmed to create the CODE_COMBINATION_ID_INTERIM column on the GL_INTERFACE table.. Journal Import takes this action if it finds that GL_INTERFACE does not have a CODE_COMBINATION_ID_INTERIM column..
Note that : This is not an essential column and looks more like something for future functionality. There should be no loss of functionality due to its absence. Still, this incident leads to a hang situation in concurrent program processing, and the accounting process never ends..
Note that : Creating the CODE_COMBINATION_ID_INTERIM column on GL_INTERFACE is possibly not a part of the standard Oracle EBS installation process; that's why the creation code is there in the Journal Import concurrent program's code..
Anyway, as a solution, you can create the CODE_COMBINATION_ID_INTERIM column on GL_INTERFACE from the very beginning, before the first accounting process..
As a workaround, when the issue is reproduced and the hang occurs, you can kill the Create Accounting process. This action will permit the Journal Import program to continue and create the CODE_COMBINATION_ID_INTERIM column.. Once this column is created, the error will not occur in your next accounting processes..

Column specs of CODE_COMBINATION_ID_INTERIM (Note that this is taken from EBS 12.1.3)
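The column specs referred to above came from a screenshot that is not reproduced here. For reference, pre-creating the column would look something along these lines; the plain NUMBER datatype (nullable, no default) is my assumption based on the column's purpose, so verify it against an instance where Journal Import has already created the column before running anything like this:

```sql
-- Assumed definition (verify with: DESC GL.GL_INTERFACE on a 12.1.3 instance
-- where Journal Import has already created the column)
ALTER TABLE GL.GL_INTERFACE ADD (code_combination_id_interim NUMBER);
```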

Wednesday, October 9, 2013

Database--PLSQL-- Using Awr Tables to report the changes over time..

With Oracle 10g and above, the AWR tables are populated with a wealth of statistics.. So you can directly query the AWR tables to gather those statistics in a format shaped according to your needs. You can then use Excel or a similar tool to create a visual report based on your output...

The following PL/SQL analyzes statistics stored in the Automatic Workload Repository tables to calculate the change in size of all the tablespaces in the database over time...

You supply the time window to this script (the version below looks at the last 5 days), and it will create the report for you. Then you can use Excel to visualize the data..

/* Formatted on 09.10.2013 15:33:25 (QP5 v5.163.1008.3004) */
SET SERVEROUTPUT ON   -- needed to see the DBMS_OUTPUT lines in SQL*Plus

DECLARE
   CURSOR c2
   IS
      SELECT name
        FROM v$tablespace
       WHERE name NOT IN ('SYSTEM', 'SYSAUX', 'TEMP', 'GG'); -- you can extend this list

   --Variable Declaration
   min_snap_id   NUMBER;
   max_snap_id   NUMBER;
   max_size      NUMBER;
   min_size      NUMBER;
   actual_size   NUMBER;
BEGIN
   -- oldest and newest snapshots in the chosen window (last 5 days here)
   SELECT MIN (snap_id)
     INTO min_snap_id
     FROM dba_hist_snapshot
    WHERE begin_interval_time > SYSDATE - 5;

   SELECT MAX (snap_id)
     INTO max_snap_id
     FROM dba_hist_snapshot
    WHERE begin_interval_time > SYSDATE - 5;

   FOR c2rec IN c2
   LOOP
      -- used size in MB at the oldest snapshot (assumes an 8K block size)
      SELECT ROUND ( (tsu.tablespace_usedsize * 8 * 1024) / 1024 / 1024, 2)
        INTO min_size
        FROM dba_hist_tbspc_space_usage tsu, v$tablespace dt
       WHERE     tsu.snap_id = min_snap_id
             AND dt.name = c2rec.name
             AND dt.ts# = tsu.tablespace_id;

      -- used size in MB at the newest snapshot
      SELECT ROUND ( (tsu.tablespace_usedsize * 8 * 1024) / 1024 / 1024, 2)
        INTO max_size
        FROM dba_hist_tbspc_space_usage tsu, v$tablespace dt
       WHERE     tsu.snap_id = max_snap_id
             AND dt.name = c2rec.name
             AND dt.ts# = tsu.tablespace_id;

      IF max_size <= 1
      THEN
         max_size := 1;
      END IF;

      IF min_size <= 1
      THEN
         min_size := 1;
      END IF;

      -- maximum (autoextend) size in MB at the newest snapshot
      SELECT ROUND ( (tsu.tablespace_maxsize * 8 * 1024) / 1024 / 1024, 2)
        INTO actual_size
        FROM dba_hist_tbspc_space_usage tsu, v$tablespace dt
       WHERE     tsu.snap_id = max_snap_id
             AND dt.name = c2rec.name
             AND dt.ts# = tsu.tablespace_id;

      DBMS_OUTPUT.put_line (
            c2rec.name
         || '  '
         || TO_CHAR (min_size)
         || '  '
         || TO_CHAR (max_size - min_size)
         || '  '
         || TO_CHAR (actual_size));
   END LOOP;
END;
/

This script will create an output like below;

CTXD 493,54 0 65535,97
OWAPUB 1 0 32767,98
APPS_TS_QUEUES 33441,25 0 262143,88
ODM 8 0 32767,98
OLAP 15,5 0 32767,98
APPS_TS_TX_DATA 374292,25 1,63 1409023,33
APPS_TS_TX_IDX 324591,25 ,13 1146879,45
APPS_TS_INTERFACE 11458,88 0 98303,95
APPS_TS_NOLOGGING 1296,25 0 65535,97
APPS_TS_ARCHIVE 2845,63 0 65535,97
APPS_UNDOTS1 42692,63 -64,63 51864
TOOLS 2,81 0 32767,98
IZUX 1 0 32767,98
IZUD 1 0 32767,98
APPS_TS_SUMMARY 12923,88 0 98303,95
APPS_TS_MEDIA 13993,38 0 65535,97
PORTAL 1 0 32767,98
APPS_TS_SEED 2342,88 0 65535,97

The output is in the following format:
"(Tablespace name) (min_size)  (max_size-min_size) (actual_size)"

Then you take this output into Excel and create a report like the following (for all tablespaces);


Using Excel graphics, a chart like the one above can be obtained from this output..

As you can see above, we obtained a clear and fine graphic for reporting the tablespace growth over time..
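If you prefer not to paste the output by hand, a small helper like the following (the file names are made up for illustration) turns the space-separated report into a semicolon-separated file that Excel opens directly; semicolons avoid clashing with the decimal commas in the numbers:

```shell
# Two sample lines from the report above, written to a scratch file
printf 'CTXD 493,54 0 65535,97\nODM 8 0 32767,98\n' > tbs_growth.txt

# Re-join the four columns with semicolons for Excel import
awk '{ print $1 ";" $2 ";" $3 ";" $4 }' tbs_growth.txt > tbs_growth.csv
cat tbs_growth.csv
```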

For a RAC database, the PL/SQL should be a little different; I didn't write it down here to keep this short. But if you want the RAC version, you can contact me.
I was in a hurry while preparing this.. If you see any mistakes or things to improve, please contact me, as well..
This logic can be used to report the changes in database activity and structures over time...