Sunday, June 29, 2014

EBS 12.2 -- EBS on Virtualized ODA X4-2 -- Baby Exadata

In this post, I will cover an ODA (Oracle Database Appliance) X4-2 installation done to support virtualized EBS 12.2 environments.


As you know, Oracle Database Appliance is an entry point to the Oracle Engineered Systems, and it is known as the Baby Exadata. It has 2 compute nodes and 1 storage server, building a 4U consolidated environment, and it supports Oracle RAC and Oracle VM Server environments.

So, to install or configure the ODA, we have 2 options:
one is to use the ODA hardware as bare metal, and the other is to use it with Oracle's virtualization technology (Oracle VM Server).

In this article, you will find an overview of using Oracle Database Appliance as a virtualization platform.
I will go on by giving the explanation from a real-life example which we implemented just 1 week ago.
We actually implemented 5 EBS 12.2.3 environments (including VISION) on a virtualized ODA X4-2 platform, so we have seen both the difficulties and the advantages of using ODA for virtualization and of implementing virtual EBS servers.

Let's start by taking a look at the ODA X4-2 installation and configuration.
In order to have a virtualized ODA, we need to supply IP addresses for the following interfaces;

2x Host IPs (Oracle Virtual DB Servers / ODA Base)
2x Private IPs (if you will use RAC)
2x Dom0 IPs (for the Oracle VM Servers)
2x SCAN IPs (if you will use RAC)
2x Oracle ILOM IPs (for connecting to the compute nodes through NETMGT)
Also, at least 1 IP address for each virtual machine.
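As a planning aid, an address worksheet like the following helps (all addresses and hostnames below are hypothetical placeholders, not values from our deployment):

```
192.168.10.11  oda-node0        # ODA Base host, node 0
192.168.10.12  oda-node1        # ODA Base host, node 1
192.168.10.13  oda-node0-dom0   # Oracle VM Server (Dom0), node 0
192.168.10.14  oda-node1-dom0   # Oracle VM Server (Dom0), node 1
192.168.10.15  oda-node0-ilom   # ILOM (NETMGT), node 0
192.168.10.16  oda-node1-ilom   # ILOM (NETMGT), node 1
192.168.10.21  ebs-vm1          # first virtual machine; one IP per VM
# plus 2x private and 2x SCAN IPs if RAC will be used
```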

There are also 2 methods for using a virtualized ODA: we can use RAC (if we have the license), or we can choose not to use RAC.

I'll explain the virtualized ODA installation with a non-RAC configuration.

Note that the best practice is a RAC-based ODA virtualization, but I will not explain it here, because it seems easy to handle and thus needs less deployment effort. (Oracle doc: Best Practices for Deploying Oracle E-Business Suite Release 12.2.x on Oracle Database Appliance)

The installation of the ODA X4-2 was made in the Linkplus Partner Hub.
Mr. Timur Yalcin from Linkplus prepared a similar network environment for us to deploy and work with the newly purchased ODA in the Linkplus Partner Hub, and we worked with him throughout the deployment process. After the configuration and deployment, we transferred the ODA X4-2 to our customer's data center, and everything worked properly without the need to change even an IP address.



The initial installation of the ODA was done from the serial port (SERMGT).
Firstly, we connected to the ODA over the serial port (root/changeme) and found the IP addresses of the nodes.
Then we connected to both nodes through the ILOM interfaces (NETMGT), using the IP information we had just gathered, and opened consoles.
Note that you can find the related information in Oracle Support document 888888.1 (the ODA-related document); you can also download the patches and installation packages from that document.
As mentioned earlier, there are two download options out there: 1) Virtualized, 2) Bare Metal. We implemented the first one.
Okay, after downloading the ISO files, we used our clients to mount the ISO files as virtual CD-ROMs;
I mean, we mounted the ISOs to the ODA through the network.
The installation had 2 steps: first the OVM Servers were installed on the compute nodes of the ODA, and then ODA Base was installed on top of the OVM Servers.



Since the customer had no RAC license, we couldn't apply the best practice, as we shouldn't use RAC on the ODA X4-2. So we used the database servers coming with ODA Base only for our ASM instances; that is, we only started ASM instances on them.
Our plan was to use the ASM diskgroups, through transparent ACFS, for placing our VM repositories. So all the virtual disks were created on these ACFS filesystems, and it all went well.
Let's explore this process a little further;

We needed to use ASM because the ODA has no RAID controller on its storage server, and a redundant disk configuration was needed. The storage disks can be seen from the ODA Base virtual machines, and those virtual machines have ASM configured by default. So we used ASM for our shared disk/repository storage.



So the architecture was as shown above.
We used ASM in order to build a fault-tolerant IO subsystem, and we created our virtual machine repositories on ASM diskgroups. (For example: oakcli create repo ERMAN -dg DATA -size 700 -- the size is in GB by default.)
Note that another repository should be created on top of the ASM diskgroup that contains the SSD disks. That is, the virtual disks for storing redo logs should be created in a repository which resides on this diskgroup.
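For example, a dedicated repository for the redo-bearing virtual disks could be created like this (a sketch; the diskgroup name REDO and the size are assumptions here, so check your own diskgroup layout first):

```
oakcli create repo ERMANREDO -dg REDO -size 100
```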

We actually created two repositories: one for the templates, which we would import and use to create virtual machines, and the other one for the actual virtual machines.

Note that you can resize repositories created on ACFS filesystems.
Although I couldn't see an oakcli argument for that, it can be done by resizing the underlying IO device. In the case of an ACFS filesystem, resizing the OVM repository means resizing the underlying ACFS filesystem, which can be done using the asmca or acfsutil tools.
Note that I have done this and didn't encounter any problems,
but I couldn't find an Oracle Support document to back it up; that's why the risk is yours.
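A minimal sketch of such a resize, assuming the repository's ACFS filesystem is mounted under /u01/app/sharedrepo/vmrepo1 (the path and size are hypothetical, and since this is not backed by a support note, try it on a test system first):

```
# grow the ACFS filesystem backing the repository by 100 GB
acfsutil size +100G /u01/app/sharedrepo/vmrepo1
```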
Anyway, we imported our templates into the repository dedicated to templates, and we created virtual machines from these templates in the repository created for our virtual machines. (oakcli import vmtemplate ...)

Our goal was to build a little virtualized EBS farm, so we sized our repositories based on the EBS requirements.

The following table lists the minimum requirements for the EBS 12.2.3 templates:

Virtual Appliance                                                   Virtual Disk   Disk Space    RAM (GB)  VCPUs  Domain Type
                                                                    Size (GB)      Used (GB)
-----------------------------------------------------------------   ------------   -----------   --------  -----  -----------
Oracle E-Business Suite Release 12.2.3 Vision Demo Database Tier    300            189           2         1      HVM
Oracle E-Business Suite Release 12.2.3 Production Database Tier     300            93            2         1      HVM
Oracle E-Business Suite Release 12.2.3 Application Tier             300            67            6         1      HVM
Oracle E-Business Suite Release 12.2.3 Single Node Vision Install   500            256           6         1      HVM
Oracle E-Business Suite Release 12.2.3 Sparse Tier/OS Install       300            2             6         1      HVM


Okay, let's have a closer look at the template operations.
Once we had set up our VM repository environments, we were ready to import our EBS templates.
For EBS 12.2, we downloaded the 12.2.3 templates from Oracle's website, edelivery.oracle.com/oraclevm.
We unzipped the downloaded parts and concatenated them to obtain one single ova file for each template, and we imported them using oakcli.
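As a sketch, the reassembly looks like this (the part names are hypothetical stand-ins for the actual edelivery file names; printf creates dummy pieces here just to make the example self-contained):

```shell
# Stand-ins for the pieces extracted from the downloaded zips
# (in real life these come from unzipping each downloaded part):
printf 'part0' > Oracle-E-Business-Suite-PROD-12.2.3.ova.0
printf 'part1' > Oracle-E-Business-Suite-PROD-12.2.3.ova.1

# The actual step: concatenate the pieces, in order, into one importable .ova
cat Oracle-E-Business-Suite-PROD-12.2.3.ova.0 \
    Oracle-E-Business-Suite-PROD-12.2.3.ova.1 \
    > Oracle-E-Business-Suite-PROD-12.2.3.ova
```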
After the import, we cloned virtual machines from these templates. Once the cloning process was done, we were ready to configure and start our virtual machines. Configuration is needed after cloning, because we needed to set the memory and CPU limits of these virtual machines.
We used the oakcli configure and oakcli modify commands to configure them according to our requirements, and that's it.
Examples of using oakcli commands are below;

To import a Template:
oakcli import vmtemplate EBS_12_2_3_PROD_DB -assembly /OVS/EBS/Oracle-E-Business-Suite-PROD-12.2.3.ova -repo vmtemp2 -node 0

To list the available Templates:
oakcli show vmtemplate

To list the Virtual Machines:
oakcli show vm

To list the repositories:
oakcli show repo

To start a virtual machine:
oakcli start vm EBS_12_2_3_VISION

To create a Repository (size in gb, by default):
oakcli create repo vmrepo1 -dg data -size 2048

To configure a Virtual Machine (CPU, memory, etc.):
oakcli configure vm EBS_12_2_3_PROD_APP -vcpu 16 -maxvcpu 16 
oakcli configure vm EBS_12_2_3_PROD_APP -memory 32768M -maxmemory 32768M

To open a console for a virtual machine (VNC required):
oakcli show vmconsole EBS_12_2_3_VISION

To create a virtual machine from a template:
oakcli clone vm EBS_12_2_3_PROD_APP -vmtemplate EBS_12_2_3_PROD_APP -repo vmrepo1 -node 1

After this point, we could start our virtual EBS servers.
Note that all the oakcli operations were done from the ODA Base machines, I mean the virtual machines which host the ASM instances (not from the Dom0 machines).

We first started with EBS 12.2.3 Vision, which was a single-server installation in our case. When we booted EBS 12.2.3 Vision, it did not prompt us to configure it automatically; therefore, we used the following scripts to configure it.
Note that in the Vision 12.2.3 templates, the OS account "oracle" is the owner of both the application tier and the database tier.
We first configured the IP addresses and hostnames using the script named configstatic.sh. (There was a script named configdhcp.sh for DHCP configuration, but in our case we needed a static IP configuration.)
The configstatic.sh path is as follows;
/u01/install/scripts/configstatic.sh

Note that there is a bug in configdhcp.sh: an "unexpected fi" at line 62.
Therefore, you need to modify the script and add the missing "then" to the if..then..fi statement before using it.

Okay, after configuring our network, we used the visiondbconfig.sh script to configure our database environment (for setting the SID, etc.):
/u01/install/VISION/scripts/visiondbconfig.sh

After configuring the db tier, we used visionappsconfig.sh to configure the apps tier, I mean to connect our application services to the database we had just configured. We always used root for running these scripts:
/u01/install/VISION/scripts/visionappsconfig.sh
Note that all these scripts are documented in: Oracle VM Virtual Appliances for Oracle E-Business Suite Deployment Guide, Release 12.2.3 (Doc ID 1620448.1)

We did these configurations for our PROD, TEST, DEV, and similar environments.
In general, the configurations were similar, but the script names were different, as they were different templates (Fresh & Vision).
You can find the configuration scripts for the EBS 12.2.3 Fresh/PROD templates below;

The scripts to manage the Oracle E-Business Suite PROD instance are:

SCRIPTS BASE_DIR : /u01/install/PROD/scripts/
START SCRIPT : /u01/install/PROD/scripts/startproddb.sh
STOP SCRIPT : /u01/install/PROD/scripts/stopproddb.sh
DB VM RECONFIG SCRIPT : /u01/install/PROD/scripts/prodbconfig.sh
DB VM CLEANUP SCRIPT : /u01/install/PROD/scripts/proddbcleanup.sh

The scripts to manage the Oracle E-Business Suite application tier instance are:

SCRIPTS BASE_DIR : /u01/install/APPS/scripts/
START SCRIPT : /u01/install/APPS/scripts/startapps.sh
STOP SCRIPT : /u01/install/APPS/scripts/stopapps.sh
APPS VM RECONFIG SCRIPT : /u01/install/APPS/scripts/appsconfig.sh
APPS VM CLEANUP SCRIPT : /u01/install/APPS/scripts/appscleanup.sh
CONFIGURE A NEW WEB ENTRY POINT : /u01/install/scripts/configwebentry.sh

Note that, while importing the EBS 12.2 PROD database template into a virtualized Oracle Database Appliance (ODA) platform, you may get an OAKERR:7044 error.

Example command:
oakcli import vmtemplate EBS_12_2_3_PROD_DB -assembly /OVS/EBS/Oracle-E-Business-Suite-PROD-12.2.3.ova -repo vmtemp2 -node 0
Error:
OAKERR:7044 Error encountered during importing assembly - Cannot find OVF descriptor file.

This error is caused by the files inside the ova archive. That is, oakcli can't handle the situation where the name of a file packaged in the ova contains a space character.
As follows;
tar -tvf Oracle-E-Business-Suite-PROD-12.2.3.ova.problem
-rw------- someone/someone 13485 2014-02-07 09:16:36 Oracle-E-Business Suite-PROD-12.2.3.ovf
-rw------- someone/someone 17032424448 2014-02-07 10:22:42 Oracle-E-Business Suite-PROD-12.2.3-disk1.vmdk

As you see above, the names of both the ovf and the vmdk contain spaces. That's why oakcli can't handle them and throws the "Cannot find OVF descriptor file" error.

To fix this, you may use the tar command on Linux, or a similar tool depending on your platform.
Here is what I did to resolve the problem;

Solution:
1) First, I extracted the files into a directory called erman_ova_1:
tar xvf Oracle-E-Business-Suite-PROD-12.2.3.ova -C erman_ova_1/
2) Then I renamed the files so that they no longer contain any spaces:
cd erman_ova_1/
mv Oracle-E-Business\ Suite-PROD-12.2.3.ovf Oracle-E-Business-Suite-PROD-12.2.3.ovf
mv Oracle-E-Business\ Suite-PROD-12.2.3-disk1.vmdk Oracle-E-Business-Suite-PROD-12.2.3-disk1.vmdk
3) I also opened the ovf file and changed all the file names containing spaces. Note that the name of the disk file to be imported is also written in the ovf file.
4) Then I renamed the original ova file so as not to overwrite it:
mv Oracle-E-Business-Suite-PROD-12.2.3.ova Oracle-E-Business-Suite-PROD-12.2.3.ova.problems
5) Lastly, I recreated the ova file from the directory where I had fixed the filenames:
cd erman_ova_1/
tar -cvf /OVS/EBS/Oracle-E-Business-Suite-PROD-12.2.3.ova Oracle-E-Business-Suite-PROD-12.2.3.ovf Oracle-E-Business-Suite-PROD-12.2.3-disk1.vmdk
Note that the ovf file should be packaged first.
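To avoid renaming the files one by one, the rename step can also be scripted; a minimal sketch (the file names are hypothetical, and sample files are created here only to make the example self-contained):

```shell
# Sample files with spaces in their names, standing in for the extracted ovf/vmdk:
mkdir -p ova_fix_demo && cd ova_fix_demo
touch 'Oracle-E-Business Suite-PROD-12.2.3.ovf' \
      'Oracle-E-Business Suite-PROD-12.2.3-disk1.vmdk'

# Replace every space in each matching file name with a dash
for f in *" "*; do
    mv "$f" "$(printf '%s' "$f" | tr ' ' '-')"
done
```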

So, after importing the templates and creating the virtual machines, we had a virtualized ODA environment as follows;

As you see above, all of our EBS virtual machines use ASM to communicate with the storage.
The VMs which were deployed with ASM are present on each ODA node,
and their ASM instances manage the shared repos that reside on the storage server of the ODA.
This also brought the opportunity to start a virtual machine on the other ODA node if its current node fails. On the other hand, a dependency was created: our VMs became dependent on the VMs which host the ASM instances, so we need to start those first in order to start our EBS virtual machines.

In the ODA X4, the PCI cards that connect to the disk storage are passed through; that is, the main machines are bypassed, so the ASM VM nodes communicate with the disks almost directly.
This seems to be an improvement in IO performance. Also, the disks are SAS based, so we have 4 fault-tolerant SAS connections to the storage device (2 from each node).
Also note that fiber is used for the interconnect.
The ODA compute nodes have 24 cores each; with hyperthreading, that makes 48 logical cores per node.
You can find the information as it is seen from the OS below.
Note that if we know the power of the underlying hardware, we can configure our nodes better, to achieve the best performance and utilization. Here are the specs of an ODA X4-2 node: in a node, we have 2 CPUs with 12 cores each and 2 threads per core, so Hyper-Threading is enabled. This makes 48 virtual cores.
Also, we have 256 GB of memory per node, and we use the 2.6.39-400 UEK SMP kernel with Xen.
Below are the hardware stats gathered from an ODA node;

release : 2.6.39-400.126.1.el5uek
version : #1 SMP Fri Sep 20 10:54:38 PDT 2013
machine : x86_64
nr_cpus : 48
nr_nodes : 2
cores_per_socket : 12
threads_per_core : 2
cpu_mhz : 2693
hw_caps :
virt_caps : hvm hvm_directio
total_memory : 262086
free_memory : 192913
free_cpus : 0
xen_major : 4
xen_minor : 1
xen_extra : .3OVM
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : unavailable
xen_commandline : dom0_mem=4096M crashkernel=256M@64M
cc_compiler : gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)
cc_compile_by : mockbuild
cc_compile_domain : us.oracle.com
cc_compile_date : Mon Feb 4 16:40:15 PST 2013
xend_config_format : 4

In conclusion, even with ASM-backed repositories, I can say that after configuring and starting our EBS servers, we haven't seen any anomalies. The performance is acceptable and the environments are stable. Also, the management of the virtualized platform is quite easy, even without Oracle VM Manager.


Okay, that's all I have to say for now.

Oh, I almost forgot one more thing, which is a little important :)

The EBS 12.2.3 Fresh (PROD) templates come with the US7ASCII character set. That is, if you are in a country which has special characters in its alphabet, you are in trouble :) You will not even be able to license your language with that character set. (Note that there is no such problem in the VISION template, because it comes with UTF8.)
So, if the installation is already done, the best option is to change the character set by following the document below:
Migrating an Applications Installation to a New Character Set (Doc ID 124721.1)

Oracle suggests that you perform 2 test iterations before doing the same in production.

Thursday, June 26, 2014

ODA- Oracle Database Appliance X4-2 node information

If we know the power of the underlying hardware, we can configure our nodes better, to achieve the best performance and utilization.
Here are the specs of an ODA X4-2 node. Note that, in a node, we have 2 CPUs with 12 cores each and 2 threads per core, so Hyper-Threading is enabled. This makes 48 virtual cores.
Also, we have 256 GB of memory per node, and we use the 2.6.39-400 UEK SMP kernel with Xen.

release                : 2.6.39-400.126.1.el5uek
version                : #1 SMP Fri Sep 20 10:54:38 PDT 2013
machine                : x86_64
nr_cpus                : 48
nr_nodes               : 2
cores_per_socket       : 12
threads_per_core       : 2
cpu_mhz                : 2693
hw_caps                :
virt_caps              : hvm hvm_directio
total_memory           : 262086
free_memory            : 192913
free_cpus              : 0
xen_major              : 4
xen_minor              : 1
xen_extra              : .3OVM
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
xen_commandline        : dom0_mem=4096M crashkernel=256M@64M
cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)
cc_compile_by          : mockbuild
cc_compile_domain      : us.oracle.com
cc_compile_date        : Mon Feb  4 16:40:15 PST 2013
xend_config_format     : 4

ODAVM: How to Move a User VM from One Node to another

In order to move a VM from one node to another on the ODA, we need to follow the actions below;

Firstly, we stop the VM that we want to move; it should be offline.
oakcli stop vm VM_MACHINE_NAME_WE_WANT_TO_MOVE

Then, we connect to the Dom0 of the node where the VM resides and create a tgz file using tar.
cd /OVS/Repositories/odarepo2/VirtualMachines/VM_MACHINE_NAME_WE_WANT_TO_MOVE
tar -zcvf /OVS/VM_MACHINE_NAME_WE_WANT_TO_MOVE-vm.tgz *img *cfg

After that, we connect to the Dom0 of the node that we want our VM to be moved to,
and import the tgz we just created into our repo as a vmtemplate;
oakcli import vmtemplate OVM_VM_MACHINE_NAME_WE_WANT_TO_MOVE -files /OVS/VM_MACHINE_NAME_WE_WANT_TO_MOVE-vm.tgz -repo OUR_REPO

Lastly, we clone our VM from the template on the Dom0 of the node we want it moved to, and start the VM:
oakcli clone vm clone-OVM_VM_MACHINE_NAME_WE_WANT_TO_MOVE -vmtemplate OVM_VM_MACHINE_NAME_WE_WANT_TO_MOVE -repo OUR_REPO
oakcli start vm clone-OVM_VM_MACHINE_NAME_WE_WANT_TO_MOVE

If we encounter the following error while starting the vm;
OAKERR:7019 Unable to start VM because cpu pool default-unpinned-pool is empty
then we configure the VM before starting it, and assign a CPU pool to it:
oakcli configure vm clone-OVM_VM_MACHINE_NAME_WE_WANT_TO_MOVE -cpupool OUR_CPU_POOL

Monday, June 23, 2014

OVM -- oakcli command examples

To import a Template:
oakcli import vmtemplate EBS_12_2_3_PROD_DB -assembly /OVS/EBS/Oracle-E-Business-Suite-PROD-12.2.3.ova -repo vmtemp2 -node 0

To list the available Templates:
oakcli show vmtemplate

To list the Virtual Machines:
oakcli show vm

To list the repositories:
oakcli show repo

To start a virtual machine:
oakcli start vm EBS_12_2_3_VISION

To create a Repository (size in gb, by default):
oakcli create repo vmrepo1 -dg data -size 2048

To configure a Virtual Machine (CPU, memory, etc.):
oakcli configure vm EBS_12_2_3_PROD_APP -vcpu 16 -maxvcpu 16 
oakcli configure vm EBS_12_2_3_PROD_APP -memory 32768M -maxmemory 32768M

To open a console for a virtual machine (VNC required):
oakcli show vmconsole EBS_12_2_3_VISION

To create a virtual machine from a template:
oakcli clone vm EBS_12_2_3_PROD_APP -vmtemplate EBS_12_2_3_PROD_APP -repo vmrepo1 -node 1

OVM -- resizing repository (ACFS)

It seems it is possible to resize an OVM repository. Although I couldn't see an oakcli argument for that, it can be done by resizing the underlying IO device. In the case of an ACFS filesystem, resizing the OVM repository means resizing the underlying ACFS filesystem, which can be done using the asmca or acfsutil tools.

Note that I have done this and didn't encounter any problems,
but I couldn't find an Oracle Support document to back it up; that's why the risk is yours.

OVM-- EBS 12.2 DB template -- OAKERR:7044 Error encountered during importing assembly - Cannot find OVF descriptor file

While importing the EBS 12.2 PROD database template into a virtualized Oracle Database Appliance (ODA) platform, you may get an OAKERR:7044 error.

Example command:
oakcli import vmtemplate EBS_12_2_3_PROD_DB -assembly /OVS/EBS/Oracle-E-Business-Suite-PROD-12.2.3.ova -repo vmtemp2 -node 0

Error:
OAKERR:7044 Error encountered during importing assembly - Cannot find OVF descriptor file.

This error is caused by the files inside the ova archive. That is, oakcli can't handle the situation where the name of a file packaged in the ova contains a space character.

As follows;

tar -tvf Oracle-E-Business-Suite-PROD-12.2.3.ova.problem
-rw------- someone/someone 13485 2014-02-07 09:16:36 Oracle-E-Business Suite-PROD-12.2.3.ovf
-rw------- someone/someone 17032424448 2014-02-07 10:22:42 Oracle-E-Business Suite-PROD-12.2.3-disk1.vmdk

As you see above, the names of both the ovf and the vmdk contain spaces. That's why oakcli can't handle them and throws the "Cannot find OVF descriptor file" error.

To fix this, you may use the tar command on Linux, or a similar tool depending on your platform.
Here is what I did to resolve the problem;

Solution:
1) First, I extracted the files into a directory called erman_ova_1:
tar xvf Oracle-E-Business-Suite-PROD-12.2.3.ova -C erman_ova_1/
2) Then I renamed the files so that they no longer contain any spaces:
cd erman_ova_1/
mv Oracle-E-Business\ Suite-PROD-12.2.3.ovf Oracle-E-Business-Suite-PROD-12.2.3.ovf
mv Oracle-E-Business\ Suite-PROD-12.2.3-disk1.vmdk Oracle-E-Business-Suite-PROD-12.2.3-disk1.vmdk
3) I also opened the ovf file and changed all the file names containing spaces. Note that the name of the disk file to be imported is also written in the ovf file.
4) Then I renamed the original ova file so as not to overwrite it:
mv Oracle-E-Business-Suite-PROD-12.2.3.ova Oracle-E-Business-Suite-PROD-12.2.3.ova.problems
5) Lastly, I recreated the ova file from the directory where I had fixed the filenames:
cd erman_ova_1/
tar -cvf /OVS/EBS/Oracle-E-Business-Suite-PROD-12.2.3.ova Oracle-E-Business-Suite-PROD-12.2.3.ovf Oracle-E-Business-Suite-PROD-12.2.3-disk1.vmdk

Note that the ovf file should be packaged first.

Sunday, June 22, 2014

ASM -- Dropping Aliases causes ORA-01157 on database startup

You may use ASM file aliases for the datafiles, controlfiles, and redo log files of your database configured with ASM. However, one day you may want to drop them.
This may be the case if you are using EBS 12.2, for instance: the EBS 12.2 installer now uses RMAN to restore its preconfigured database at install time, and it defines aliases for the files if you are using an ASM instance.

So, if you want to drop the ASM aliases, first check v$datafile, v$controlfile, v$log, and your init.ora.

Check them if you don't want to be surprised after you drop these aliases and restart your database.
What I am trying to say is, you may end up with "control file not found" or ORA-01157 errors during database startup.

Okay, let's explain the reason for these errors.

The control file error is basically caused by the init.ora/spfile: when you are using aliases in the control_files init parameter, these errors are expected, and the solution is simple.
As you may guess, you just need to modify your init.ora/spfile to reflect the changes in the filenames.
In other words, you just change your control_files parameter to point to the system-generated aliases rather than the user-created aliases you just dropped.
Note that the ASM file names we normally see, like system.dbf.092198, are also aliases, but they are system-generated aliases.

Okay, so what about the ORA-01157 errors then?
The MOS document "ORA-01157 on Database Startup After Dropping an Alias (Doc ID 444151.1)" explains the problem, but I didn't like the solution defined there.

You may check your v$datafile view to see/understand the problem here.
As you know, v$datafile reports the datafile information recorded in the control file.
The problem is basically that your controlfile still holds the user-defined aliases for the ASM files; that's why your instance refers to the datafiles and redo log files by their user-defined alias names and not the system-generated ones. Because of the aliases written in your controlfile, you end up with ORA-01157, as your instance will not be able to find the datafiles through the aliases, which have just been dropped.

So, there are three solutions for fixing this:

1) You can recreate the aliases :) The MOS document above suggests that, but I don't like it. "I want to drop the aliases :)"

2) You can mount your database and rename your files, I mean modify the filenames from the dropped aliases to the system-generated aliases. I don't like that either, because you need to know all the aliases and their associated system-generated names; it is exhausting.

3) You can recreate your controlfile without resetting the logs (NORESETLOGS). This is my favorite: you can generate a controlfile creation script (which can be gathered from the instance in mount mode), change the datafiles and redo log files in the creation script to the system-generated aliases (just run an ls in the ASM filesystem and copy & paste the output), and run it. When you run the script, your controlfile will be recreated and it will contain the system-generated aliases; thus, you can open your instance without any problems. In my opinion, this option is the easiest of all.
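A rough sketch of option 3 follows (all paths and file names here are placeholders; this assumes a sysdba login and should be rehearsed on a test system first):

```
# 1) From the mounted instance, dump a controlfile creation script:
sqlplus / as sysdba <<'EOF'
alter database backup controlfile to trace as '/tmp/recreate_cf.sql';
EOF

# 2) Edit /tmp/recreate_cf.sql: keep the CREATE CONTROLFILE ... NORESETLOGS
#    branch, and replace the dropped user aliases with the system-generated
#    file names (run ls in the ASM filesystem and copy & paste the output).

# 3) Recreate the controlfile and open the database:
sqlplus / as sysdba <<'EOF'
startup nomount
@/tmp/recreate_cf.sql
alter database open;
EOF
```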

I felt the need to write this article because there is no information about this specific subject in MOS. Only the document above describes it, and I believe that document needs to be modified, too.


Thursday, June 19, 2014

12.2 -- Online Patching , Revolutionary Advancement explained in detail

The new EBS release, 12.2, has started to be used in new implementations. It is still quite new, but even so it is preferred in new EBS projects, as it brings a quite new technological architecture and big advancements. In the last 2 months, we have done 4 different EBS 12.2 installations & upgrades (including 1 VM template import operation), and I can say that we have met the new technology and we are glad for the new features, although getting used to the new EBS Apps DBA capabilities was a little challenging.
Almost the entire middleware architecture is different, the installation capabilities are advanced, and patching on EBS 12.2 is a lot different. We use different tools for several administration activities and we have new tools for monitoring them. Also, we no longer use a lot of things which were mandatory in the previous releases, because they are obsolete now.
Today, I will write about one of these new features in EBS: Online Patching. Online Patching is a big advancement in EBS, and it brings a completely new patching architecture. It is a bit tricky and complex, but it brings a lot of advantages.
In one of my previous posts, I already wrote about Online Patching in EBS 12.2, and I mentioned the new patching tool, adop:
http://ermanarslan.blogspot.com.tr/2014/03/ebs-122-applications-dba-online.html
In that post, I didn't go into the details of Online Patching. As the days went by, I felt the need to go into those details, because online patching is a process that we execute often as Apps DBAs, and that's why I think we need detailed knowledge about what's going on behind the scenes while we use adop to patch our EBS environments.



Okay, let's start.
Online patching is a big advancement. Actually, it is a revolutionary advancement addressed at the patch downtime problem, which is a major concern for EBS customers.
It brings a new concept called the patching cycle to provide its functionality. Maintenance mode is obsolete now, as all patching activities are done online. Patching a module does not cause the entire system to be down anymore.
We patch our environment online, without affecting the running code, and when we need to switch the patched environment with the running environment, we take our downtime, which is measured in minutes, and commit our changes.
So we still have a little downtime, but it is just enough to cut things over; it is measured in minutes, and it can be at any time we want.
We can apply our patches during the day without affecting the application and then cut over in the evening during our maintenance window. That means we can apply a big legislative patch while our payroll is running.

So how is it possible?
To provide online patching, Oracle uses a bunch of technologies, Edition-Based Redefinition being the biggest of them. Note that WebLogic also contributes.

Basically, patches are applied to a copy of the production system, actually a copy of the production code. Thus, users are unaware of the patching operations, as the patches are applied to the copy.
Okay, when we say a copy of production, we only mean a copy of the code, not a copy of the data.
We have code on the filesystem and in the database, so in online patching any code object changed by a patch is copied. I repeat: application data is not copied in online patching operations.

As I said in the beginning, we switch our environments to commit the changes made during a patch application. The cutover phase is that switch: it is the time we switch from production to the newly patched copy, and that's where we have the downtime (measured in minutes).
We have downtime in the cutover phase because our middleware services are restarted from the patch environment; they are restarted to be able to use the new code. So you can consider this downtime as the time needed to restart the middle tier. The downtime is felt like a logoff, like a disconnection:
I mean, when we cut over, the users are logged off, and when they reconnect, they are directed to the newly patched system.
Okay, let's explore the patching architecture a little bit deeper.
In EBS 12.2 we have a secondary filesystem. It is actually a copy, and it is kept synchronized by the patching tools.
On the other hand, on the database tier, we have a single database.
We don't have a copy of the database to support online patching; however, copies are made of the code objects in the database. That is, a separate copy is maintained for each database code object which is changed by a patch. This feature supplies the online patching on the database tier, and it is the actual revolution.
In online patching, when we start to apply a patch, the patching tools and the online users start to share the system resources. That's why patching can take a longer time compared with traditional patching. However, in online patching the system remains online, so the patching time is not that important.
Note that in 12.1 we had an option named staged APPL_TOP. With this option, the system could be online while patching; on the other hand, when it came to database updates, the system had to be offline again.
Note that the staged APPL_TOP was the basis for EBS 12.2's online patching.

In 12.2, we actually have 3 filesystems, not 2: fs1, fs2, and fs_ne.
fs1 is the run (production) filesystem and fs2 is the patch filesystem (they change roles after each cutover).
fs_ne is the non-editioned filesystem. It contains log files, report outputs, and data import/export files, and it is not synchronized during patching activities. Since all the data that is to be written to the filesystem is written to fs_ne, we don't need to worry about copying filesystem data, as it is already in fs_ne.

So we have 3 filesystems, and they connect to the same database. The run edition filesystem (let's say fs1) only reads code and data definitions; the patch filesystem reads and writes data in both the filesystem and the database.



Okay, let's examine the db tier, starting with Edition-Based Redefinition (EBR).
EBR is an isolation mechanism that enables online upgrades of the db tier. It allows an application to store its application definition in different editions, so we can have 2 copies of the same object code in a single database. Changes to the database objects are made in the isolation of an edition, and the edition defines what you view; the client code chooses the edition (run or patch).
In EBS 12.2, we always have a run edition, which is used by production.
We have a patch edition, which is present only while a patching cycle is active; the patching tools use it.
We can also have one or more old editions. Old editions come into existence after a patch edition becomes the run edition, and they are removed when we execute cleanup.
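To make the edition concept concrete, here is a minimal sketch of how editions can be inspected from SQL*Plus. This is not an EBS-specific procedure; it assumes a database where EBR is in use, the credential is a placeholder, and ORA$BASE is just the default root edition name:

```shell
# Hedged sketch: list the editions in the database and switch the session edition.
# system/manager is a placeholder credential; requires a live Oracle database.
sqlplus -s system/manager <<'EOF'
-- run, patch and old editions all appear here
SELECT edition_name, parent_edition_name FROM dba_editions;
-- a client session chooses which edition it sees
ALTER SESSION SET EDITION = ora$base;
EOF
```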

The patching cycle consists of 5 phases:
Prepare -> Apply -> Finalize -> !!Cutover!! -> Cleanup
<---------- Abort (possible until Cutover) ---------->


These are actually the phases of the adop tool.
Basically, the prepare phase is the copy phase; the apply phase is where one or more patches are applied; the finalize phase is where we finalize things, compiling objects and so on.
The cutover phase is where we switch, and the cleanup phase is where we clean up the mess.
We also have the abort phase, which is an option to abort a patch application. We can abort one or more patch applications at any point until we perform cutover.
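The five phases above map directly to adop invocations. A typical full cycle, run as the applications owner with the run edition environment sourced, can be sketched like this (the patch number is a placeholder):

```shell
adop phase=prepare                  # sync the patch FS, create the patch edition
adop phase=apply patches=12345678   # apply one or more patches to the patch edition
adop phase=finalize                 # compile objects, pre-compute cutover DDL
adop phase=cutover                  # restart the middle tier on the patched copy (downtime)
adop phase=cleanup                  # drop the obsolete editioned objects
# at any point before cutover, the cycle can be abandoned:
# adop phase=abort
```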

Let's continue with the details of the adop phases:

Prepare:

Synchronization of the patch and run filesystems. (Oracle doesn't blindly copy everything; it is an incremental sync.) Only files changed after the last patch are copied, so Patch FS = Run FS: they become synced.
A new patch edition is created in the db, with a copy made for every code object. They are virtual copies, though: the copied objects are actually pointers, and they don't consume space. Only when an object is patched does it become real/actualized, and only then does it consume space.
For example, when you change a view or a package, it becomes real and occupies space inside the patch edition.
Note that objects like tables and indexes are not copied in online patching, as they are non-editioned objects!

Apply:

adop runs adpatch to apply one or more patches.
Patches are applied to the patch edition. Normally, in an EBS patch, we have an FS patch driver and a db patch driver in the backend. The FS patch driver uses the patch filesystem (let's say fs2), and the db driver uses the patch edition created in the prepare phase.

Finalize:
It is just a staging point. We compile invalid objects, generate derived objects, and pre-compute the DDL to be run at cutover, getting ready for the switch.
We can pause at this stage as long as we want; we can wait here for the appropriate downtime window for the cutover.

Cutover: (downtime in minutes)

It restarts the application on the patched copy.
Oracle cuts over to the patch edition of the FS, and cuts over to the patch edition of the db.
Oracle (adop) restarts the middle tier services.
Users are logged off when the middle tier is stopped. The Run FS becomes the Patch FS (and vice versa), and the Run db edition becomes the Patch db edition (and vice versa).
In this stage, a final maintenance is performed, too.
At the end, the users are brought back online on the patched system.

Cleanup:

We do the cleanup to keep the SYSTEM tablespace clean. In this phase, adop deletes the unnecessary editioned objects on the db tier.
Once all objects have been deleted from an old edition, the old edition itself is deleted, too.
The filesystems are resynchronized. (Oracle doesn't actually delete any files on the FS.)

Abort:

It basically aborts the patching cycle. We can abort during or after the prepare, apply, or finalize phases. On the other hand, we can't abort once we have cut over.

Okay, let's go a little deeper and discuss the EBR techniques that Oracle uses to deliver the new online patching feature.
Basically, we have editioned and non-editioned objects in our databases.
Editioned objects can be: PL/SQL package specs and bodies, procedures, views, triggers, editioning views, types and synonyms.
Non-editioned objects are: tables, indexes, materialized views and sequences. These objects are non-editioned because they are storage objects.
(Remember: we only copy the application definition.)

Although we have non-editioned objects in our databases, Oracle still delivers this online patching... So how?
adop uses advanced features of EBR to operate on non-editioned objects: editioning views, cross-edition triggers and editioned data storage in the backend.
Note that non-editioned objects can't reference editioned objects; I mean, for example, a table that has an editioned type defined on one of its columns. This kind of reference cannot be tolerated in 12.2, so we don't have such references in 12.2.

So, in the case of tables, by using editioning views, we can see the tables in different shapes according to the edition. It is important to know that all the code which accesses EBS should use these editioning views, because these views are the key to online patching. So we don't copy the tables; we alter them in ways that do not interrupt the running application.

Suppose a patch wants to make an alteration to a table; for example, the patch wants to alter a column.
In the online patching architecture, the patch doesn't alter the actual column; it just adds a new column with the desired attributes. This is because we don't want our applications to be affected by this change. The running application still uses the old/run format of the table, and that's why it is not affected. Note that the application can use the old format without any user intervention, because it uses an editioning view to reach that table.

What are those editioning views in APPS?
We now have synonyms defined in APPS for the tables. These synonyms point to the editioning views, and the editioning views point to the actual tables.
So, for example:
WF_ITEMS is the synonym -> WF_ITEMS# is the editioning view -> WF_ITEMS is the actual table.
That's why all code reaching EBS should use these synonyms.
If we reach the table directly (physically), we may end up with the wrong structure for the table we want to operate on; I mean, we may still see old columns which were obsoleted by a patch.
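The synonym -> editioning view -> table chain can be traced from the data dictionary. A hedged sketch (apps/apps is a placeholder credential, and this of course requires an EBS database):

```shell
sqlplus -s apps/apps <<'EOF'
-- the APPS synonym points at the editioning view
SELECT table_owner, table_name FROM all_synonyms
 WHERE owner = 'APPS' AND synonym_name = 'WF_ITEMS';
-- WF_ITEMS# is the editioning view; APPLSYS.WF_ITEMS is the real table
SELECT owner, object_name, object_type FROM all_objects
 WHERE owner = 'APPLSYS' AND object_name IN ('WF_ITEMS', 'WF_ITEMS#');
EOF
```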

The following describes what I tried to explain, more clearly;

So suppose we want to alter a column named ID, changing its type from varchar2(10) to varchar2(50).
Oracle adds the change as a new column to keep the patching online. So during the patch cycle, the patch edition views the table's columns as name, surname and id2, but the run edition still views the table's columns as name, surname and id. This keeps the run edition online while a patch that changes the table definition is being applied.




It is also worth explaining the cross-edition triggers, as they are one of the major features that contribute to the online patching evolution in EBS. These triggers work integrated with EBR.
Basically, during a patch application, if a row is inserted or updated in the run edition, these triggers fire; when they fire, they write the data into the columns defined in the patch edition.
Consider the following scenario for the table above: what happens if a new ID is inserted into the ID column by the running application while we are patching?
By using cross-edition triggers, we save this change in the newly added ID2 column as well.
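In plain EBR terms, this is a forward crossedition trigger. The following is only a hedged sketch with hypothetical table and column names matching the ID/ID2 example above; in a real patch, adop generates such triggers for you in the patch edition:

```shell
sqlplus -s apps/apps <<'EOF'
CREATE OR REPLACE TRIGGER xx_person_fwd_xed
  BEFORE INSERT OR UPDATE OF id ON xx_person
  FOR EACH ROW
  FORWARD CROSSEDITION
BEGIN
  -- propagate the run-edition value into the column the patch edition uses
  :NEW.id2 := :NEW.id;
END;
/
EOF
```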

Furthermore, there is another feature used behind the scenes that makes online patching possible: Editioned Data Storage. Editioned Data Storage allows an online patch to modify the seed data. Seed data means seeded data, such as the data behind the Self Service UI, for example.
So, although we have said that only the application code is copied during an online patching operation, the seed data is an exception to that: it is copied, too.
That is, if we have seed data in a table, Oracle copies the data within the same table, and the patch operates on that copy. This copy is retained until we run cleanup.


To deliver this functionality, Oracle uses VPD policies and similar technologies together.
In the meanwhile, when the application changes seed data that is used by the run filesystem, the change is synced to the copy portion as well.
For example, if a profile option is changed while a patch is operating on the relevant seed table, this change is synced to the copy portion which the patch is operating on.
Note that this is, of course, a one-way synchronization: there is no synchronization from the patch/copy portion back to the run portion.

Okay, up to this point I have tried to explain the online patching architecture in general and to give some details about the underlying technologies which contribute to the online patching feature of EBS 12.2.

Lastly, I will mention some facts and benefits of online patching.

All the features that I have mentioned in this post come with the installation of 12.2. Oracle also provides these features for custom applications, if we register them properly.
These features, including online patching, come with the 12.2 upgrade, too.

As we have learned, we have a patch edition in online patching. It is important to remind you that the patch edition is not a testing environment! You will still need to run your tests on your test system.

As for the benefits: online patching removes the barriers to upgrading, because it is online.
It makes our downtime negotiations easier, because with online patching we have downtime measured in minutes.
It is also easier to quantify the downtime when using online patching, because the downtime basically means the restart of the middle tier.
My favorite benefit is that it is online. The application is online! :)

That's it for now. I hope you find this document useful.
Please feel free to contact me in case you have any questions or objections about the things I have written in this post.


References: Oracle E-Business Suite Technology Webcasts and Training

EBS 12.2 -- HR Organization Chart feature certification with Weblogic 12C

HR Organization Chart feature is not certified with Weblogic 12c.
I have raised an SR for this, and Development reported the situation as follows;

Here is the response from the Dev team in the bug 18895765 :
ATG team has not yet certified EBS-SDK session management with Weblogic 12c


So Weblogic 12c cannot be used as a deployment platform for the HR Organization Chart feature. We will be using 10.3.6, as always.

Monday, June 16, 2014

EBS 12.2 -- Patch: 17020683 ORA-02289: sequence does not exist

If you are upgrading from 12.2.0 to 12.2.3 and you applied the TXK and AD Delta 4 patches before upgrading to 12.2.3, you may encounter the "ORA-02289: sequence does not exist" error during the upgrade patch (17020683).

Actually, your patch can complete successfully, but you may find the following in your patch log file.

Starting phase 40 (A40): daa
AutoPatch error:
The following ORACLE error:
ORA-02289: sequence does not exist
occurred while executing the SQL statement:
SELECT to_char(APPLSYS.AD_TASK_STATUS_S.NEXTVAL) FROM SYS.DUAL

Unable to get the current value of sequence APPLSYS.AD_TASK_STATUS_S.
AutoPatch error:
adptaskPrepatchTiming: Error calling aiuoqg()
AutoPatch error:
aijpsp: Error calling adptaskPrepatchTiming.

There is an SR record in Oracle Support for the same error, although it was reported for a different activity. On the other hand, there was no patch information for fixing this issue, so we raised an SR for it.

The solution is to apply patch 18880325:R12.AD.C before the 12.2.3 upgrade patch (17020683).

So when you encounter this ORA-02289 error:
abort the patching cycle and start a new patching cycle;
grant select on applsys.ad_task_status_s to apps;
in this patching cycle, apply 18880325:R12.AD.C;
grant select on applsys.ad_task_status_s to apps;
check your relink log and ensure that there are no errors;
then apply 17020683 in the same patching cycle.
After these 2 patches are completed, complete your patching cycle with finalize, cutover, cleanup and fs_clone.
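Expressed as a hedged command sketch (EBS application-tier environment sourced; the sqlplus credential is a placeholder for an account that can run the grant):

```shell
adop phase=abort                   # abandon the failed cycle
adop phase=prepare                 # start a new cycle
echo "grant select on applsys.ad_task_status_s to apps;" | sqlplus -s system/manager
adop phase=apply patches=18880325  # the AD fix first
echo "grant select on applsys.ad_task_status_s to apps;" | sqlplus -s system/manager
adop phase=apply patches=17020683  # then the 12.2.3 upgrade patch, same cycle
adop phase=finalize
adop phase=cutover
adop phase=cleanup
adop phase=fs_clone
```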

Sunday, June 15, 2014

Database 12C-- You need Access to Information In Subseconds, "Oracle Database In-Memory"

Last year, I wrote an article about Larry Ellison's speech at Oracle Open World 2013.
http://ermanarslan.blogspot.com.tr/2013/09/database-12c-new-in-memory-option-flip.html
This year, Larry Ellison gave another presentation about the upcoming Oracle In-Memory option, and I followed it on YouTube. The presentation was so exciting and inspiring for me that I stuck to my tradition and prepared this blog post to put what was presented on paper.


The subjects of the presentation were the Oracle Database In-Memory option, its features, scalability, and the M6 machine. It was like the speech at Open World 2013, but I found it more focused and detailed. In addition, Larry talked about RDBMSs, SIMD instructions, and the general characteristics of other in-memory databases. You will also find real-life stories, examples, feedback, and demonstrations of the performance of the Oracle In-Memory option in the presentation. The stories, examples, and feedback are from real companies, real databases, and real applications.
That's why I strongly suggest you watch it...



So let's start. I will briefly cover the key points of the presentation.

  • Nowadays, memory (DRAM) is becoming cheaper. Thus, we should make more use of memory, because memory is fast.
  • Flash memory is new, but it has already started to replace hard disks. It is fast and persistent; thus, by using it, it is faster to reach the data through the I/O subsystem.
  • Networks have also sped up. InfiniBand is much faster than Ethernet, and the Oracle database and Oracle hardware use InfiniBand to accelerate everything.. reliably and economically...

The "In-Memory Option" puts the frequently accessed data in memory and operates on this data instantly.
  • Using the In-Memory option, you have 100x faster queries in both OLTP and DW. You also have a 2x faster OLTP environment without the need for a single change in your application.
  • The goals of the Oracle In-Memory option:
1) 100x SQL. This was a goal for the product, because it had been accelerated only 2x or 3x in the past. With 100x, Oracle brings real-time analytics.
2) Speeding up OLTP. This is a hard thing to do; normally, we would need to compromise the transactions.
3) Transparency. No application changes should be needed. It must be like "throw a switch and everything runs faster!". Nothing needs to be rewritten to take advantage of it.

  • There are 2 RDBMS formats out there:
1) Row-format DB. It is good for OLTP. It delivers fast processing of a few rows with many columns. It is also the traditional format of RDBMS databases, including Oracle.
2) Column-format DB. It is good for analytics/reports. Analytical DBs are normally column-organized; the queries take the columns and operate on them. It delivers fast processing of one or more columns with many rows.

  • The magic of Oracle 12c is that it stores the data in memory in both formats: row format and column format. Nothing is changed in the row format, by the way. This makes Oracle's new option transparent!! No changes are needed to use the new in-memory option. No export/import.. nothing.
  • Why is OLTP sped up? As there are 2 formats in memory, we have both a row cache and a column cache, and row-based OLTP operations work on the row cache. The question is "how can transactions go faster?". It comes to mind because, in this technology, there are two stores in memory and both of them should be updated, so there is additional work...
    Oracle explains this as follows:
    Normally we create 2 indexes for OLTP and 10 indexes (maybe more) for analytic queries. The indexes we create speed up the queries, but then our transactions slow down.
    Inserting a row into a table means: insert row + update index + update index... + update index. Maintaining those indexes is a very expensive operation and slows down OLTP.
    So what Oracle suggests here is actually dropping the analytical indexes, as the column store technology was developed to replace them.
  • We don't log anything for the column cache, and we use compression on the column cache. We only say, "I want this table in memory in column format".
  • Oracle uses SIMD CPU instructions: "Single Instruction, Multiple Data". This brings incredible speed for Oracle on SPARC and Intel. SIMD instructions were normally used for scientific work, and these days they are used for graphics acceleration. You can scan billions of rows per second per core.
  • With the In-Memory option, JOINs are turned into fast scans in memory: 10x faster joins.
  • Ad-hoc reports can be online using the In-Memory option. There is no need to prepare a cube; even if you don't have a cube for the data, this engine builds the outline ad hoc. That's why it is fast and doesn't require query anticipation.
  • Normally, when we build a columnar representation in the db, our OLTP operations slow down. With the In-Memory option of the Oracle Database, OLTP will not slow down anymore; in fact, OLTP operations will be sped up. This is explained by the Oracle Database having 2 caches: a row cache and a column cache. Of course, the cache maintenance work is doubled, but we no longer need any analytical indexes in this method, so dropping the analytical indexes is what speeds us up. Even in OLTP, we have tables with 20 or more indexes, because we run our analytical queries in OLTP environments (for example, E-Business Suite). So consider dropping those indexes: no updates will be needed to maintain them. It takes a long, long time to update those indexes, so the maintenance of the 2 caches in memory is insignificant when you compare it with the index maintenance.
  • So, you can use your transaction system as a warehouse!
  • In-Memory works in scale-out: part of it can be on machine 1, another part on machine 2, and so on.
  • Oracle's In-Memory does not require the entire database to be in memory. It is smart: it keeps the active part in memory, while the inactive part can be on flash or disk (tiering). So there is a memory hierarchy.
  • The column store is a cache (for the active data)! "As your active data is in memory, your database runs at the speed of memory." So you get the speed of memory, but the capacity of disk; it is economic, scalable and fast.
  • What about availability? What if a failure occurs in the memory system? What will happen to Oracle's In-Memory? You are protected against any kind of failure: node, site, corruption, human error, etc. Oracle keeps copies of the column cache on at least 2 nodes (like a disk RAID mirror), so if you lose a node, you are still online.
  • To implement the In-Memory option, you only need to declare:
inmemory_size = XX GB (how much column cache you want in memory)
alter table | partition ... inmemory, and in the case of OLTP, drop the analytical indexes!
That's it..
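As a hedged sketch of those two declarations (the size and the table name are examples; changing inmemory_size requires a restart to take effect):

```shell
sqlplus -s "/ as sysdba" <<'EOF'
-- reserve a column-store area in the SGA (effective after restart)
ALTER SYSTEM SET inmemory_size = 16G SCOPE = SPFILE;
-- mark a table (or a partition) for the in-memory column store
ALTER TABLE sales INMEMORY;
EOF
```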
  • Oracle In-Memory was released a little late, because Oracle wanted to deliver a lot of things with it. Oracle EBS, Fusion Apps, JD Edwards, PeopleSoft, Siebel, etc. run faster, more reliably and more economically with it.
  • Example, E-Business Suite:
A real company, real database and real application: a batch job which used to run for 58 hours completed in 13 minutes with the In-Memory option. You can't really wait 58 hours, but you can wait 13 minutes and continue your work. It is close to real time; it can be waited for.
  • Example: OTM (transportation management) runs 1030x faster.
  • So when you operate at this speed, you may change your processes. Your processes can be optimized to change on the fly according to the data that becomes available online.
  • Example: JD Edwards analytics on AR become 3500x faster.

"We have no idea. The old one never, ever finished." -> some customers' answer to the question "How much faster?"
:)

In summary,
  • Using the In-Memory option, you will have extreme performance in both analytics and OLTP at the same time. According to Oracle, no one else can do that at the moment.
  • Everything runs unchanged.
  • It is extremely available. The in-memory cache is mirrored, so it is highly available. When you test high availability, nothing happens :) Actually, this should be the result :)
  • It scales out in RAC. It runs on the M6-32 machine. Using the In-Memory option, Oracle places 3 trillion rows in the M6's memory, and it runs in real time.
  • We have never seen 3 trillion rows in 32TB of memory :)
  • When 1 node crashes, there will be a slowdown proportional to that node's share; that's expected, but there is no outage.

Lastly, Larry Ellison answered a question comparing scale-up and scale-out, i.e., which one is the better choice. The answer was as follows:
Scale-out is the trend right now, but this does not mean that scale-up will come to an end. Some applications can work better with scale-up.

Oracle Database In-Memory is scheduled to be generally available in July 2014.

Note that scale-up means using SMP boxes: adding RAM, adding CPU, or migrating to another, more powerful server. Scale-out means something like adding a node to the RAC.

That's all. I hope you find this post useful.

Thursday, June 12, 2014

Linux-- Environment modules example

You can set your environment dynamically using environment modules. Do not confuse environment modules with kernel modules. Environment modules are used for setting environment variables via modulefiles. Popular shells are supported, including bash, ksh, zsh, sh, csh and tcsh, as well as scripting languages like Perl and Python.

In this post, I will create an environment module to set my environment according to my needs using the module function. I will create a Tcl file and use the module load & unload functions to set & unset my environment using Linux environment modules.

Info:
module avail: to list all available modules you can load 
module list: to list your currently loaded modules 
module load moduleName: to load moduleName into your environment 
module unload moduleName: to unload moduleName from your environment
Modules are useful in managing different versions of applications. Modules can also be bundled into metamodules that will load an entire suite of different applications.

First, we create our module file in the modulespath as follows;
/usr/share/Modules/modulefiles/erman file:

#%Module1.0#####################################################################
##
## modules erman
##
## modulefiles/erman  Written by Erman Arslan
##
proc ModulesHelp { } {
        global version modroot

        puts stderr "this is erman test"
}

module-whatis   "Sets the environment for erman ERNAN=/home/applmgr/erman"

# for Tcl script use only
set     erman_home     /home/applmgr
set     version         4.6.2
set     sys             linux86

setenv         ERMAN              /home/applmgr/erman

Then we can execute module help to see if it's working.

[root@ermanprod modulefiles]# module help erman

----------- Module Specific Help for 'erman' ----------------------

this is erman test

We can see our modules description with whatis.

[root@ermanprod modulefiles]# module whatis erman
erman                : Sets the environment for erman ERNAN=/home/applmgr/erman

We see our module is detected by using module avail.

[root@ermanprod modulefiles]# module avail

------------------------------------------------------------------------------- /usr/share/Modules/modulefiles -------------------------------------------------------------------------------
dot         erman       module-cvs  module-info modules     null        use.own

-------------------------------------------------------------------------------------- /etc/modulefiles --------------------------------------------------------------------------------------
compat-openmpi-psm-x86_64 compat-openmpi-x86_64

Next, we load our module, and all environment variables are set automatically.

[root@ermanprod modulefiles]# env |grep ERMAN
normally we dont have a environment variable called ERMAN

[root@ermanprod modulefiles]# module load erman
We load our module..
[root@ermanprod modulefiles]# env |grep ERMAN
Now we see our env variable ERMAN is set
ERMAN=/home/applmgr/erman

Now we unload our module and see that we no longer have a variable called ERMAN. Note that we are in the same shell.
[root@erpprod modulefiles]# module unload erman
[root@erpprod modulefiles]# env |grep ERMAN

Wednesday, June 11, 2014

EBS 12.2 -- adadminsrvctl.sh start -- sh: module: line 1: syntax error: unexpected end of file

After completing a fresh install of EBS 12.2, you may encounter the following error while starting the Weblogic admin server. Note that this error does not prevent Weblogic from working.
./adadminsrvctl.sh start
sh: module: line 1: syntax error: unexpected end of file
sh: error importing function definition for `module'

Oracle explains this problem with an internal bug (Bug 14259166) and suggests the following rename as the fix:

Rename /etc/profile.d/modules.sh to /etc/profile.d/modules.sh.bak.
Log out, log back in, and set the application environment.
Confirm these environment variables are not set: MODULEPATH, LOADEDMODULES and MODULESHOME.
Actually, the rename is not required, because the problem is caused by the script /usr/share/Modules/init/bash; specifically, the line "export -f module" triggers the error. So commenting out this line will fix the error.

In this post, I will try to explain the cause of this problem;

So when we have modules.sh in our profile.d folder, the module-related environment variables are set;
for example;
for example;

[root@erpprod profile.d]# su - applmgr
[applmgr@erpprod ~]$ echo $MODULEPATH
/usr/share/Modules/modulefiles:/etc/modulefiles

Normally, when we don't have modules.sh in our profile.d folder, no module environment variables are set.
[root@erpprod profile.d]# su - applmgr
[applmgr@erpprod ~]$ echo $MODULEPATH

Okay, another important thing in the error message: it reports an error while importing a function definition, but it does not display the function's content.
Our error is: sh: error importing function definition for `module'
But it should be something like: sh: error importing function definition for `module '/opt/IBM/InformationServer/Server/DSComponents/lib/libicui18n.so'
I mean, it should display the function's content, and it is also lacking a ' character. It has module' at the end, but that ' character is not closed.
So this seems to be the problem. It explains the EOF error, as a ' character should be there, not an EOF.

As we know, our .bash_profile sources .bashrc, and .bashrc sources the /etc/profile.d/*.sh scripts.
Let's take a look at what is written in our modules.sh file:

shell=`/bin/basename \`/bin/ps -p $$ -ocomm=\``
if [ -f /usr/share/Modules/init/$shell ]
then
  . /usr/share/Modules/init/$shell
else
  . /usr/share/Modules/init/sh
fi

So basically it finds our shell and executes the related init script accordingly.
So, assuming our shell is bash, let's look at the contents of the script /usr/share/Modules/init/bash:
--------------------------------------

module() { eval `/usr/bin/modulecmd bash $*`; }
export -f module

MODULESHOME=/usr/share/Modules
export MODULESHOME

if [ "${LOADEDMODULES:-}" = "" ]; then
  LOADEDMODULES=
  export LOADEDMODULES
fi

if [ "${MODULEPATH:-}" = "" ]; then
  MODULEPATH=`sed -n 's/[       #].*$//; /./H; $ { x; s/^\n//; s/\n/:/g; p; }' ${MODULESHOME}/init/.modulespath`
  export MODULEPATH
fi

if [ ${BASH_VERSINFO:-0} -ge 3 ] && [ -r ${MODULESHOME}/init/bash_completion ]; then
 . ${MODULESHOME}/init/bash_completion
fi

So, before going deeper into the script above;

Note that :
The modules system is based on modulefiles,which specify groups of environment settings that need to be made together. Modulefiles can be installed in a central location for general use, or in a user directory for personal use. Environment Modules modulefiles are written in the Tcl (Tool Command Language) and are interpreted by the modulecmd program via the module user interface. Environment Modules modulefiles can be loaded, unloaded, or switched on-the-fly while the user is working; and can be used to implement site policies regarding the access and use of applications.


Okay, after this little info, it is clear that modules.sh sets our environment variables.
Okay then, let's run the modules.sh script and check our environment for the changes.

Hmm.. I see the following at the end of my environment (checked using the env command):

module=() {  eval `/usr/bin/modulecmd sh $*`
}

I suspected the curly brace, since it is on the second line, but I checked it, and it seems it's not a problem.
[applmgr@ermanprod ~]$ module() { eval `/usr/bin/modulecmd sh $*`;
> }
no errors..

So, after setting this module function, let's export it to our shell environment;

[applmgr@ermanprod ~]$ export -f module

no problems again...

So, by declaring and exporting our shell function named module(), we are now able to use it directly from the shell.
That's why, when we run the module command in our shell, the module function declared above will run, and it will call modulecmd as it is declared to do.
The command to be investigated is "/usr/bin/modulecmd sh $*".

So let's start with the meaning of $*...

"$*" is equivalent to "$1c$2c...", where c is the first character of the value of the IFS variable. If IFS is unset, the parameters are separated by spaces. If IFS is null, the parameters are joined without intervening separators.

So what does IFS variable mean?

The IFS is a special shell variable.
You can change the value of IFS as per your requirements.
The Internal Field Separator (IFS) that is used for word splitting after expansion and to split lines into words with the read builtin command.
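A quick, self-contained bash illustration of how "$*" uses the first character of IFS (the function name and the values are made up for the demo):

```shell
# "$*" joins all positional parameters into ONE word,
# separated by the first character of IFS
join_demo() {
  local IFS=','          # make the join character visible
  printf '%s\n' "$*"
}
join_demo a b c          # prints: a,b,c
```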

In fact, I suspect the following multi-line environment function, and it should be the cause, because we see an EOF error in our error message;
[applmgr@ermanprod ~]$ module() { eval `/usr/bin/modulecmd sh $*`;
> }
Even though it does not produce any errors, it is defined there, and it is multi-line. I guess the Weblogic start routine processes the environment in a way that cannot handle multi-line entries, and it encounters the error above because of this multi-line environment function (module).

In brief, with the gathered knowledge, my guess is that the Weblogic start process can't handle multi-line environment variables or functions.
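The multi-line shape of the exported function can be reproduced in a plain bash session; the eval'ed command here is a harmless stand-in for modulecmd:

```shell
# Define module() with a body that spans two lines, like the generated one
module() { eval "true $*";
}
export -f module
# bash stores (and exports) the definition across several lines; a consumer
# that assumes one-line environment entries will choke on it
declare -f module | wc -l
```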

Tuesday, June 10, 2014

EBS 12.2 -- StartCD -- JDK differences

It is important to know the JDK version differences between the StartCD versions used for EBS installations. As you see below, while startCD 12.2.0.46 delivers 1.6.0_31 for the client-side plugin, startCD 12.2.0.47 and above deliver Java 1.7.0_25. This may create some issues in your environments if you use different startCD versions for installing the TEST and PROD environments.
I mean, you may test your client side using Java 1.6.0_31, but when it comes to PROD, you may find your tests outdated because, for example, the 12.2.0.48 startCD was used for your PROD installation.


EBS 12.2 -- StartCD 12.2.0.48 for EBS 12.2 is Available

The new Rapid Install StartCD 12.2.0.48 for EBS 12.2 has been available since Jun 3. It is available via patch 18086193.

Important fixes included in 12.2.0.48 are as follows;

18703814 - QREP:122:RI:ISSUE WITH CHECKOS.CMD
18689527 - QREP:122:RI:ISSUE WITH FNDCORE.DLL SHIPPED AS PART OF R122 PACKAGE
18548485 - QREP1224:4:JAR SIGNER ISSUE DUE TO THE RI UPGRADE AUTOCONFIG CHANGES
18535812 - QREP:1220.48_4: 12.2.0 UPGRADE FILE SYSTEM LAY OUT IS AFFECTING THE DB TABLES
18507545 - WIN: UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING RUNNING DB
18476041 - UNABLE TO LAY DOWN FS PRIOR TO 12.2 UPGRADE WITHOUT AFFECTING PRODUCTION DB
18459887 - R12.2 INSTALLATION FAILURE - OPMNCTL: NOT FOUND
18436053 - START CD 48_4 - ISSUES WITH TEMP SPACE CHECK
18424747 - QREP1224.3:ADD SERVER BROWSE BUTTON NOT WORKING
18421132 - *RW-50010: ERROR: - SCRIPT HAS RETURNED AN ERROR: 1
18403700 - QREP122.48:RI:UPGRADE RI PRECHECK HUNG IN SPLIT TIER APPS NODE ( NO SILENT )
18383075 - ADD VERBOSE OPTION TO RAC VALIDATION
18363584 - UPTAKE INSTALL SCRIPTS FOR XB48_4
18336093 - QREP:122:RI:PATCH FS ADMIN SERVICE RUNNING AFTER RI UPGRADE CONFIGURE MODE
18320278 - QREP:1224.3:PLATFORM SPECIFIC SYNTAX ERRORS WITH DATE COMMAND IN DB CHECKER
18314643 - DISABLE SID=DB_NAME FOR RI UPGRADE FLOW IN RAC
18298977 - RI: EXCEPTION WHILE CLICKING RAC NODES BUTTON ON A NON-RAC SERVER
18286816 - QREP122:STARTCD48_3:TRAVERSING FROM VISION PASSW SCREEN TO PROD
18286371 - QREP122:STARTCD48_3:AMBIGUOUS MESSAGE DURING STAGE AREA CHECK ON HP
18275403 - QREP122:48:RI UPGRADE WITH EOH POST CHECKS HANGS IN SPLIT TIER DB NODE
18270631 - QREP122.48:MULTI-NODE RI USING NON-DEFAULT PASSWORDS NOT WORKING
18266046 - QREP122:48:RI NOT ALLOWING TO IGNORE THE RAC PRE-CHECK FAILURE
18242201 - UPTAKE TXK INSTALL SCRIPTS AND PLATFORMS.ZIP INTO STARTCD XB48_3
18236428 - QREP122.47:RI UPGRADE EXISTING OH FOR NON-DEFAULT APPS PASSWORD NOT WORKING
18220640 - INCONSISTENT DATABASE PORTS DURING EBS 12.2 INSTALLATION FOR STARTCD 12.2.0.47
18138796 - QREP122:47:RI 10.1.2 TECHSTACK NOT WORKING IF WE RUN RI FROM NEW STARTCD LOC
18138396 - TST1220: CONTROL FILE NAMING IN RAPID INSTALL SEEMS TO HAVE ISSUES
18124144 - IMPROVE HANDLING ERRORS FOUND IN CLUVFY LOG DURING PREINSTALL CHECKS
18111361 - VALIDATE ASM DB DATA FILES PATH AS +<DATA GROUP>/<PATH>
18102504 - QREP1220.47_5: UNZIP PANEL DOES NOT CREATE THE CORRECT STAGE
18083342 - 12.2 UPGRADE JAVA.NET.BINDEXCEPTION: CANNOT ASSIGN REQUESTED ADDRESS
18082140 - QREP122:47:RAC DB VALIDATION IS FAILS WITH EXIT STATUS IS 6
18062350 - 12.2.3 UPG: 12.2.0 INSTALLATION LOGS
18050840 - RI: UPGRADE WITH EXISTING RAC OH:SECONDARY DB NODE NAME IS BLANK
18049813 - RAC LOV DEFAULTS NOT SAVED UNLESS "SELECT" IS CLICKED
18003592 - TST1220:ADDITIONAL FREE SPACE CHECK FOR RI NEEDS TO BE CHECKED
17981471 - REMOVE ASM SPACE CHECK FROM RACVALIDATIONS.SH
17942179 - R12.2 INSTALL FAILING AT ADRUN11G.SH WITH ERRORS RW-50004 & RW-50010
17893583 - QREP1220.47:VALIDATION OF O.S IN RAPIDWIZ IN THE DB NODE CONFIGURATION SCREEN
17886258 - CLEANUP FND_NODES DURING UPGRADE FLOW
17858010 - RI POST INSTALL CHECKS (SSH VERIFICATION) STEP IS FAILING
17799807 - GEOHR: 12.2.0 - ERRORS IN RAPIDWIZ AND ADCONFIG LOGS
17786162 - QREP1223.4:RI:SERVICE_NAMES IS PRINTED AS SERVICE_NAME IN RI SCREEN
17782455 - RI: CONFIRM DEFAULT APPS PASSWORD IN SILENT MODE KICKOFF
17778130 - RI:ADMIN SERVER TO BE UP ON PRIMARY MID-TIER IN MULTI-NODE UPGRADE FS CREATION
17773989 - UN-SUPPORTED PLATFORM SHOWS 32 BIT AS HARD-CODED
17772655 - RELEVANT MESSAGE DURING THE RAPDIWIZ -TECHSTACK
17759279 - VERIFICATION PANEL DOES NOT EXPAND TECHNOLOGY STACK
17759183 - BUILDSTAGE SCRIPT MENU NEEDS TO BE ADJUSTED
17737186 - DATABASE PRE-REQ CHECK INCORRECTLY REPORTS SUCCESS ON AIX
17708082 - 12.2 INSTALLATION - OS PRE-REQUISITES CHECK
17701676 - TST122: GENERATE WRONG S_DBSID FOR PATCH FILE SYSTEM AT PHASE PREPARE
17630972 - /TMP PRE-REQ INSTALLATION CHECK
17617245 - 12.2 VISION INSTALL FAILS ON AIX
17603342 - OMCS: DB STAGING COMPLAINS WHILE MOVING IT TO FINAL LOCATION
17591171 - OMCS: DB STAGING FAILS WITH FRESH INSTALL R12.2
17588765 - CHECKER VERSION AND PLUGIN VERSION
17561747 - BUILDSTAGE.SH FAILS WITH ERROR WHEN STAGE HOSTED ON 32BIT LINUX
17539198 - RAPID INSTALL NEEDS TO IGNORE NON-REQUIRED STAGE ELEMENTS
17272808 - APPS USERS THAT HAVE DEFAULT PASSWORD AFTER 12.2 RAPID INSTALL

EBS 12.2 -- Applying NLS translation patches (during a fresh install)

EBS 12.2 comes as 12.2.0, and it is mandatory to upgrade it to 12.2.3..
In this context, we usually apply NLS translation patches according to our languages before the upgrade..
So, after we apply the NLS translation patch to our environment, we continue with our upgrade process. Every patch that needs to be applied along the way must be applied with its NLS-translated version (if it is available).

So basically, we install our EBS 12.2.0 and patch our application components, such as WebLogic, the Web Tier home, and the database, as required before the 12.2.3 upgrade process.

After meeting those requirements, we apply our NLS translation patch before the 12.2.3 upgrade as follows;

1) Download the related NLS zip.
2) Unzip it, for example: unzip Vpartname.zip -d $APPL_TOP_NE/../patch
3) License your language using Oracle Applications Manager.
4) Maintain multilingual tables using adadmin.
5) cd to the patch directory.
6) adop phase=prepare
7) adop phase=apply patches=10124646_TR:u10124646.drv (for Turkish)
8) adop phase=finalize. finalize_mode=full gathers statistics to help improve performance; finalize takes about one hour longer if this mode is specified. finalize_mode=quick does not gather statistics and therefore completes more quickly; this is the default. So adop phase=finalize should be run here, and by default it will be quick.. (it is VISION, so I felt no need to gather stats)
9) adop phase=cutover
10) adop phase=cleanup
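Putting the adop-related steps above together, a single patching session looks like this (a sketch only; the patch is the Turkish translation patch from step 7, and the commands assume the run edition environment file has been sourced as the applmgr OS user):

```shell
# Online patching cycle for the NLS translation patch (Turkish example)
cd $APPL_TOP_NE/../patch                         # where the NLS patch was unzipped
adop phase=prepare                               # prepare the patch edition
adop phase=apply patches=10124646_TR:u10124646.drv
adop phase=finalize                              # finalize_mode=quick is the default
adop phase=cutover                               # switch the run and patch file systems
adop phase=cleanup
```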

Alternatively, we can upgrade to 12.2.3 directly without applying the NLS translations, and then request a translation synchronization patch.. I mean, we can complete the American English upgrade up to the recommended Release Update Pack level, and then upgrade our NLS software for existing languages using the Translation Synchronization Patch, followed by the NLS Online Help patch.
The following document can be used in this context: Requesting Translation Synchronization Patches (Doc ID 252422.1)

So, the choice is ours..  

Monday, June 9, 2014

EBS 12.2-- installation, rapidwiz RW-00022 split configuration

While installing EBS 12.2 with a split tier configuration, you may encounter RW-00022 in the Filesystem precheck phase.
As known, in a split-tier configuration we usually have a database tier and one or more application tiers, which reside on different servers.
OK, I will keep it short.. So, while building a fresh split EBS environment, we first invoke rapidwiz for the database installation and install the database home and datafiles.. After rapidwiz completes successfully, our database is up and running automatically.
Although we use rapidwiz to install the database tier, we also specify some information related to potential future application tier installations. For example, rapidwiz wants us to specify the application tier directory paths, such as INST_TOP and APPL_TOP..
Anyway, when we supply this info, rapidwiz creates a config file and uploads its contents to the database after the database installation is finished..
In detail, rapidwiz creates a config_{SID}.txt file in the $ORACLE_HOME/appsutil directory and uploads the file contents into the table named FND_OAM_CONTEXT_FILES.. (Note that context files are also placed in this table.)
Afterwards, when the time comes and we start our application tier installation, we invoke rapidwiz again, and this time rapidwiz wants us to supply a database connection string.. This is because rapidwiz is designed to gather the application tier configuration info by reading the config file created during the database tier installation. It actually connects to the database, reads the FND_OAM_CONTEXT_FILES table, checks the NAME column for a value like config.txt, and reads the file contents from the TEXT column.. This way, rapidwiz automatically understands where to place the application file systems and does not ask us to supply them again and again (consider a multi-node installation).. It is basically an artistic design..
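To see what rapidwiz will read, we can query the table ourselves. The sketch below assumes sqlplus access as the APPS user (the password placeholder is hypothetical); NAME and TEXT are the columns described above, and the CLOB length is printed instead of the full file contents:

```shell
# List the config files uploaded by rapidwiz into FND_OAM_CONTEXT_FILES
sqlplus -s apps/<apps_password> <<'EOF'
SET LINESIZE 200
SELECT name, dbms_lob.getlength(text) AS text_length
  FROM fnd_oam_context_files
 WHERE name LIKE '%config%';
EOF
```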

But what I have seen today was not so artistic :)
I mean, although I supplied an INST_TOP pointing to the /apps directory during the database installation, rapidwiz tried to install the INST_TOP into the /oracle directory and encountered errors during the file system prerequisite check phase. Because there was no directory named /oracle on my application server, the file system checker just wanted to create the /oracle directory and encountered permission-denied errors, as /oracle was directly under / (RW-00022 in the rapidwiz world)..

So what I did to fix this error was just two simple modifications.
I modified the TEXT column in the FND_OAM_CONTEXT_FILES table with Toad (edit fnd_oam_context_files) and corrected the INST_TOP and related directory paths.. It was a little tricky, because the database directory structure information was also there..
After that, I corrected the config file residing on the file system, I mean the config_SID.txt file in the $ORACLE_HOME/appsutil directory.. This was actually not mandatory, but who knows.. Maybe one day that config file will be used again.