Friday, May 30, 2014

EBS R12 -- HR FRM-40735, ORA-01403 PAYWSDET form

You may encounter FRM-40735 on the PAYWSDET form screen.

FRM-40735: ON-INSERT trigger raised unhandled exception ORA-01403

To reproduce the error:
Global HRMS Manager > Total Compensation : Basic > Element Description, then save:
FRM-40735: ON-INSERT trigger raised unhandled exception ORA-01403.

Usually, these kinds of errors should be investigated using EBS built-in session tracing, unless there is a known issue documented in Oracle Support.
To generate a trace file for Forms, navigate to the menu Help > Diagnostics > Trace and select one of the following options:

Regular Trace
Trace with Binds
Trace with Waits
Trace with Binds and Waits

I prefer Trace with Binds, as we need to see the bind variable values here, because we are encountering a data-dependent error (no data found).
So, after enabling the trace for our form session, a window is displayed that specifies the trace file for our session. After this point, we reproduce the problem in the same session and open the trace file to make the analysis.

Okay, let's come back to our subject; I mean fixing the ORA-01403 error in the PAYWSDET form.
I analyzed the trace file and found the problematic SQL.
It was not returning any rows.
This was the cause.

As the query is related to legislations, I suspected the HR legislative installation.
So I reapplied hrglobal.drv just in case, and that action fixed the issue. So it must have been some problem with its previous install.

RDBMS-- an unusual ORA-01555

We have seen and learned that ORA-01555 errors are mostly caused by long-running queries which try to read older images of changed blocks. As these images get overwritten by the undo management mechanism, such queries return ORA-01555 errors.

Yesterday, however, I faced an unusual ORA-01555 error that made me rethink Oracle's undo management behaviour.
The error was produced by a very short-running query that returns its data in only 1 second.
This was not usual, as the possibility of the needed undo being overwritten was very low.
Undo management was AUTO, the undo tablespace was 7.5 GB, and undo_retention was 900 seconds.

So, after analyzing a little further, I saw that this query was inside a procedure which was used to manage partitions. The procedure was run by a scheduler job; using this procedure, partitions were periodically dropped and created at night.
The query was something like:
SELECT partition_name, ...., .... FROM user_tab_partitions WHERE table_name = :B1

So the procedure was running the query above to see the current partitions, and then executing DDLs to recreate some partitions.
The problem was that this procedure was run by the scheduler multiple times for different tables, one after another, and as DDL operations autocommit, this process was effectively doing a fetch across commits:

Select Q1
Select Q1
Select Q1

Note that when you commit, your undo becomes overwritable under normal circumstances.

I first suspected delayed block cleanout, but to fix the problematic situation that may arise from delayed block cleanout, the procedure would have to be modified. I mean, the procedure should be modified such that it executes the query once and performs its DDL operations using the data returned from that single execution; or, if it executes the query for every DDL, it should not execute the same query twice (maybe a sort or a hint can be added).
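The first of those two suggestions can be sketched as follows. This is a minimal illustration in Python, not the actual PL/SQL procedure; list_partitions and run_ddl are hypothetical stand-ins for the real cursor on user_tab_partitions and for EXECUTE IMMEDIATE. The point is only the shape of the fix: fetch the partition list once per table, close that query, and only then run the autocommitting DDLs.

```python
# Hypothetical sketch: fetch once, then do DDL (no fetch across commits).

def list_partitions(table_name):
    # Stand-in for: SELECT partition_name FROM user_tab_partitions
    #               WHERE table_name = :B1
    return ["P2014Q1", "P2014Q2", "P2014Q3"]

def run_ddl(statement):
    # Stand-in for EXECUTE IMMEDIATE; remember that every DDL autocommits.
    pass

def rebuild_partitions(table_name):
    # The query result is fully materialized here, so the subsequent
    # commits cannot invalidate an open fetch.
    partitions = list(list_partitions(table_name))
    for p in partitions:
        run_ddl("ALTER TABLE %s DROP PARTITION %s" % (table_name, p))
    return partitions

# One invocation per table, as the scheduler job did:
for table in ["SALES", "ORDERS"]:
    rebuild_partitions(table)
```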

Delayed Block Cleanout representation: (Ref: Oracle Support)

Stage 1 - No Changes Made
Description: This is the starting point. At the top of the data block we have an area used to link active transactions to a rollback segment (the 'tx' part), and the rollback segment header has a table that stores information upon all the latest transactions that have used that rollback segment.
In our example, we have two active transaction slots (01 and 02) and the next free slot is slot 03. (Since we are free to overwrite committed transactions.)
Stage 2 - Row 2 is Updated
Description: We have now updated row 2 of block 500. Note that the data block header is updated to point to the rollback segment 5, transaction slot 3 (5.3) and that it is marked uncommitted (Active).
Stage 3 - The User Issues a Commit
Description: Next the user hits commit. Note that all that this does is it updates the rollback segment header's corresponding transaction slot as committed. It does *nothing* to the data block.
Stage 4 - Another User Selects Data Block 500
Description: Some time later another user (or the same user) revisits data block 500. We can see that there is an uncommitted change in the data block according to the data block's header.
Oracle then uses the data block header to look up the corresponding rollback segment transaction table slot, sees that it has been committed, and changes data block 500 to reflect the true state of the datablock. (i.e. it performs delayed cleanout).

Anyway, as that would require a code modification, I tried increasing the undo tablespace size instead. I did this to increase the number of rollback segments; by increasing the rollback segments, the work is spread across more segments, which decreases the chance of overwriting a needed rollback transaction slot.

Actually, increasing the size of the undo did help, and the error has not appeared any more.
Still, I feel the need to write down the possible causes of ORA-01555 errors.

Causes which may lead to ORA-01555 (Ref: AskTom):

1)The rollback information itself is overwritten
2)The transaction slot in the rollback segment's transaction table (stored in the rollback segment's header) is overwritten

Possible fixes;

1. Increase size of rollback segment which will reduce the likelihood of overwriting rollback information that is needed.
2. Reduce the number of commits.
3. Run the processing against a range of data rather than the whole table.
4. Add additional rollback segments. This will allow the updates etc. to be spread across more rollback segments thereby reducing the chances of overwriting required rollback information.
5. If fetching across commits, the code can be changed so that this is not done.
6. Ensure that the outer select does not revisit the same block at different times during the processing. This can be achieved by: using a full table scan rather than an index lookup, or introducing a dummy sort so that we retrieve all the data, sort it, and then sequentially visit these data blocks. Note that this is only applicable if the cause is overwritten rollback information (1).

Sunday, May 25, 2014

EBS 11i/R12 -- Workflow mails are not sent --"undelivered","undeliverable"

If you have configured SMTP and IMAP, and your emails are not delivered to the recipients, then you may be facing this problem.

Check your Notification Mailer logs. If you see that your IMAP (inbound) thread is trying to process an email which has an "undeliverable" or "message delayed" subject, then you are facing this issue.
Here is an example;

The customer complains that the notification emails are not delivered to the recipients.
We start our investigation and see in the Notification Mailer's log file that the messages are sent:
WFMLRSVC-371100-10006 : oracle]:Updating notification {1532500} status {SENT} {} {} ..
So everything seems proper.

After analyzing the log, we see an IMAP-related line which seems unrelated to our issue at first sight; the problem should be in SMTP, so why look at the IMAP logs?
WFMLRSVC-371100-10006 :]:Processing Message SUBJECT[Teslim Edilmedi: Closed : ERMAN --> "ERMANANAAA"]

But there is an important clue here. Even though it is related to IMAP, we see that the mail has a "Teslim Edilemedi" tag in front of its subject. "Teslim Edilemedi" means undelivered/undeliverable in English.
So, this means our SMTP request was rejected by the mail server.
Normally, when an email is rejected, the mail server sends a notification mail to inform the sender that the mail cannot be delivered. In this case, that undeliverable notification was sent to the IMAP account. This reveals the problem: the email cannot be delivered even though it appears as SENT in the log file.

That's it. Your SMTP server cannot deliver the emails to the recipients, so our part is done. At this point, we have already exposed the problem. It should be a problem with accounts: a disabled account in a mail group, or something like that.
Now it is time to let the mail server admins work :)

Awr Analysis -- Part 1

In this post, I will write an article related to the Automatic Workload Repository (AWR) report, which is a sophisticated tool for investigating performance on an Oracle database. Also, if we can analyze an ASH report in addition to the AWR findings, we have a fully detailed investigation of an Oracle database.
My references in this article are Oracle Support, an AWR interpretation presentation by Oracle ACS, and my own field experience.

*AWR is first introduced in Oracle Database 10g.
*Statistics gathering for Awr is enabled by default. (Statistics_level=TYPICAL or ALL. This setting enables automatic statistic gathering for AWR. By default hourly and retention is 7 days(10g), 8days(11g))
*Manual statistics gathering can be invoked by -> EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
*DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings can be used to modify the default settings, for example:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 21600,       -- minutes (= 15 days)
    interval  => 30,          -- minutes
    topnsql   => 100,         -- the number of Top SQL to flush for each criteria
    dbid      => 3310949047); -- the local database identifier is used by default
END;
/

The current value of a setting is retained if NULL is passed.

*Awr data is stored in SYSAUX tablespace.
*$ORACLE_HOME/rdbms/admin/awrinfo.sql script can be used to display general information about AWR(size, data distribution in SYSAUX)
*$ORACLE_HOME/rdbms/admin/awrsqrpt.sql script can be used to display statistics of a SQL for a range of AWR snapshots.
*$ORACLE_HOME/rdbms/admin/awrextr.sql script can be used to extract awr data from a database. This script will extract Awr data into datapump export file.
*$ORACLE_HOME/rdbms/admin/awrload.sql script can be used to import the AWR data from an export dump file.(The target database version must be >= the database version of exported data)


Db Time: total time spent by user processes either actively working or actively waiting in a database call.
Response Time: service (CPU) time + wait time. | Service time = the time the request is actively being processed on the CPU.
Elapsed Time: wall clock time. Compare DB Time with Elapsed Time to assess the workload of the instance.
Average active sessions = DB Time / Elapsed Time.
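As a quick sanity check, the last formula can be computed directly from two numbers you read off the report header; the figures below are invented for illustration:

```python
def average_active_sessions(db_time_sec, elapsed_sec):
    """AAS = DB Time / Elapsed Time (both in the same unit)."""
    return db_time_sec / elapsed_sec

# A 60-minute snapshot window in which sessions accumulated 180 minutes
# of DB Time means that, on average, 3 sessions were active.
print(average_active_sessions(db_time_sec=180 * 60, elapsed_sec=60 * 60))  # 3.0
```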

Db Time increases when:
- the number of users (more DB calls) increases -> DB Time increases
- transactions get larger -> DB Time increases
- IO performance decreases -> IO wait time increases -> DB Time increases
- CPU wait time increases -> DB Time increases


Instance Efficiency
Execute to Parse: 100*(Executes-Parses)/Executes; how often SQLs were executed without having to be parsed.
Parse CPU to Parse Elapsed: 100*Parse CPU Time/Parse Elapsed; how much of the total parse time was spent on CPU.
Soft Parse %: 100*(Parses - Hard Parses)/Parses; the ratio of soft parses to total parses.
Latch Hit %: shows how often latches were acquired without having to wait. This percentage is based on the ratio of the total number of latch misses to the number of latch gets for all latches.
% Non-Parse CPU: shows the percentage of CPU time (DB CPU) that is dedicated to execution rather than parsing.
Buffer Hit %: measures the percentage of buffers found in the buffer cache -- without disk IO.
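These ratios are easy to recompute from the raw activity statistics in the same report; a small sketch with invented sample numbers:

```python
# Instance efficiency ratios, exactly as defined above.

def execute_to_parse(executes, parses):
    return 100 * (executes - parses) / executes

def soft_parse_pct(parses, hard_parses):
    return 100 * (parses - hard_parses) / parses

def non_parse_cpu_pct(db_cpu, parse_cpu):
    return 100 * (db_cpu - parse_cpu) / db_cpu

# 10,000 executions / 2,000 parses / 100 hard parses; 500 s DB CPU, 50 s parse CPU
print(execute_to_parse(10000, 2000))  # 80.0
print(soft_parse_pct(2000, 100))      # 95.0
print(non_parse_cpu_pct(500, 50))     # 90.0
```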

Shared Pool Statistics:

Memory Usage %: the amount of occupied memory in the shared pool.
% SQL with executions>1: the percentage of SQLs executed more than once.
% Memory for SQL w/exec>1: the percentage of shared pool memory consumed by SQLs executed more than once.

Top 5 Timed Events:

These events are based on %DB Time.
Average wait time=wait time/# of waits
For IO related waits , avg wait time can help recognizing IO related performance problems.
Taking ASH report for the same time period can help identifying problematic sql's, processes or operations..
Top sql with Top Events section from ASH shows the statements waiting/working on the CPU.
Top SQL with Top Row Sources section has similar information ordered by Row Source of execution plan.


Every wait has a wait type/wait class.
A session is either consuming CPU or posting a wait event.
Analyzing wait events is the key to troubleshooting a performance problem.
Wait events are the results of underlying problems.
Sometimes the root cause of a performance problem can be a wait event which has no direct impact on the system resources (for example, on DB time).
For example, latches can cause high CPU usage at the OS level => CPU starvation at the OS level => other waits are inflated.

Wait Classes:

User I/O - waits for user IO
System I/O - waits for background process IO
Concurrency - waits for internal database resources.
Cluster - waits related to RAC resources
Commit - contains one wait event - log file sync - wait for redo log write confirmation after a commit
Application - waits resulting from user application code
Administrative - waits resulting from DBA commands that cause users to wait
Configuration - waits caused by inadequate configuration of database or instance resources
Idle - waits that signify the session is inactive, waiting for work
Other - Waits which should not typically occur on a system
Queue - wait events that signify delays in obtaining additional data in a pipelined environment
Scheduler- waits related to Resource Manager

Wait Events:

*Db file sequential read - single-block read; block accessed via index, rowid, or rollback.

*Db file scattered read - multi-block read via full table scan or index fast full scan.

These waits have 3 attributes: P1 = file number, P2 = block number (starting block number), P3 = number of blocks.

*So following query can be used to identify the object ;

SELECT relative_fno, owner, segment_name, segment_type FROM dba_extents WHERE file_id = &file AND &block BETWEEN block_id AND block_id + blocks - 1;

*Direct path read - Generally used by Oracle when reading directly into PGA memory (as opposed to into the buffer cache).Direct path reads are faster than scattered reads and have less impact on other processes because they avoid latches.

*Read by other session/buffer busy - waits related to contention on hot blocks.
This wait happens when a session wants to access a database block in the buffer cache but cannot, as the buffer is "busy".
This wait event means either:
- another session is reading the block into the buffer (starting with 10g, this wait time is counted under the 'read by other session' wait), or
- another session holds the buffer in a mode incompatible with our request.

This wait event has 3 attributes: P1 = file number, P2 = block number, P3 = id (reason code), or block class# in 10g.

So following query can be used to identify the object ;

SELECT relative_fno, owner, segment_name, segment_type FROM dba_extents WHERE file_id = &file AND &block BETWEEN block_id AND block_id + blocks - 1;

*Log file sync - when a user session commits or rolls back, Oracle/LGWR needs to flush the contents of the log buffer to the redo log files. Log file sync waits occur during these operations.
When we wait for log file sync, we actually wait for: LGWR to wake up if idle, gather the redo to be written, issue the IO (this IO is a log file parallel write, and can be monitored using the 'log file parallel write' wait event), do post-IO processing, and acknowledge the user session/foreground. In addition, we wait for the user session/foreground to wake up at the end.

So the likely bottleneck of log file sync is the IO, which is the log file parallel write.
If the avg wait time per log file sync operation ≈ the avg wait time per log file parallel write operation -> IO is the bottleneck.
If the avg wait time per log file sync operation >> the avg wait time per log file parallel write operation -> CPU bottleneck, run queue problem, or a bug. In such a situation, check CPU consumption.
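That comparison can be written down as a tiny decision helper. The 1.2 tolerance used to decide "roughly equal" is my own assumption for illustration, not an Oracle-documented threshold:

```python
def diagnose_log_file_sync(avg_sync_ms, avg_lfpw_ms, tolerance=1.2):
    """Compare avg 'log file sync' with avg 'log file parallel write'."""
    if avg_sync_ms <= avg_lfpw_ms * tolerance:
        return "IO bottleneck: the time is spent in the redo write itself"
    return "CPU bottleneck / run queue problem or bug: check CPU consumption"

print(diagnose_log_file_sync(12.0, 11.0))  # redo IO dominates the sync time
print(diagnose_log_file_sync(40.0, 5.0))   # time is lost outside the IO
```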

*Enqueues - Enqueues are locks. They are held by database session/sessions in a mode that block the lock request of others.
This Type of wait has a P1 parameter which contains the Lock Type & Mode requested.
enq: TX - row lock contention is a lock type we often see. It represents transaction locks.
If its mode is 6, there is an application issue (sessions genuinely blocking each other on rows).
If its mode is 4, there is unique key / foreign key / bitmap index contention.

Note that with insert /*+ append */ - during a direct-path INSERT, the database obtains exclusive locks on the table (or on all partitions of a partitioned table).
enq: TM - table modification. Lock mode 6 (exclusive) occurs with the append hint, which means no concurrent DML can be run on the table during the append operation. Only queries can be run, and they will show the data as it was before the append took place.
The MOS document 1317447.1 explains the append hint and TM enqueue lock behaviour.
Unindexed foreign key constraints and truncate operations can also be causes of enq: TM waits.

Example Scenario:
You have indexes on tables, which are accessed by the application so often.
These indexes are on the columns whose values are populated with monotonically increasing values by a sequence or similar thing.
When you look at your AWR, you see db file sequential read, enq: TX - index contention, and buffer busy waits in the Top 5 Timed Events.
Also, if your application is inserting into such tables, there will be inserts into the indexes, and leaf/branch block splits will cause the index contention waits you may see in this scenario. This may slow you down.
You will see such indexes in the Segments by Buffer Busy Waits section.
At this point, if that is the scenario, you can use reverse key indexes to eliminate the contention caused by the inserts; on the other hand, this can affect your range scan performance. You can also use hash partitioned indexes as an alternative; this index type can spread the contention across the partitions.
Another option to alleviate the problem is to increase the cache size of the sequences; also, the NOORDER keyword should be used to avoid an additional cause of SQ enqueue contention, namely the forced ordering of queued sequence values.

*Latch Waits -
Latches are like locks but they are defined for protecting memory structures.
They are low level serialization mechanisms used to protect shared data structures in the SGA.
By using the latches, Oracle prevents more than one process from executing the same critical code sections at a given time.
Latches are different than Locks in the manner of ordering of waiters. There is no ordered queue of waiters in Latches. Waiters spin on the Cpu to get the latch..
If there are multiple waiters for the same latch, they all spin on the Cpu and retry getting the latch, and there is no order between them.
That is ; anyone might get the latch and conceivably the first one attempting to obtain the latch might be the last one to actually get it.
There are two modes of latch requests: willing-to-wait mode and no-wait mode.
In willing-to-wait mode the process will loop, wait, and request again until the latch is obtained. Examples: shared pool and library cache latches.
In no-wait mode the process requests the latch and, if it is not available, requests another one instead of waiting. Only when all fail does the server process have to wait. Example: the redo copy latch.

If a process cannot get a latch immediately, it spins, tries again and, if the latch is still unavailable, spins again. When the process fails to get the latch this way, the miss count statistic (available in AWR - Latch Sleep Breakdown) is incremented.
The loop is repeated up to a maximum number of times (parameter _SPIN_COUNT). If after this entire loop the latch is still not available, the process must yield the CPU and go to sleep. Initially it sleeps for one centisecond; this time is doubled in every subsequent sleep.
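The sleep-time doubling is simple to visualize (a simplified model: real implementations also cap the sleep time, which this sketch ignores):

```python
def sleep_schedule(n_sleeps):
    """Centiseconds slept after each failed spin loop: 1, 2, 4, ... doubling."""
    times, t = [], 1
    for _ in range(n_sleeps):
        times.append(t)
        t *= 2
    return times

print(sleep_schedule(5))  # [1, 2, 4, 8, 16]
```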

Some Common Latches:
latch: library cache - protect operations within the library cache itself. This latch must be acquired in order to add a new statement to the library cache.
latch: shared pool - protects critical operations when allocating and freeing memory in the shared pool, while the library cache latch protects operations within the library cache itself. To reduce contention on either of the two latches: avoid hard parses when possible, parse once and execute many, eliminate literal SQL, and check whether the shared pool size is adequate.
latch: library cache pin – is acquired when a statement in the library cache is re-executed. Using private rather than public synonyms or direct object references such as OWNER.TABLE might help reducing this latch.
latch: row cache objects - protects the access of the data dictionary cache in the SGA. This latch is acquired when loading, referencing and freeing objects in the data dictionary cache. To reduce contention for this latch the size of the shared pool should be increased.
latch: cache buffers chains - acquired whenever a block in the buffer cache is accessed (pinned). Reducing contention for the cache buffer chains latch usually requires reducing logical I/O rates by tuning.
latch: cache buffers LRU chain - acquired in order to introduce a new block into the buffer cache and when writing a buffer back to disk, specifically when trying to scan the LRU (least recently used) chain containing all the dirty blocks in the buffer cache. To reduce it check if the buffer cache size is adequate.

So, if you see latch contention in AWR, you can understand where Oracle spends its time in code, because latches are used to protect code sections from being run by multiple processes concurrently.
Thus, you can understand what Oracle is doing and take the necessary actions to decrease the load. What I am trying to say is: when you see something in AWR, it is not necessarily the problem itself; it can be the result of something else.
For example, if you see a high number of latch: library cache waits, then you need to focus on the question why.

*Buffer Cache -

I will not explain what the Oracle buffer cache is, as I suppose you already know that.
On the other hand, I will give some information about how Oracle accesses the buffers in memory, as this is relevant to AWR; these structures can slow you down if there is contention on them.
First of all, access to the Oracle buffer cache is based on hash buckets and doubly linked lists.
Following figure illustrates the access mechanism used in Buffer Cache.

As the above picture represents, to access a block in the buffer cache, Oracle first hashes the data block address (DBA) to obtain the bucket number. Then it gets the bucket latch and looks at the buffer headers. After that, it finds the block in the cache and reads it; if the block is not in the buffer cache, it is read from disk. So in this process we need a bucket latch, and this is under the control of the latch: cache buffers chains.

So, we can say that accessing hot blocks, which are accessed concurrently, increases the requests for the same buffer headers, which in turn increases cache buffers chains latch activity.
As cache buffers chains is a latch, processes trying to get it use our CPU cycles and our work is serialized, so we need to tune this. To decrease this wait event we should consider reducing the number of buffers read from the buffer cache by tuning the SQL, hash-partitioning the index with the hot block, and spreading out the data to reduce contention.
Note that the copies of a block needed to satisfy consistent read operations are maintained in the same bucket as the block itself, so the same latch is required to reach the buffer containing the consistent read image of that block. So when there is DML activity going on against the hot blocks, the same latches are required; thus even the DML increases the chance of this type of latch contention.
On the other hand, in Oracle 11g there is an optimization in reaching the buffers. It is a feature concerning consistent gets that reduces latch contention, and it is controlled by the _fastpin_enable hidden parameter. I will explain this hidden parameter in my next post.
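The lookup path described above can be modelled with a toy hash table in Python. This is purely illustrative: the bucket count and hash function are arbitrary, and the real structure holds buffer headers whose chains are protected by the cache buffers chains latch.

```python
N_BUCKETS = 1024
buckets = [[] for _ in range(N_BUCKETS)]  # each inner list = one hash chain

def bucket_for(dba):
    # dba = (file#, block#); Oracle hashes the data block address similarly
    return hash(dba) % N_BUCKETS

def cache_block(dba, contents):
    buckets[bucket_for(dba)].append((dba, contents))

def read_block(dba):
    # In real Oracle, the bucket latch would be taken here, serializing
    # every walker of this chain -- which is why hot chains hurt.
    for header_dba, contents in buckets[bucket_for(dba)]:
        if header_dba == dba:
            return contents  # logical read: found in the cache
    return None              # miss: would trigger a physical read from disk

cache_block((4, 500), "row data")   # file 4, block 500
print(read_block((4, 500)))
print(read_block((4, 501)))
```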

*Library Cache-

We have a similar pinning mechanism in the library cache. SQL statements/text are hashed and put into a hash-table-like structure. Oracle finds the related bucket using the hash; the bucket is searched for the SQL, and if the SQL is not present, Oracle parses it.
A locking mechanism is used in the library cache too. Oracle uses locks and pins for library cache objects. For example, to execute a cursor, Oracle pins the cursor object in share mode but does not put any lock on it. On the other hand, during cursor compilation, Oracle pins and locks the object in exclusive mode.

To implement such protection, we have latches, of course.
The library cache latches are: library cache pin allocation, library cache lock allocation, library cache lock, library cache, library cache pin, and library cache load lock.

Following picture represent the protection mechanism in Library cache;

So, as I explained above, we have share mode and exclusive mode pins & locks in the library cache. Share mode locks naturally do not create contention, but exclusive mode locks and pins do, by their very nature. So, in order to have an environment without contention, we need to decrease hard parsing activity in the first place. Hard parsing is caused by a low shared pool size and/or SQLs which are not shared, due to the use of literals. Invalidations can also trigger contention in the library cache, as cursor compilation will be needed. High soft parsing activity can also be a reason for contention in the library cache; if you have high soft parsing, consider increasing session_cached_cursors.
So we need to take the necessary actions if we see high values for the library cache latches.

Note that in 11g there are mutexes, which replace the library cache latches. Mutexes were first introduced in 10g. Mutexes work faster than latches, as they use the CPU's compare-and-swap instructions.
There is a famous mutex wait event, cursor: pin S wait on X, which is worth explaining.
This wait event occurs when a session requests a shared pin on an object while another session holds an exclusive pin on it. It occurs mostly when a session wants to execute a cursor while another session is parsing it. If you see this wait, use its first argument (P1) to find the related cursor, with the following SQL:
SELECT sql_id, sql_text, version_count FROM v$sqlarea WHERE hash_value = &P1;
There are also other wait events related to mutexes:
cursor: mutex X occurs when trying to get the cursor's mutex in exclusive mode, to parse it.
cursor: mutex S occurs when trying to get the cursor's mutex in share mode.
cursor: pin S occurs when trying to get the cursor's pin in share mode, to execute it.
library cache: mutex S occurs when trying to get the library cache mutex in share mode, to perform a library cache operation.

*IO Stats Section

IO stats in terms of tablespaces and datafiles are another area we need to focus on in AWR reports.
When we see User I/O class waits (like db file sequential read or db file scattered read) in the Top 5 Timed Events section, we need to analyze the IO stats sections carefully.

For example, following is taken from an IO bound system's AWR report.

Event                      Waits      Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
db file sequential read    1,599,822   64,774            40               44.5  User I/O
read by other session      2,268,702   46,324            20               31.8  User I/O
log file sync                 38,909   13,074           336                9.0  Commit
db file scattered read       130,722    9,146            70                6.3  User I/O
CPU time                                6,771                              4.6

As you see above, these are all I/O waits, and they consume nearly all of the database time.
So in this scenario, we need to look at the IO stats section of the same AWR to check whether the I/O subsystem is sufficient to meet the I/O requests of the running applications.

Here is the IO stats section of the same AWR report;

AWR File IO Stats section

The important thing here is to understand whether the I/O subsystem is responding slowly or the load is simply very high. So we should not draw any conclusions directly after seeing these values; the reason may be SQLs which make Oracle issue more I/O requests than the underlying storage system can handle.
To understand this, we need to consider the response times for heavily loaded tablespaces/datafiles as well as the response times for quiet tablespaces. The I/O waits in the Top 5 Timed Events section should also correspond with these values. In other words, if your I/O subsystem is a bottleneck, you should see at least one of the following wait events in the Top 5:

db file sequential read – Random I/O , single row , mostly index IO
db file scattered Read – Sequential I/O , Full table scan, index fast full scan
direct path read – Sequential I/O , Full table scan , directly to the PGA
direct path read temp – Read I/O for sorts
direct path write temp – Write I/O for sorts
read by other session – sessions waiting  while another session is reading the same from the disk. 

In the File IO Stats section you will see statistics like the average number of read requests per second, the average read time in milliseconds, write waits, the number of buffer waits, and the average wait time in milliseconds for buffer waits.

Av Reads/s - average number of read requests per second.
Av Rd(ms) - average read time in ms.
Av Blks/Rd - average number of blocks per read.
Buffer Waits - number of buffer busy waits.
Av Buf Wt(ms) - average wait time in milliseconds for buffer busy waits.
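For reference, the derived columns above can be recomputed from the raw counters (Oracle keeps the read time in centiseconds, hence the factor of 10); the numbers below are invented for illustration:

```python
def file_io_stats(phys_reads, read_time_cs, blocks_read, elapsed_sec):
    """Derive the File IO Stats columns from raw counters."""
    return {
        "Av Reads/s": phys_reads / elapsed_sec,
        "Av Rd(ms)": read_time_cs * 10 / phys_reads,  # centiseconds -> ms
        "Av Blks/Rd": blocks_read / phys_reads,
    }

print(file_io_stats(phys_reads=36000, read_time_cs=28800,
                    blocks_read=72000, elapsed_sec=3600))
# {'Av Reads/s': 10.0, 'Av Rd(ms)': 8.0, 'Av Blks/Rd': 2.0}
```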
There is also a Tablespace IO Stats section in AWR; in that section the same statistics are simply grouped by tablespace.

These values indicate read and write operations of ARCH, Buffer Cache Reads, Data Pump, DBWR, Direct Reads, Direct Writes, LGWR, RMAN, Recovery, Smart Scan, Streams AQ, XDB, and Others.

Among these values, we need to focus on Av Rd(ms) in the first place. It gives the average time spent waiting for a disk read; Av Rd(ms) indicates the average time to read data from disk into a buffer, and under 20 ms is generally acceptable. Note that, depending on your hardware, even 8 ms may be slow for you; so judge these values according to your hardware, but usually anything under 20 ms is acceptable.
This is an important metric, but it should be combined with Av Blks/Rd. A file may serve only a small number of single-block reads, which can inflate its avg rd/ms without indicating a real problem.
That is, idle files should be ignored, because they can produce high values due to disk spin-up and the like.
Also, a multiblock read is timed like a single-block IO: we start the timer, issue a read of one or more blocks, and stop the timer. (AskTom)

Av Buf Wt(ms) is also very important: in a system with low DML activity and a well-configured buffer cache and I/O subsystem, there should be almost zero buffer waits. Buffer waits help us see how long sessions/processes have to wait for a buffer which is being read into the buffer cache.

The values in the picture above can be evaluated with these guidelines in mind.

After all this analysis, if you decide you have an I/O problem, don't upgrade your hardware right away; first try the following:
  • Reduce the I/O requirements of the database by tuning SQL.
  • Reduce the I/O requirements of the database by tuning instance parameters.
  • Optimizing I/O at the Operating System level
  • Balancing the database I/O by usage of Oracle ASM (Automatic Storage Manager)
  • Balancing the database I/O by usage of Striping, RAID, SAN or NAS
  • Redistribute database I/O by manual placement of database files across different filesystems, controllers and physical devices
  • Reducing the data volumes of the current database by moving older data out.
  • Investing in more and/or faster hardware.
That's all for now. I will write more on AWR investigation, as well as RAC-related statistics, in my next post (AWR Analysis 2).

Thursday, May 15, 2014

EBS 12.2 -- Implementing HR Organization Chart

Organization Chart is an integrated application for E-Business Suite which supplies a visual representation of the relationships between workers in an organization. Even though Using the Organization Chart Feature in Oracle SSHR (Doc ID 1408355.1) states "12.1 HRMS RUP4 and later", it is safe to use and follow for 12.2, as 12.2.3 is later than the versions listed in the note.

On the server side, Organization Chart requires a dedicated Weblogic Server, and ADF must be deployed into that Weblogic as well. This Weblogic server can be installed on one of the existing EBS servers, or on a dedicated server. (Note: be careful not to use the same Weblogic ports as EBS, especially the admin and node manager ports. EBS 12.2 has two port pools, so it uses 7001 (fs1) and 7002 (fs2) for the admin server by default, and the default node manager ports are also in use.) The requirements will differ, but generally a 2-core machine with 4 GB of memory should be enough to run the HR Organization Chart application properly.

On the client side, Flash 10 or above is required.

Anyway, in this post I will try to explain the installation and configuration of the HR Organization Chart feature in EBS 12.2.
Note that we use the following documents as sources of information, but we need to combine them:

The source documents:
  • Configuring and Deploying Organization Chart White Paper
  • Using the Organization Chart Feature in Oracle SSHR(1408355.1)
  • Oracle E-Business Suite Software Development Kit for Java Release 11i and 12
Note that we don't need to apply any patches to our EBS instance: our version is 12.2, and the profile options delivered with patch 12715586 already exist in our environment.

So, we start with the installation of Weblogic Server. It is actually a standard installation.

Installation of Weblogic Server:

We use a separate OS account for the Weblogic installation (such as hrorgweb).
The Weblogic version should be certified for the HR Organization Chart release that is going to be installed.
In this case, I used Weblogic Server 10.3.6, as it seemed to be certified.
To install Weblogic on a 64-bit machine, I downloaded wls1036_generic.jar, which is available
as "Oracle Weblogic Server 11gR1 (10.3.6) + Coherence Package Installer File 1: Generic: 1 GB".
So the generic file should be downloaded.
After downloading Weblogic Server, we have to make sure that the Java version is certified for the installation platform.

For HR Organization Chart, the latest Java 6 seemed certified, so I used that one.
If the server we are installing Weblogic on does not have a certified Java, we need to upgrade it.
If that is the case, we download Java from the vendor's site and install it following the platform-specific installation instructions. For IBM, Java can be downloaded from ..

You can refer to the certification document to choose the right Java for your system. For example, to install Weblogic 10.3.6 on a 64-bit Red Hat Linux system, we need at least JDK 1.7.0_02.

After installing the appropriate version of java , we are ready to invoke the weblogic installer, with the following command;

export JAVA_HOME=<full path of the certified Java that we installed previously>
$JAVA_HOME/bin/java -jar wls1036_generic.jar

Note: we need an X environment for this. Use vncserver, and set DISPLAY to the VNC display before running the jar file.
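Before invoking the installer, it is worth sanity-checking JAVA_HOME and DISPLAY, since those are the two things that most often break a GUI install. A small pre-flight sketch — the /tmp/demo_jdk path is a stand-in created here purely for illustration; in real life, point JAVA_HOME at your certified JDK:

```shell
# Demo layout: create a stand-in JAVA_HOME so the check below has something to find.
JAVA_HOME=/tmp/demo_jdk                      # hypothetical; use your real certified JDK path
mkdir -p "$JAVA_HOME/bin" && touch "$JAVA_HOME/bin/java" && chmod +x "$JAVA_HOME/bin/java"
export DISPLAY=${DISPLAY:-:1}                # e.g. the vncserver display

# Pre-flight checks before running: $JAVA_HOME/bin/java -jar wls1036_generic.jar
[ -x "$JAVA_HOME/bin/java" ] && echo "java: ok" || echo "java: NOT FOUND"
[ -n "$DISPLAY" ] && echo "display: $DISPLAY" || echo "display: NOT SET"
```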

In the installation screens, we use the typical installation type and create a new Middleware home by supplying directory names.. Finally, we click next on the installation summary screen to start the installation process.

When the installation completes, we de-select Run Quickstart and click Done to exit the installer.

Secondly, we need to deploy ADF (the certified version or later) into the Weblogic Server we just installed, as HR Organization Chart is a Weblogic application that requires ADF to be present.

Deployment of ADF :

To install ADF, we download the zip file, unzip it, and simply run the runInstaller located in the directory named Disk1.

Below is an example of ADF installation;

Okay: after installing Weblogic and deploying ADF, our base environment is ready for deploying the HR Organization Chart application. So at this point we download patch 13691576 and unzip it, as HR Organization Chart is delivered through that patch.

When we unzip the patch, we obtain the OrgChartApplication.ear file. We then unzip this ear file, which gives us the fndext.jar file. Next, we copy the jar file into a directory, named /tmp/work for instance.

This jar file contains the method for connecting to an Apps database using application users. We will use this jar for our AppsDataSource later.

So at this point we start our work on creating the AppsDataSource. We do this before deploying the HR Organization Chart application, to make the environment ready for use.

For creating Apps Data Source, we complete the following actions;

Register the node — that is, the node our Weblogic resides on. (If the Weblogic server used for HR Org Chart resides on one of the EBS nodes, this step is not required, as EBS nodes are already registered, so you can use the standard dbc file.) To register the external node and create a new dbc file, we use the AdminDesktop utility. Here is the usage: java oracle.apps.fnd.security.AdminDesktop apps/apps CREATE NODE_NAME=ermanhost [IP_ADDRESS=] DBC=/u01/TEST.dbc. Note that your standard dbc file should already exist for the AdminDesktop utility to be used. When AdminDesktop is executed, it will produce a dbc file with a name in the format TEST_erptest.dbc, where erptest is the node name.

The required profile options should be set to their appropriate values: "FND: Validate User Type" should be set to Desktop Only at site level; "FND: Validate IP address" should be set to Desktop Only at site level; "FND: Desktop Nodes allowed" should be set to the node name, at the user level (if you have multiple nodes, separate the hosts with commas, e.g. erman1,erman2). The user should have the Apps Schema Connect role.

Run the query "select * from fnd_nodes where server_address='*'" and make sure that server_id='SECURE' for the AUTHENTICATION node. If not, edit the s_appserverid_authentication context variable in the context file, set it to SECURE, and run autoconfig on the apps tier to make server_id='SECURE' for the AUTHENTICATION node.
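One quick way to verify the context variable before (or after) running autoconfig is to grep it straight out of the context file. The sketch below fabricates a tiny context-file-like fragment so the command is runnable as-is; on a real system you would point it at $CONTEXT_FILE on the apps tier, and the element name may differ from this made-up one:

```shell
# Fabricated fragment of an EBS-style context file, for illustration only.
cat > /tmp/demo_context.xml <<'EOF'
<oa_context>
  <appserverid_authentication oa_var="s_appserverid_authentication">SECURE</appserverid_authentication>
</oa_context>
EOF

# Pull out the s_appserverid_authentication value (it should be SECURE).
auth=$(sed -n 's:.*oa_var="s_appserverid_authentication">\([^<]*\)<.*:\1:p' /tmp/demo_context.xml)
echo "s_appserverid_authentication = $auth"
```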

Create an ASADMIN user (or another) and grant it the APPS_SCHEMA_CONNECT role (Apps Schema Connect). Grant this role from the User Management responsibility, and make sure the user is active. Also, ensure that your workflow agent listeners are running: if they are not, this change will not be propagated to the WF tables (the role will appear unassigned), which causes invalid username/password problems while creating the AppsDataSource.

Rebuilding fndext.jar is not mandatory, but if you want to rebuild the jar, follow the section "Rebuilding the fndext.jar file for Oracle WebLogic Server on UNIX/LINUX" of the Software Development Kit for Java.

Next, deploy your new jar file to the Weblogic server for the AppsDataSource. To accomplish this step, shut down your Weblogic server, copy fndext.jar to the $DOMAIN_DIR/lib directory, and start the Weblogic server again. Note that when you put fndext.jar into DOMAIN_DIR/lib, Weblogic normally appends the jar to its system classpath. If your Weblogic does not append it properly, set the classpath in your Weblogic start script and append the jar file to it manually.

Now it is time to configure your AppsDataSource. Log in to Weblogic using http://<your host>:<your port>/console and navigate to Domain Structure > Services > Data Sources. Enter values for: Database Name (SID of the Oracle E-Business Suite database), Hostname (DB hostname), Port (DB port), DB username (the valid Oracle E-Business Suite user created earlier, in UPPERCASE — this user should have the APPS_SCHEMA_CONNECT role as described previously; you do not use the APPS schema name here), and the password for that Oracle E-Business Suite user. For the Driver Class Name, enter oracle.apps.fnd.ext.jdbc.datasource.AppsDataSource. For the database driver, use Oracle's Driver (Thin) for Instance connections; Versions: 9.0.1, 9.2.0, 10, 11. Also, in the Properties field, add a new property after the user property: dbcFile=<full path of your desktop DBC file>. Click the Test Configuration button; you will get a message indicating whether the test succeeded or failed. If you encounter an invalid username/password error, use the following document for troubleshooting: Actions to Verify When Running Into ORA-01017 While Creating Datasource (Doc ID 1556742.1).

Now you are ready to deploy the HR Organization Chart application. Log in to the Oracle WebLogic Server Administration Console, navigate to Domain Structure > Deployments, and click "Install". Enter the location of your application ear file (the directory containing OrgChartApplication.ear). On the next screen, select "Install this deployment as an application" and click Next. Name the deployment and click the "Finish" button; the successful-deployment message appears. Navigate to the Configuration tab, save your configuration as Plan.xml, and then click the "OK" button.

After deploying the Hr Organization Chart application, you need to edit the plan.xml file manually and deploy the application again.

Here is an example of a modified Plan.xml file. The plan file is located in your application directory under your base_domain; its location is actually displayed when you deploy the application in the Weblogic Console.

Simply add the required lines, such as the <value> element below:

<?xml version='1.0' encoding='UTF-8'?>
<deployment-plan xmlns="" xmlns:xsi="" xsi:schemaLocation="" global-variables="false">
  <!-- ... -->
  <value>F7014942D926056FE04373211EACB56832646424207052220632767349498362</value>
  <!-- Note: this value is the app_server_id in your DBC file. -->
  <!-- ... the plan then contains a series of module-descriptor entries, e.g.: -->
  <module-descriptor external="false">
    <!-- ... -->
  </module-descriptor>
  <module-descriptor external="true">
    <!-- ... -->
  </module-descriptor>
  <!-- ... -->
</deployment-plan>
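The long hex value in the plan above is the APPL_SERVER_ID from your desktop DBC file, so it is handy to extract it with a one-liner rather than copying it by hand. The DBC content below is a fabricated one-line fragment for illustration (a real DBC file contains many more properties):

```shell
# Fabricated DBC fragment; a real DBC file is a Java-properties-style file.
cat > /tmp/TEST_erptest.dbc <<'EOF'
APPL_SERVER_ID=F7014942D926056FE04373211EACB56832646424207052220632767349498362
EOF

# Extract the value to paste into Plan.xml.
app_server_id=$(sed -n 's/^APPL_SERVER_ID=//p' /tmp/TEST_erptest.dbc)
echo "$app_server_id"
```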


After modifying the deployment plan xml file, you must redeploy the application with this modified plan. Return to the Oracle WebLogic Server Administration Console, navigate to Domain Structure > Deployments, check the box corresponding to your deployed application, then click the "Update" button.

Select "Redeploy this application using the following deployment files:", and make sure it shows your modified Plan.xml file. Click "Next".

On the next screen, click the "Finish" button to complete the redeployment.

Okay, our work is almost finished. Lastly, set the External ADF Application URL profile option at site level to integrate Oracle SSHR with Application Development Framework (ADF) and display the organization chart. Enter the location of the Oracle WebLogic Server instance as the profile option value.

For Example:

Note: don't use an IP address. For Org Chart to work, the domain names of the EBS server and the WLS server should match; an IP address will not work.

Okay, that's all. Now you are ready to deliver the environment to your functional HR administrators.

I hope you will find this post useful.

Monday, May 12, 2014

EBS -- 12.2 Add Custom Top

To add custom tops in EBS 12.2, we can use the adsplice utility.

Just like the method we use for localizations, we can prepare our newsprod.txt, xxcustomprod.txt and xxcustomterr.txt files beforehand, and execute adsplice to add the custom top automatically.

Adsplice runs autoconfig at the end of its processing, and makes all the necessary declarations both on the database tier and on the run filesystem of the application tier. The patch filesystem will be synchronized automatically in the next patching cycle.
The newsprod.txt, xxcustomprod.txt and xxcustomterr.txt files should be in a special format.
Example files can be downloaded from Oracle Support. I suggest you download and use the files provided by the adsplice patch (13725897).
That patch contains the txt files needed for adding the CLE application to EBS, so you can rename the files and change their contents to install your own custom application into EBS.
After you download the CLE adsplice patch, you will have cleprod.txt, cleterr.txt and newsprod.txt files. So:

  • Rename cleprod.txt to xxcustomprod.txt, and modify the file
  • Rename cleterr.txt to xxcustomterr.txt, and modify the file
  • Modify the newsprod.txt file
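The rename-and-modify steps above can be sketched in shell. The files below are fabricated stand-ins for the ones shipped with patch 13725897, and the blanket sed substitution is only illustrative — the real product files must be reviewed and edited by hand (application ID, product name, tablespaces, and so on):

```shell
# Fabricated stand-ins for the files shipped with the adsplice patch.
mkdir -p /tmp/splice && cd /tmp/splice
echo "product=cle" > cleprod.txt
echo "terr=cle"    > cleterr.txt

# Rename and do a crude short-name swap (illustrative only; review every line by hand).
sed 's/cle/xxcustom/g' cleprod.txt > xxcustomprod.txt
sed 's/cle/xxcustom/g' cleterr.txt > xxcustomterr.txt
cat xxcustomprod.txt
```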

Example modified newsprod.txt:

%%% Splicer control file format 12.0.A

# Add Product,EMEA Add-On Localizations
# Table spaces may need to be edited to fit database setup
# End control file

Note that these files should be located under the $APPL_TOP/admin directory. Also, you need to specify a unique application ID in the relevant file.
So, once you complete your modifications, run the adsplice utility from the $APPL_TOP/admin directory.
That's all. Using this approach, adding a custom top is simple and quick; at least, that's how it felt to me.

Sunday, May 11, 2014

EBS -- Using OVM and EBS templates Advantages & Disadvantages

In this post, I will try to specify the advantages and disadvantages of using EBS templates in EBS implementations..

In my opinion; the most important ones are the ones in bold letters below.
  • Oracle presents an opportunity to use a VM-based architecture in the new Oracle Database Appliance. Your customer may want to use this opportunity, so knowing how to deploy EBS templates to Oracle VM Server may be mandatory for you.
  • EBS templates are preconfigured, so you don't need to install the OS itself and the required packages. Furthermore, you don't need to create OS accounts or configure the environment for the apps and db tiers. This saves you time and requires less labor.
  • You don't need to install and upgrade EBS: there is no big staging area to build, no patches to download, and no documents to analyze or errors to fix along the way.
  • You can create new EBS instances just by creating several virtual machines from a single EBS template. This makes management and operation easier.
  • When you use templates for deploying EBS, you miss the opportunity to learn the new EBS architecture. Especially since EBS 12.2 brings a lot of innovation to the Apps DBA's world — online patching, Weblogic Server and so on — you need to know your environment. When you install EBS, you play with these components: you learn them, you fix errors related to them, you recompile, stop and start. You do a lot of activities that can be needed in production. For example, when you install EBS 12.2.0 and upgrade it to 12.2.3 (which is required), you get familiar with almost all of the application components, as well as with the new patching tool.
  • I haven't really tested it, but my instinct says that when you use a VM for EBS, you may see a small performance decrease compared to a physical machine.
  • Oracle VM templates require Oracle VM Server, which requires completely different knowledge; this kind of installation is not based on Apps DBA skills.
That's what comes to my mind for now... The list can be extended.

EBS/OVS-- Oracle VM Server 3.2.8 installation & Importing EBS 12.2 Templates --"ova/assembly"

Oracle VM Server 3 is the latest version of Oracle's virtualization technology. It is in the same category as ESX: you can use Oracle VM to create several virtual machines and run them on a single server, using hypervisor technology.

Oracle VM Server 3 is available for x86 and SPARC, and it uses the power of Xen on Linux (LDOMs on SPARC). It has integrated management tools for administrators; in this context, Oracle VM Manager is practically a mandatory tool to ease the management of Oracle VM Server. It has a powerful web interface based on Oracle's Weblogic application server, and its database tier runs on MySQL.

I know that ESX is more common, in a sense. On the other hand, as we work on Oracle, we certainly need the knowledge to deploy and administer such a mission-critical server concept supplied by Oracle. In addition, in the new release of the Oracle Database Appliance engineered system, Oracle calls attention to the opportunity of virtualization; the ability to use virtualization technology on the new Oracle systems will make us learn Oracle VM Server as a matter of fact. Also note that deep knowledge of this product can save you significant time, as important product templates are available for Oracle VM Server nowadays. Oracle EBS 12.2 is one of the templates that can be deployed directly to Oracle VM Server.
I have done the installation and upgrade of Oracle EBS 12.2.3 several times, so I know that it is a hard job, and if you don't have a fast server, it is a really time-consuming process.
So what I am trying to say is: by using Oracle VM Server templates, you can save significant time installing your applications.

Okay lets see, how it works basically;

So to install EBS 12.2.3 using Oracle VM 's templates;
You must;

  • Install Oracle VM Server 3.2.8 
  • Install Oracle VM Manager 3.2.8
  • Download the EBS 12.2.3 templates
  • Import the relevant template in to VM Server-- using VM Manager
  • Create a Virtual Machine from the Template
  • Start the newly created VM which is an EBS machine.
  • Configure the network through the prepared scripts
  • Configure the EBS's db tier through the prepared scripts
  • Configure the EBS's apps tier through the prepared scripts

-> and you are ready to go
No staging, no patching and no upgrades needed: you will have an EBS 12.2.3 right away, up and running. Okay, maybe you need to install localizations and NLS translations :) But anyway, it sounds good.

Let me elaborate on the Oracle VM EBS template installation process with a real-life example.
I actually finished this work a few days ago. My job was a little harder than described above: I needed to install Oracle VM and import an EBS 12.2 assembly (ova file) into it.

The following picture represents the work needed for this to be done.
Note that I will explain all the steps later in this post.

So I started with the installation of Oracle VM Server 3.2.8.
My server didn't have a DVD/CD drive, so I needed to boot the server from USB. That was a little challenge.
To format the USB stick and put the iso on it, I had to use a program called UNetbootin; I also created a directory on the USB stick and put the iso file into that directory. This was the most important trick of the installation, as Oracle VM Server's installer couldn't find the needed drivers without that directory. During the installation, when the installer asked for the drivers, I chose the hard drive, then chose the USB drive, and pointed to that directory for the requested drivers.
After installing Oracle VM Server, I had to install Oracle VM Manager to ease its management. There was no Oracle VM Manager 3.2.8 template on Oracle's site, so I used an Oracle Database Server template for this job. That was another challenge: I downloaded the DB template, then configured and started it using the command line.
Note that, in my environment, to start/stop VM Manager:

Connect to the VM Server as root
cd /ovsdata/seed_pool/OVM_OL5U7_X86_64_PVM_10GB  (go to the directory where the VM machine resides)
xm create vm.cfg  (to check, use -> xm list) -- this will start the VM Manager's virtual machine

Stop: /sbin/service ovmm stop ; /sbin/service ovmm_mysql stop  --> this will stop VM Manager
Start: /sbin/service ovmm start ; /sbin/service ovmm_mysql start  --> this will start VM Manager

I configured my bond devices, added the necessary VM disk to the VM machine template, and started it using the xm utility.
By the way, here is the method for adding a VM disk manually:

dd if=/dev/zero of=vmmgrdisk1.img bs=4M count=4096
chmod 777 vmmgrdisk1.img
ls -lrt
vi vm.cfg  -> add the new disk to the disk list in this config file, separated by a comma. For example (xvdb here is an example device name):
disk = ['file:/OVS/seed_pool/OVM_OL5U7_X86_64_PVM_10GB/System.img,xvda,w',
        'file:/OVS/seed_pool/OVM_OL5U7_X86_64_PVM_10GB/vmmgrdisk1.img,xvdb,w']
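Here is a runnable, scaled-down sketch of the same disk-add steps (4 MB instead of 16 GB, a throwaway /tmp directory instead of /OVS/seed_pool, and xvdb as an example guest device name):

```shell
# Scaled-down sketch: create a small disk image and append it to a vm.cfg disk list.
cd /tmp && mkdir -p ovs_demo && cd ovs_demo
dd if=/dev/zero of=vmmgrdisk1.img bs=4M count=1 2>/dev/null   # 4 MB instead of 16 GB
chmod 777 vmmgrdisk1.img

# Fabricated vm.cfg with a single existing disk entry.
cat > vm.cfg <<'EOF'
disk = ['file:/OVS/seed_pool/OVM_OL5U7_X86_64_PVM_10GB/System.img,xvda,w']
EOF

# Append the new disk to the guest's disk list (xvdb is an example device name).
sed -i "s|,w']|,w', 'file:/OVS/seed_pool/OVM_OL5U7_X86_64_PVM_10GB/vmmgrdisk1.img,xvdb,w']|" vm.cfg
cat vm.cfg
```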
So, after starting the VM machine that would be used for the VM Manager installation, I downloaded VM Manager 3.2.8 and installed it on the newly created VM. The installation was easy.

After installing VM Manager, Weblogic was up and running, so I connected to the VM console. I created my server pool without a problem, but could not create my repository. Creating the repository was another challenge, because I had only one disk (one RAID 5 consisting of 5 disks).
I had a disk named /dev/sda and partitions named /dev/sda1 and /dev/sda2. /dev/sda1 contained the OS and VM Manager installations, and /dev/sda2 was an empty partition. I had planned to create the repository on this sda2 device, but Oracle VM didn't accept it. It later turned out that Oracle VM doesn't accept a partition: it wants a completely free disk to create its repository on.
So I was forced to create an NFS share on my NAS device and configure/declare it as VM storage through VM Manager. After that, I could create my VM repository on this NFS share.
At this point everything seemed okay; my plan was to import the EBS assembly/ova file into the VM server using VM Manager. So I copied it to the local filesystem and used the import-assembly feature of VM Manager, but another surprise was waiting for me: I realized that VM Manager only accepts the http or ftp protocols for importing an assembly. So I installed Microsoft IIS (a web server) on a PC on the network, configured the mime types for the ova file, and through this IIS I made VM Manager import the ova file into Oracle VM Server.
After importing the assembly, I created a template from it; this was a time-consuming activity.
After creating the template, I created a VM from it. That was another time-consuming activity, as my repository was on NFS.
At this point my EBS VM was created and ready to go. After creating a NIC for it, I started my EBS VM using Oracle VM Manager.
So my job was finished: my EBS template had become a VM running on Oracle VM Server 3.2.8.

My environment ended up looking like the picture below.

As this was an ova file, it was a used template, which means someone could already have configured it for his/her environment. At least in my case, it was like that.
Normally, when you boot the Oracle EBS template, it makes you configure the network, the database and the application tier automatically. But once it is configured, it doesn't ask you to configure it again. So I needed to run the needed scripts manually and configure my environment.
I used the following scripts:

  • configure the network interface (/u01/install/scripts/ or
  • configure the database tier (/u01/install/VISION/scripts/
  • configure the apps tier (/u01/install/VISION/scripts/
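The three configuration steps above can be wrapped in a tiny runner that either executes each script or reports it missing. The paths are the template defaults quoted in this post, so on any other machine this sketch will simply print the "missing" lines:

```shell
# Run each EBS template configuration script in order, if present.
run_step() {
  if [ -x "$1" ]; then
    "$1"                      # run the real config script
  else
    echo "missing: $1"        # report (rather than silently skip) an absent script
  fi
}
run_step /u01/install/scripts/
run_step /u01/install/VISION/scripts/
run_step /u01/install/VISION/scripts/
```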
Lastly, I will share the EBS template scripts and their descriptions (just in case).

The scripts to manage the Oracle E-Business Suite VISION instance are:

SCRIPTS BASE_DIR         :  /u01/install/VISION/scripts/
START SCRIPT             : /u01/install/VISION/scripts/
STOP SCRIPT              : /u01/install/VISION/scripts/
DB VM RECONFIG SCRIPT    : /u01/install/VISION/scripts/
DB VM CLEANUP SCRIPT     : /u01/install/VISION/scripts/

The scripts to manage the Oracle E-Business Suite PROD instance are:

SCRIPTS BASE_DIR          :  /u01/install/PROD/scripts/
START SCRIPT              : /u01/install/PROD/scripts/
STOP SCRIPT               : /u01/install/PROD/scripts/
DB VM RECONFIG SCRIPT     : /u01/install/PROD/scripts/
DB VM CLEANUP SCRIPT      : /u01/install/PROD/scripts/

The scripts to manage the Oracle E-Business Suite application tier instance are:

SCRIPTS BASE_DIR                      : /u01/install/APPS/scripts/
START SCRIPT                          : /u01/install/APPS/scripts/
STOP SCRIPT                           : /u01/install/APPS/scripts/
APPS VM RECONFIG SCRIPT               : /u01/install/APPS/scripts/
APPS VM CLEANUP SCRIPT                : /u01/install/APPS/scripts/
CONFIGURE A NEW WEB ENTRY POINT       : /u01/install/scripts/

The scripts to manage the Oracle E-Business Suite single node Vision installation are:

SCRIPTS BASE_DIR                      : /u01/install/VISION/scripts/
START SCRIPT FOR DB                   : /u01/install/VISION/scripts/
STOP SCRIPT FOR DB                    : /u01/install/VISION/scripts/
START SCRIPT FOR APPS                 : /u01/install/VISION/scripts/
STOP SCRIPT FOR APPS                  : /u01/install/VISION/scripts/
DB RE-CONFIG SCRIPT                   : /u01/install/VISION/scripts/
APPS RE-CONFIG SCRIPT                 : /u01/install/VISION/scripts/
DB CLEANUP SCRIPT                     : /u01/install/VISION/scripts/
APPS CLEANUP SCRIPT                   : /u01/install/VISION/scripts/
CONFIGURE A NEW WEB ENTRY POINT       : /u01/install/scripts/

The scripts to reconfigure the network for the virtual machines are:

SCRIPTS BASE_DIR            :  /u01/install/scripts
CONFIGURE DHCP              :  /u01/install/scripts/
CONFIGURE STATIC IP         :  /u01/install/scripts/
SCRIPT TO CLEANUP VM        :  /u01/install/scripts/

After configuring the EBS environment, your EBS 12.2.3 box is ready to use.
You can apply localization patches and NLS translations at any time; in the new EBS releases, this kind of operation can be done online.

Okay, that's all for now. I hope you'll find it useful. If you have any questions, feel free to ask.

Thursday, May 8, 2014

Weblogic -- Deploying ADF into Weblogic Server

In this example, I will deploy ADF into an Oracle Weblogic Server 10.3.6.

After downloading ADF (, we unzip the downloaded file, and we end up with two disk directories (Disk1 and Disk2).

Then we go to the Disk1 directory and execute runInstaller.

[root@erptest adf11_1_1_4]# cd Disk1/
[root@erptest Disk1]# ls

doc install readme.htm runInstaller setup.exe stage

[root@erptest Disk1]# ./runInstaller

When we invoke the installer, it just wants us to supply the jdk location. In my case, it was as follows;

Starting Oracle Universal Installer...

Checking if CPU speed is above 300 MHz. Actual 3066 MHz Passed
Checking Temp space: must be greater than 150 MB. Actual 25792 MB Passed
Checking swap space: must be greater than 512 MB. Actual 21648 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-05-08_01-48-04PM. Please wait ...

Please specify JRE/JDK location ( Ex. /home/jre ), <location>/bin/java should exist :
Please specify a valid JRE/JDK location :
Please specify a valid JRE/JDK location :/etc/alternatives/java_sdk_1.6.0

After specifying the JRE or JDK location, we continue with the installer dialogs.

It is quite a simple installation. The only trick is that you need to supply the directory that will be used for the installation: it should be the directory where your Weblogic Server is located, so you will install ADF into the same directory your Weblogic Server resides in.

In my case; it was /u01/WLS

[root@erptest u01]# cd WLS/

Before the installation;

[root@erptest WLS]# ls

coherence_3.7 domain-registry.xml logs modules ocm.rsp registry.dat registry.xml utils wlserver_10.3

After the installation;

[root@erptest WLS]# ls

coherence_3.7 domain-registry.xml logs modules ocm.rsp oracle_common registry.dat registry.xml utils wlserver_10.3

So, as you see above, after the installation a directory named oracle_common has been added to the directory structure of the Weblogic Server directory.
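A quick sanity check for this step: after the ADF install, oracle_common should sit next to wlserver_10.3 under the Middleware home. The sketch below creates a demo layout under /tmp so it is runnable anywhere; point MW_HOME at your real Middleware home (/u01/WLS in this post) instead:

```shell
# Demo Middleware home; replace with your real one, e.g. MW_HOME=/u01/WLS
MW_HOME=${MW_HOME:-/tmp/demo_mw}
mkdir -p "$MW_HOME/wlserver_10.3" "$MW_HOME/oracle_common"   # demo layout only

# After a real ADF install, both directories should exist.
for d in wlserver_10.3 oracle_common; do
  [ -d "$MW_HOME/$d" ] && echo "ok: $d" || echo "missing: $d"
done
```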

As I have mentioned, the installation is simple.

After the installation, we execute located in $WLS_HOME/common/bin.
Using, we start the Fusion Middleware Configuration Wizard.
Using the wizard, we create a Weblogic domain.
Creating a Weblogic domain is also simple: we create our domain by specifying our admin and managed servers (in this case, the ADF server).

Here are the screenshots taken during the Domain Creation.

After creating the Weblogic domain, we just start our Weblogic server using located in /u01/WLS/user_projects/domains/base_domain (in my case).

When the server is up, we log in to the Weblogic console using http://servername:port/console, with the username/password we provided during the installation.

After logging in, we start our ADF managed server, as we can see it has not been started yet.

We use the control checkbox and the Start button to start the managed server.

Note that our node manager should already be up: if we want to start managed servers from the console, we need a node manager running.

To start the node manager, we use located in /u01/WLS/user_projects/domains/base_domain (in my case).

Okay, after starting our managed server, we use the Deployments tab to check our deployed applications, and that's it.