Friday, August 30, 2013

Database -- How to correct an unused corrupt block.

Normally, corruptions on unused blocks can be ignored, as Oracle will create a new block image when the block eventually needs to be used. On the other hand, if you use RMAN to back up the database, RMAN backups can fail with the ORA-19566 error.
So, if you want to format this block anyway, here is the procedure.

The logic behind this procedure is simple: identify the block, create a table, and fill it with data until you reach the corrupted block.

I suppose that you have already identified the corrupted block, so I will not write about that part.

Create a table which will be used to reformat the corrupt unused block.

create table s (
n number,
c varchar2(4000)
) nologging tablespace <tablespace name having the corrupt block> ;

Create a trigger to report/warn when the corrupt block gets reformatted.

create or replace trigger corrupt_trigger
  after insert on scott.s
  referencing new as new_p
  for each row
declare
  corrupt EXCEPTION;
begin
  IF (dbms_rowid.rowid_block_number(:new_p.rowid) = &blocknumber)
     AND (dbms_rowid.rowid_relative_fno(:new_p.rowid) = &filenumber) THEN
    RAISE corrupt;
  END IF;
exception
  WHEN corrupt THEN
    RAISE_APPLICATION_ERROR(-20000, 'Corrupt block has been formatted');
end;
/

Use the following loop to extend the table we just created. You should run this loop until the unused block belongs to the scott.s table.

begin
  for i in 1..1000000 loop
    EXECUTE IMMEDIATE 'alter table scott.s allocate extent (DATAFILE ''<full path of the datafile containing the corrupt block>'' SIZE 8K)';
  end loop;
end;
/

You can check whether the corrupt block belongs to our table with the following query:

select segment_name, segment_type, owner
from dba_extents
where file_id = <Absolute file number>
and <corrupt block number> between block_id
and block_id + blocks -1 ;

After the block belongs to the table, insert data into it with the following loop.
This operation will actually reformat the block.

begin
  FOR i IN 1..1000000000 LOOP
    INSERT /*+ APPEND */ INTO scott.s SELECT i, lpad('REFORMAT', 3092, 'R') FROM dual;
    COMMIT;
  END LOOP;
end;
/

Once the block is reformatted, check the block corruption again:

For DB versions <= 10gR2:
RMAN> backup validate check logical datafile <fileno>,<fileno>;
For DB versions >= 11gR1:
RMAN> backup validate check logical datafile <fileno>;
RMAN> validate datafile <fileno> block <blockno reported corrupt>, <blockno reported corrupt>;
SQL> select * from v$database_block_corruption;

Friday, August 23, 2013

Oracle E-Business Suite Starting/Stopping/Controlling Notification Mailer with PLSQL

It is possible to stop/start/control the Workflow Notification Mailer using SQL/PLSQL. Using this method to stop/start/control the Workflow Mailer, Application Database Administrators save time.
In addition, this method makes it easy to automate the stop/start operations of the Workflow Notification Mailer.

To stop/start/control the Workflow Notification Mailer using SQL, you need to log in to the database as the APPS schema owner.

Click here for the scripts..
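The linked scripts are not reproduced here, but the general shape of the stop operation can be sketched. This is a hedged sketch: the FND_SVC_COMPONENT.STOP_COMPONENT call is the commonly documented service-component API (verify the exact signature in your release), and the component id 10006 is a placeholder you must look up in FND_SVC_COMPONENTS yourself. The shell function below only emits the PL/SQL text, which you would then feed to sqlplus as APPS.

```shell
# Emit the PL/SQL that would stop a Workflow Mailer component.
# ASSUMPTIONS: fnd_svc_component.stop_component(p_component_id)
# exists in your EBS release; the component id is a placeholder
# (find the real one in FND_SVC_COMPONENTS where
# component_type = 'WF_MAILER').
make_mailer_stop_sql() {
  component_id="$1"
  cat <<EOF
BEGIN
  -- stop the mailer; use start_component to start it again
  fnd_svc_component.stop_component(p_component_id => ${component_id});
  COMMIT;
END;
/
EOF
}

# Usage (placeholder id):
make_mailer_stop_sql 10006
```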

Thursday, August 22, 2013

Database 12c: -- A clear article about Oracle Multitenant Option

1. Why ?
2. What is it ?
3. How ?


Wednesday, August 21, 2013

Database -- Index Block Corruption/ index recreation or index rebuild

It is better to drop and re-create an index to correct an index block corruption.

As Oracle Documentation says:

ALTER INDEX ... REBUILD is faster than dropping and re-creating an index, because this statement uses the fast full scan feature. It reads all the index blocks using multiblock I/O, then discards the branch blocks. A further advantage of this approach is that the old index is still available for queries while the rebuild is in progress.

The rebuild operation does a fast full index scan, which reads the index blocks (consider that one of the index blocks is corrupt). I didn't test it, but it will probably encounter an error if an index block is corrupt.

So drop/create index seems the only solution for now.

But there is another option ALTER INDEX ... REBUILD ONLINE;

As Oracle Support Doc "When Does Offline Index Rebuild Refer To Base Table? (Doc ID 278600.1)" explains, this method accesses the table directly instead of the old index.

So REBUILD ONLINE becomes another solution.

Linux -- Process State Transition

Linux -- Process states and descriptions

TASK_RUNNING: The process is either running on CPU or waiting in a run queue to get scheduled.
TASK_INTERRUPTIBLE: The process is sleeping, waiting for some event to occur. The process is open to be interrupted by signals. Once signalled or awoken by explicit wake-up call, the process transitions to TASK_RUNNING.
TASK_UNINTERRUPTIBLE: This state is similar to TASK_INTERRUPTIBLE, except that in this state the process does not respond to signals. It may not be desirable to interrupt the process while in this state, since it may be in the middle of completing some important task. When the event it is waiting for occurs, the process is awakened by an explicit wake-up call.
TASK_STOPPED: The process execution is stopped; it is not running and not able to run. On receipt of signals like SIGSTOP, SIGTSTP, and so on, the process arrives at this state. The process becomes runnable again on receipt of the signal SIGCONT.
TASK_TRACED: A process arrives at this state while it is being monitored by other processes such as debuggers.
EXIT_ZOMBIE: The process has terminated. It is lingering around just for its parent to collect some statistical information about it.
EXIT_DEAD: The final state (just like it sounds). The process reaches this state when it is being removed from the system since its parent has just collected all statistical information by issuing the wait4() or waitpid() system call.
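These kernel states map loosely onto the one-letter codes that ps prints in its STAT column (R, S, D, T, Z). A small shell sketch of that conventional mapping, summarized from the ps man page (ps may append modifier flags after the first letter):

```shell
# Map the first letter of a ps STAT code to the kernel task state
# described above.
stat_to_state() {
  case "$1" in
    R*) echo "TASK_RUNNING" ;;
    S*) echo "TASK_INTERRUPTIBLE" ;;
    D*) echo "TASK_UNINTERRUPTIBLE" ;;
    T*) echo "TASK_STOPPED or TASK_TRACED" ;;
    Z*) echo "EXIT_ZOMBIE" ;;
    *)  echo "unknown" ;;
  esac
}

# Usage: annotate every process on the system.
ps -eo pid=,stat=,comm= | while read -r pid stat comm; do
  printf '%s %s -> %s\n' "$pid" "$comm" "$(stat_to_state "$stat")"
done
```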

Linux -- D state processes & TASK_KILLABLE state

The following figure shows the process state transitions:

D state is a special sleep mode.
In D state, the code cannot be interrupted.
When a process is in D state, it seems blocked from our perspective, but actually nothing is blocked inside the kernel.
For example, when a process issues an I/O operation, the kernel is triggered to run the relevant system call.
This code goes from filename to filesystem, from filesystem to block device and device driver, and then the device driver sends the command to the hardware to fetch a block on disk.
The process, on the other hand, is put into sleeping state (D). When the data is fetched, the process is put into the runnable state again. After this point, the process will run (continue its work) when the scheduler allows it to.
D state processes cannot be killed with kill signals.
The exact name for the D state is TASK_UNINTERRUPTIBLE, and as the name suggests, processes in this state cannot be interrupted.
In most cases, these unkillable processes appear when NFS shares are used for I/O. This is probably because error detection on local disks is very fast compared with the TCP timeout (approximately 300 seconds).
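To see whether any such processes exist right now, the ps STAT column can be filtered for a leading D. The filter below is plain awk, so it can also be run against a canned ps snapshot:

```shell
# Print pid and command of processes in uninterruptible (D) sleep.
list_d_state() {
  awk '$2 ~ /^D/ { print $1, $3 }'
}

# Live usage:
ps -eo pid=,stat=,comm= | list_d_state
```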

The new TASK_KILLABLE state, introduced in Linux 2.6.25, is a solution for killing these kinds of unkillable processes.
TASK_KILLABLE works like TASK_UNINTERRUPTIBLE, except that the process can additionally respond to fatal signals.

The functions for the TASK_KILLABLE state are below.
Reference: IBM

  • int wait_event_killable(wait_queue_t queue, condition); 
    This function is defined in include/linux/wait.h; it puts the calling process to sleep killably in queue until the condition evaluates to true.
  • long schedule_timeout_killable(signed long timeout); 
    This is defined in kernel/timer.c; this routine basically sets the current task's state to TASK_KILLABLE and calls schedule_timeout(), which makes the calling task sleep for timeout number of jiffies. (In UNIX systems, a jiffy is basically the time between two consecutive clock ticks.)
  • int wait_for_completion_killable(struct completion *comp); 
    Defined in kernel/sched.c, this routine is used to wait killably for the completion of an event. This function calls schedule_timeout() for MAX_SCHEDULE_TIMEOUT (defined to be equal to LONG_MAX) jiffies if there are no fatal signals pending.
  • int mutex_lock_killable(struct mutex *lock); 
    Defined in kernel/mutex.c, this routine is used to acquire a mutex lock. However, if the lock is not available and the task is waiting to get the lock, and in the meantime it gets a fatal signal, the task is removed from the list of waiters waiting for the mutex lock in order to process the signal.
  • int down_killable(struct semaphore *sem); 
    Defined in kernel/semaphore.c, it is used to acquire the semaphore sem. If the semaphore is not available, the task is put to sleep; if a fatal signal is delivered to it, it is removed from the waiters' list and has to respond to the signal. The other two methods of acquiring a semaphore are the routines down() and down_interruptible(). The function down() is deprecated now; you should use either down_killable() or down_interruptible().

Friday, August 16, 2013

Database -- Finding Current Buffer Cache Size of the Database Objects

Database -- Why we need the serial# to kill a session

SID ->Session identifier

SERIAL# -> Session serial number. Used to identify uniquely a session's objects. Guarantees that session-level commands are applied to the correct session objects if the session ends and another session begins with the same session ID

To kill a session in Oracle, we use: alter system kill session 'sid,serial#'

One question comes to mind: why not use the sid alone, as we know the sid is a unique identifier?

I will write a short example scenario to explain this..

Suppose you want to kill sid 250,
and you prepared the kill statement as "alter system kill session 250" (this is not correct syntax, but suppose it were correct and possible).
--> Here is the important part.
Just before you execute your kill statement, suppose the session with sid 250 closed itself (for example, it was a Toad session and the user closed the Toad program), and just one second later another session connected to the database and was assigned sid 250.
So when you execute your kill command, you will kill sid 250, but you will actually do something wrong, as sid 250 will belong to another session at that time.

So this is the need for the serial# in the kill command.
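As a trivial illustration, the kill statement is always built from the (sid, serial#) pair, never the sid alone; the values below are placeholders:

```shell
# Build an ALTER SYSTEM KILL SESSION statement from a (sid, serial#)
# pair -- the pair uniquely identifies one incarnation of a session.
make_kill_statement() {
  sid="$1"
  serial="$2"
  echo "alter system kill session '${sid},${serial}';"
}

# Usage (placeholder values taken from v$session):
make_kill_statement 250 1234
```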

Database -- How to force a sql statement to be hard parsed

Forcing a SQL statement to be hard parsed every time: unlike MS SQL Server, there is no direct Oracle function to do this.

There are some ways to make Oracle hard parse a SQL statement every time it runs, such as changing the statistics, but they are inconvenient. Adaptive cursor sharing can deal with this, but it is an 11g feature.

The most applicable solution is Oracle RLS (Row Level Security).

By using Oracle RLS, the queries can be modified on the fly, which indirectly makes Oracle hard parse them.

Demo: -- Reference: http://oracle-randolf.blogspot.com
create table t1 as select * from all_objects where rownum <= 1000;

exec dbms_stats.gather_table_stats(user, 't1');

drop sequence s1;

create sequence s1 cache 1000;

create or replace package pkg_rls_hard_parse is
  function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2;
end pkg_rls_hard_parse;
/

create or replace package body pkg_rls_hard_parse is
  function force_hard_parse (in_schema varchar2, in_object varchar2)
  return varchar2
  is
    s_predicate varchar2(100);
  begin
    select s1.nextval || ' = ' || s1.currval into s_predicate from dual;
    return s_predicate;
  end force_hard_parse;
end pkg_rls_hard_parse;
/

exec dbms_rls.add_policy(user, 't1', 'hard_parse_policy', user, 'pkg_rls_hard_parse.force_hard_parse', 'select');

Database -- Damage/Corrupt Oracle Block using dd command

To implement our recovery test scenarios, we sometimes need real corruptions.
The following method can be used to corrupt an Oracle block.
In this method, we will use the dd command in Linux/Unix to corrupt a datafile's block.

First, we copy the datafile up to the block number (the block we want to corrupt) into a temp file.
Second, we create another temp file by copying one unrelated block into it.
Then, we copy the datafile contents starting from the block we want to corrupt + 1 into another temp file.
Finally, we concatenate these three files into a single file and overwrite the actual datafile with this single file.

Some information about dd;

if = input file
of = output file
bs = block size
count = copy only BLOCKS input blocks
skip  = skip BLOCKS ibs-sized blocks at start of input
block size (bs) option will override both ibs and obs.
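The split-and-reassemble logic can be rehearsed safely on a throwaway file before touching a real datafile. The sketch below uses a tiny 4-block file with an 8-byte block size (instead of 8192) and "corrupts" block 2 by replacing it with a copy of block 0, mirroring the count/skip arithmetic used below:

```shell
# Rehearse the dd block-overwrite trick on a synthetic file:
# 4 blocks of 8 bytes each; the victim is block 2 (0-based).
cd "$(mktemp -d)"
printf 'AAAAAAAABBBBBBBBCCCCCCCCDDDDDDDD' > datafile.dbf

dd if=datafile.dbf of=f1 bs=8 count=2 2>/dev/null   # blocks before the victim
dd if=datafile.dbf of=f2 bs=8 count=1 2>/dev/null   # one unrelated block (block 0)
dd if=datafile.dbf of=f3 bs=8 skip=3  2>/dev/null   # blocks after the victim

cat f1 f2 f3 > corrupted.dbf
cat corrupted.dbf    # block 2 now holds a copy of block 0 instead of CCCCCCCC
```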

For example, to corrupt block number 15 on a database with a block size of 8192 bytes:

shutdown the Oracle Database

cp /actual_path_of_the_datafile/datafilename.dbf /tmp/

cd /tmp

dd if=datafilename.dbf of=f1 bs=8192 count=15
dd if=datafilename.dbf of=f2 bs=8192 count=1
dd if=datafilename.dbf of=f3 bs=8192 skip=16

cat f1 > /tmp/datafilename.dbf
cat f2 >> /tmp/datafilename.dbf
cat f3 >> /tmp/datafilename.dbf

cp datafilename.dbf /actual_path_of_the_datafile/datafilename.dbf

start the Oracle Database and issue the relevant queries that will use the corrupted block.
The following errors will be encountered:

ORA-01578: ORACLE data block corrupted (file # 4, block # 15)
ORA-01110: data file 4: '/actual_path_of_the_datafile/datafilename.dbf'

So, we have a corrupt block now. At this point, we can test our recovery scenarios, like RMAN's blockrecover.

Tuesday, August 13, 2013

EBS-- Running Multiple version of JRE Plugins

Although it is not recommended, here is the way to allow multiple JRE plugin versions to access a single EBS instance.

*Copy the exe file of the desired plugin to the $OA_HTML directory, to allow users to download the plug-in at runtime.
*Add the following under [myExtensions] in the appsweb.cfg file ($FND_TOP/admin/template/appsweb.cfg).
Change the values below according to your environment.

[J16021]       <--- Desired JRE plugin version to be added
connectMode=servlet   <-- Take it from your appsweb.cfg

*Set the ICX: Forms Launcher profile at user level for the desired E-Business Suite users. These users will use the newly added plugin.

After this point, the desired users whose client machines have JRE plugin version 21 will access EBS through JRE version 21 instead of the default.

--Note that we assumed JRE version 21 is the highest version of JRE available on those particular client machines, because for Internet Explorer the environment may launch the highest version of JRE on the client. If you have multiple versions of the JRE plug-in installed on your desktop, try to control the use of the desired JRE.

Here is Steven Chan's post about static versioning when using Internet Explorer:

Static Versioning Not Directly Supported With Native Sun Plug-In

Sun's native Java plug-in from version 1.5.0_06 and later does not support static versioning by default when using Microsoft Internet Explorer. If you have multiple versions of the JRE plug-in installed on your desktop, the latest version installed will run.

For example:

If Oracle E-Business Suite is set up to run JRE 1.5.0_12 on the server and both JRE 1.5.0_12 and JRE 1.5.0_13 are installed on the desktop, the environment will incorrectly launch using JRE 1.5.0_13 rather than JRE 1.5.0_12.
If Oracle E-Business Suite is set to run on JRE 1.5.0_13 on the server and both JRE 1.5.0_13 and JRE 1.6.0 are installed on the desktop, the environment will incorrectly launch using JRE 1.6.0 rather than JRE 1.5.0_13.

Monday, August 12, 2013

Database -- ORA-700 Soft Internal Error

I wrote this post because the error number 700 attracted my attention.
ORA-700 errors were introduced in Oracle 11g.
These are referred to as internal errors, like ORA-00600, but ORA-00700 errors are called Soft Internal Errors; they do not cause anything fatal to occur in the process.

Some (less harmful) ORA-00600 errors in Oracle 10g are similar to ORA-00700 errors in Oracle 11g.
For example, if the code finds that a hint in a query is corrupt, it can just raise a soft assert, since the query can be run without the hint being used, and so there is no need to crash the process. So in 11g, such a condition raises an ORA-700 error.

Note that these errors are covered by the error lookup tool named the ORA-600/ORA-7445/ORA-700 Error Look-up Tool.

Saturday, August 3, 2013

LDAP Command Examples

The following LDAP command examples can be useful for Database Administrators, especially when working on the middle tier. I used these types of commands to export/import user information from Microsoft's Active Directory to Oracle's Internet Directory.

*To Export all the user information (with all attributes) from a ldap directory

ldapsearch -p ldap_port -h  ldap_host -D "cn=adminuser" -w adminpasswd -L -s one -b "cn=Users,dc=company,dc=com" "(cn=*)" > users.ldif

*To Export all the user information (specifying attributes) from a ldap directory
ldapsearch -p ldap_port -h  ldap_host -D "cn=adminuser" -w adminpasswd -L -s one -b "cn=Users,dc=company,dc=com" "(cn=*)" attribute1 attribute2 attribute3 > users.ldif

*To export a single user from an ldap directory (with all attributes)
ldapsearch -p ldap_port -h ldap_host -D "cn=adminuser" -w adminpasswd -L -s one -b "cn=Users,dc=company,dc=com" "(cn=user_you_want_to_export)" > user.ldiff

*To export a single user from an ldap directory (specifying attributes)

ldapsearch -p ldap_port -h  ldap_host -D "cn=adminuser" -w adminpasswd -L -s one -b "cn=Users,dc=company,dc=com" "(cn=user_you_want_to_export)" attribute1 attribute2 attribute3  > user.ldiff

*To Import an ldiff export file
ldapadd -h ldap_host -p ldap_port -D "cn=adminuser" -w adminpasswd -c -v -f /home/appoid/ldiff_file.ldiff

*To check a user/password with ldapbind
ldapbind -h LDAP_HOST_IP -p LDAPHOST_PORT -D"CN=Erman,OU=OU_erptemp,OU=OU_Users,DC=testdom,DC=com" -w"password_of_erman"

Thursday, August 1, 2013

Installing Oracle Single Sign-On 10gR3 against OID 11g using an Oracle 11gR2 database

Important Note: This is not a supported method for installing SSO 10gR3 against OID 11g, because of the database version we are using here.
According to Oracle Support, we need a 10g database for the installation of SSO 10gR3. If there is a need to use an 11gR2 database, we need to upgrade the database tier after the SSO installation.
But I installed it my way, actually the way addressed in "" ..
It seems there is a conflict between these documents. Whatever; I fixed the errors during the installation, SSO is working, and after all it is supported now.

So procedure is as follows;

First of all, we install OID. We follow the steps in the following link.
Don't do the "OID and weblogic to use service-ip" steps if you are not using a cluster.

After the installation of OID, we make a quick health check, and if everything seems OK we start the Oracle Single Sign-On 10gR3 installation against OID 11g.

The root document to follow for this operation explains how to install Oracle Single Sign-On and Oracle Delegated Administration Services Release 10g against Oracle Internet Directory 11g Release 1 (11.1.1).

*So, in this context, first of all we run the script below with the option 1 argument.

$OID11gR1_ORACLE_HOME/perl/bin/perl \
$OID11gR1_ORACLE_HOME/ldap/bin/ OID_HOST OID_PORT {-ssl | -nonssl} \
{-op1 | -op2 | -op3}

*Next, we install Oracle Repca onto our Windows client (very important: the version must be ..).
The regional setting of the Windows client should be "English". Otherwise, we can have problems with filename conversion and lower/uppercase conversion during the installation.
Repca is an old release, so if you don't have the installation files, you have to open an SR for them. The Repca zip file name is V18656-1. It has two Disk folders in it. We use Disk1/runInstaller to install Repca on our client first.
After the installation, we use runRepca.bat (which resides in the folder where we installed Repca) to load the SSO schemas.
After loading the SSO schemas, we use Repca's register option to register the schemas. Note that we need to use the SSL port of OID for the registration.

* When we finish our work with Repca, we take the below actions one by one;
Unlock the ods user in the OID database (if it is locked).
Change the password of ods schema (back to the same).
Set the TNS_ADMIN environment variable to point to the $ORACLE_INSTANCE/config
Create the wallet using the following (note that we can use our OID database SID in the connect string)
$OID11gR1_ORACLE_HOME/ldap/bin/oidpasswd \
connect=CONNECT_STRING create_wallet=true
Restart OID.

*Now we execute the script once again, but this time with the op2 argument (option 2).

$OID11gR1_ORACLE_HOME/perl/bin/perl \
$OID11gR1_ORACLE_HOME/ldap/bin/ OID_HOST OID_PORT {-ssl | -nonssl} \

At this point, we have created our SSO schemas in the OID database and registered them with OID.
So we are ready to install Oracle Single Sign-On and Oracle Delegated Administration Services Release 10g.
Oracle Single Sign-On and Oracle Delegated Administration Services Release 10g is an old release, so if you don't have the installation files, you have to open an SR for them.
The installation files we will use are (note that we are installing on AIX): B34349-01, B34350-01, B34351-01, B34352-01 -> 4 zip files.

Before the installation, to prevent "ORA-24247: network access denied by access control list (ACL)" errors, we create an ACL and assign it to PUBLIC. (This error is encountered in the configuration phase.) This can be accomplished as follows:

BEGIN
  DBMS_NETWORK_ACL_ADMIN.create_acl (
    acl         => 'sso_oid.xml',
    description => 'ACL for SSO to connect to OID',
    principal   => 'PUBLIC',
    is_grant    => TRUE,
    privilege   => 'connect');

  DBMS_NETWORK_ACL_ADMIN.assign_acl (
    acl  => 'sso_oid.xml',
    host => '*');
END;
/

!!!DO NOT forget to commit!!!

After this, we invoke the installer and install Oracle Single Sign-On and Oracle Delegated Administration Services Release 10g.
During the installation, when the installer asks us to run the root script, we do not do that immediately.
(Important) At that time, we apply patch 5649850, and when the installation of patch 5649850 is successful, we run the root script and continue the installation.
Note that this patch allows Oracle Single Sign-On and Oracle Delegated Administration Services Release 10g to connect to a Release 11.x database.

During the installation we can encounter a warning saying "opmn cannot be started" or something similar; we ignore this.

In the configuration phase of the installation, we can encounter a timezone check error in the Enterprise Manager step.
For the solution, we change the line with "return $rc;" to "return 0" in $ORACLE_HOME/bin/. The line number is approximately 313.

After the installation of SSO, we need to install the upgrade patch to upgrade our SSO.
Note that the upgrade patch number is 7215628, and it is available in Oracle Support.
Before beginning the 7215628 installation, we need to apply patch 6265268. This patch prevents the 11g database connection problem of the runInstaller that we use for the installation.
(Important) Note that we need to make all the copy operations in that patch's Readme!

So that is it: we installed Oracle Single Sign-On and Oracle Delegated Administration Services Release 10g against Oracle Internet Directory 11g.

Depending on the scenario, you can go further.

*So, if you have EBS R12 and you want to integrate it with this SSO:

You run the following command from $FND_TOP/bin to accomplish that.

$FND_TOP/bin/ -script=SetSSOReg  (note that if you have multiple application nodes, you have to run this on all the nodes)

Note that: There are three components that can be registered or de-registered in Release 12 with the SSO/OID registration utility when integrating with Oracle Single Sign-On Server 10gR3.
The registration utility automatically detects the registered components and performs registration for the un-registered components.
So there is no requirement to pass individual registration arguments.

Reference Support Note: Registering Oracle E-Business Suite Release 12 with Oracle Internet Directory 11gR1 and Single Sign-On [ID 1370938.1]

*If you have a cluster and want OID to use the cluster hostname:
- Change ServerName in httpd.conf to the virtual server name.
- Ensure there is no added "VirtualHost" configuration in httpd.conf.
- Run the following -> $ORACLE_HOME/sso/bin/ http virtual_server_name 7777
- In OID ODSM, open cn=OracleContext > cn=Products > cn=DAS and select cn=OperationURLs. The right-hand pane will display a number of fields or attributes. The last field, orcldasurlbase, should be set to the Infrastructure HTTP Server and port.
- Clear the OID cache by connecting to the OID database using sqlplus (with the orasso user) and executing the following;
sqlplus orasso/(orasso_password) --> find it first

-To find Orasso Password:

ldapsearch -h "thehostnameoftheoidserver" -p oidnonsslport \
  -D cn=orcladmin -w orcladmin-password \
  -b "cn=IAS Infrastructure Databases, cn=ias, cn=Products, cn=OracleContext" \
  -s sub "orclResourceName=ORASSO" orclpasswordattribute
- Re-register SSO with OID using: -oracle_home_path $ORACLE_HOME -site_name virtualhostname:port -config_mod_osso TRUE -mod_osso_url http://kaoidoapst3:7777

-Clean the OID cache once again and restart sso services using opmnctl.

Reference Support Notes:
Preparing and Configuring Virtual Hosts on OracleAS 10g HTTP Server (Doc ID 293697.1)
Note:292380.1 How to Refresh Cache for OID Parameters

Unix signals added.

The signals listed in the below link may be available for use with kill. Where known, the constants, numbers, and default behavior are shown.