Wednesday, December 28, 2016

EBS R12 -- XML Publisher PDF report, Font problem, xdo.cfg and font mappings

A strange font problem was escalated to me.
The problem was on an internal EBS instance, where we were trying to build a PDF report based on statistics data related to our issue-tracking system.
We were running a concurrent program to produce a PDF output through XML Publisher.
The fonts used in the report's template (RTF) were Times New Roman, Times New Roman Bold and Times New Roman Italic.
However, the fonts in the output generated by Oracle Reports and XML Publisher looked a little strange.

The Problem ( XML Publisher Report output -- font issue):

The problem was with non-Latin bold characters. The non-Latin characters were displayed properly, but they were not bold.
That is, characters such as "Ş" or "İ" were displayed correctly but not in bold, although they were configured as bold in the report's template.
We knew that Times New Roman includes non-Latin characters and has a bold variant as well.
So this was not acceptable at all.

This problem actually made me revisit the XML Publisher functionality of EBS once again.
So, before going forward and giving you the solution, let's recall what XML Publisher is (from an EBS perspective);

What is XML Publisher (EBS perspective):

XML Publisher, aka BI Publisher, is a template-based reporting tool used for data extraction and display.
It can be used to create reports based on XML data extracts from Oracle Applications concurrent requests/programs.
XML Publisher can generate PDF, HTML, RTF, Excel (HTML), or even text outputs.

Additional info: in the standard/default EBS configuration, getting Oracle Reports PDF outputs that contain special characters (like Turkish characters) is not supported.
That is, by default you cannot produce proper Oracle Reports PDF outputs through concurrent requests if your character set contains special characters.
In 12.2, Oracle Reports PDF is supported only for the US7ASCII, WE8ISO8859P1 and WE8MSWIN1252 character sets.
So, in order to get PDF outputs, Oracle recommends using BI Publisher (XML Publisher) if your character set is different from those three.

What actually happens when creating an XML Publisher based report in EBS is as follows;

The concurrent request collects the data.
The Output Post Processor calls XML Publisher to merge the template file (RTF) with the data collected by the concurrent program/request.

In order to create an XML Publisher based report from a concurrent program output; the following should be done;

Register your concurrent program as a Data Definition in the XML Publisher Template Manager.
Design an XML Publisher template (rtf).
Register that template in the XML Publisher Template Manager.
Select XML output for the concurrent program.
Submit the concurrent request (choose template, language, format).

Well, after this information, let me introduce the two configuration files that are relevant to font issues in EBS report outputs.

1) "uifont.ali": if an Oracle Reports report (not an XML Publisher report) is executed, then uifont.ali is used for the font mappings.

That is;

When the report is run, the reports engine first looks in uifont.ali to see if there is a font mapping to be applied for the specified fonts. Then it looks in the uiprint.txt file for the printer defined in TK_PRINTER and, in turn, examines the PPD file that the uiprint.txt entry points to. The PPD is a printer definition file with a lot of information about the capabilities of a certain printer. Finally, it is necessary to provide an AFM file for each of the fonts, which contains the font metrics for the reports engine to generate the output.

"So, uifont.ali may be related with the font issues but is not related with XML Publisher Reports."

2) "xdo.cfg":

XML Publisher uses the font mappings defined in the xdo.cfg file when producing outputs.

-- The locations of xdo.cfg:
11i: under $AF_JRE_TOP/jre/lib
R12.0: under $AF_JRE_TOP/lib
R12.1: under $XDO_TOP/resource
12.2: under fs_ne/EBSapps/appl/xdo/resource

"So, xdo.cfg is definitely related with the font issues  in XML Publisher Reports."

As this issue was on an XML Publisher based PDF report, the configuration file we looked at was xdo.cfg.

Here is key info :

XDO uses Albany fonts for non-Latin-1 strings in the PDF output by default.

This info was the key for our problem as well!

Let's go back to our problem;


Let me remind you of the problem:

The problem was with non-Latin bold characters in an XML Publisher based PDF report.
Those characters were not bold.
That is, characters such as "Ş" or "İ" were displayed properly but not in bold, although they were configured as bold in the report's template.

With the key info (above) in mind, we checked the PDF output using Acrobat Reader and saw that the Albany fonts were there.

Why was the problem only in bold characters then?

Well, it is because the Albany fonts have no bold variant.
Take a look at the font table in the following reference:

Ref: E1: XMLP: Bold Font Specified in a RTF Template is Not Reflected in the Actual PDF Output (Doc ID 1212076.1)

Weight = normal means no bold variant.

You can see that the Albany fonts have no bold (weight != bold)...


We configured XML Publisher to use Times New Roman for producing the PDF reports.
In order to do that, we took the TTF files from the Windows client where the XML Publisher template file was prepared, and copied them to Linux, where our application server (concurrent tier) was running.
After that, we made the font mappings in xdo.cfg and ran the concurrent program once again.
These actions fixed the problem, and the PDF could be generated with all its italic, bold and normal characters. The output displayed perfectly and contained the Times New Roman, Times New Roman Italic, and Times New Roman Bold characters.

Let's take a closer look at what we did;

We copied the Times New Roman fonts (from C:\Windows\Fonts\) from our Windows client to the Linux EBS application server (to /home/applmgr/fonts).
We created xdo.cfg in the $XDO_TOP/resource directory and put the following font mappings in it;

<font family="Times New Roman" style="normal" weight="bold">
<truetype path="/home/applmgr/timesbd.ttf" />
</font>
<font family="Times New Roman" style="normal" weight="normal">
<truetype path="/home/applmgr/times.ttf" />
</font>
<font family="Times New Roman" style="italic" weight="normal">
<truetype path="/home/applmgr/timesi.ttf" />
</font>
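For reference, XML Publisher expects such mappings to live inside a <fonts> section of a complete xdo.cfg. A minimal sketch is shown below; the wrapper element and namespace follow the commonly documented xdo.cfg layout, and the font paths are the ones used above:

```xml
<config version="1.0.0" xmlns="http://xmlns.oracle.com/oxp/config/">
  <fonts>
    <font family="Times New Roman" style="normal" weight="normal">
      <truetype path="/home/applmgr/times.ttf"/>
    </font>
    <font family="Times New Roman" style="normal" weight="bold">
      <truetype path="/home/applmgr/timesbd.ttf"/>
    </font>
    <font family="Times New Roman" style="italic" weight="normal">
      <truetype path="/home/applmgr/timesi.ttf"/>
    </font>
  </fonts>
</config>
```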

Well... That's the tip of the day.  I hope you will find this article useful.

Friday, December 16, 2016

EBS 12.2 - ISG Metadata Provider, CASDK-0005

This will be a quick tip on EBS 12.2 ISG (Integrated SOA Gateway).

That is, you may encounter CASDK-0005 errors while trying to connect to EBS 12.2 through ISG (i.e. using the ICS adapter).
So, if you see this error message, you are probably missing some required patches.

The full error message is as follows;

CASDK-0005: Verify if Metadata Provider service is deployed with alias 'provider' in Oracle E-Business Suite. Ensure that all its methods are deployed with GET verb. For possible resolution

In short, the Metadata Provider should be deployed to fix this error, and it is not even installed unless you apply the patch named "Patch 23510855: ISG R12.2 CONSOLIDATED PATCH (ICS 16_2_5)".

-- actually, this patch is documented in the EBS ISG document "Installing Oracle E-Business Suite Integrated SOA Gateway, Release 12.2 (Doc ID 1311068.1)" and it is mandatory; but still, I want to underline this patch, as I'm facing CASDK-0005 errors in some ISG implementations.

The action plan for solving this problem is as follows;
  1. Apply patch 23510855 to the EBS 12.2 instance.
  2. Connect to EBS.
  3. Use the ISG responsibility to open the Integration Repository.
  4. Hit the search button.
  5. Search for %Meta% (this will return Metadata Provider).
  6. Click on the name.
  7. Open the REST Web Service tab.
  8. Choose all the methods displayed there.
  9. Click the deploy button.

Oracle VM/XEN -- resizing Xen Virtual block Device (xvd) online.. Is it really possible?

Recently, I needed to extend an xvd device in a virtualized ODA X5 environment.
Note: the xvd device was attached to an ODA_BASE machine.

The ODA VM and Xen Versions of the environment were:

ODA VM Version:
[root@vmsrv1 /]# cat /etc/issue
Oracle VM server release 3.2.9
Hypervisor running in 64 bit mode with Hardware Virtualization support.

Xen version:
[root@vmsrv1 /]# dmesg | grep Xen\ version
Xen version: 4.1.3OVM (preserve-AD)

Well... I had to resize the partition and the filesystem without rebooting the guest VM.
Note: I want to remind you that the guest VM was the ODA_BASE domain, by the way (ODA_BASE node 1).

Here is the approach that I took for this;

  • First, I extended the xvd device (actually the backend device, which was a file) from DOM0.

dd if=/dev/zero bs=1M count=20000 >> u02.img (a 20GB extend)

  • Later on, I unmounted the relevant partition from ODA_BASE.

umount /u02

  • Then, I detached and attached the block device using xm. I needed to do this because there was no other way to make ODA_BASE (that is, the DOMU) see up-to-date information about the extended disk. In other words, there is no rescan feature for xvd devices. If it were a SCSI device, we could directly rescan the SCSI bus, but for xvd we have no such feature. Note that there is an "xm block-configure" command, which can be executed from DOM0, but it can only be used for CDROM devices. So, I executed the following xm commands from DOM0;

 xm block-detach oakDom1 /dev/xvdb
 xm block-attach oakDom1 file:/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/u02.img  xvdb w

  • After reattaching the virtual device to the guest (node 1, oakDom1), I checked its size using fdisk and ensured that the information fdisk was giving me was up to date.

  fdisk -l /dev/xvdb

  • Lastly, I recreated the partition on xvdb to extend the current partition and used resize2fs to extend the filesystem.
  • Once my resize operations were finished, I mounted the device back to its mount point (/u02), and my work was done.

So, at the end of the day, I could resize an xvd disk without rebooting the related guest machine (ODA_BASE node 1). However, I did unmount it, right? So it was not a fully online operation.
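As a side note, the dd append used in the first step can be sanity-checked on any scratch file: appending from /dev/zero with >> grows the backing file by exactly the requested amount. A small sketch (the file is a throwaway mktemp stand-in, not the real u02.img):

```shell
img=$(mktemp)                                         # throwaway stand-in for u02.img
dd if=/dev/zero of="$img" bs=1M count=10 2>/dev/null  # create a 10 MB "virtual disk"
before=$(stat -c%s "$img")
dd if=/dev/zero bs=1M count=5 2>/dev/null >> "$img"   # append 5 MB, as in the post
after=$(stat -c%s "$img")
echo "grew by $(( (after - before) / 1024 / 1024 )) MB"
rm -f "$img"
```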

After this short real-life story, I want to give my answer to one question: is it really possible to resize an xvd online? My answer is "Unfortunately, NO!"

There is no way at the moment; at least I couldn't find one.
More specifically, there is no way to rescan an xvd or tell the DOMU that the xvd configuration has changed (without detaching and attaching).

What can be done? Any alternatives?
Resizing an xvd based partition can be done online using LVM!

(Note that : xvd based partition != xvd device)

That is, we can always add a new xvd from DOM0 to DOMU (i.e ODA BASE nodes) using block-attach and extend our logical volumes on DOMU after that.

Of course, we need to be using LVM-based partitions in DOMU to be able to do this.

(Note that the ODA_BASE mount points, which come by default with the ODA virtualized deployment, are not based on LVM.)
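The LVM route can be sketched as a short command plan. This is a dry-run that only builds and prints the plan; the device and volume names (/dev/xvdc, datavg, u02lv) are hypothetical placeholders, and the real commands must be run as root in DOMU:

```shell
newdisk=/dev/xvdc   # the freshly block-attached xvd (hypothetical name)
plan="pvcreate $newdisk
vgextend datavg $newdisk
lvextend -l +100%FREE /dev/datavg/u02lv
resize2fs /dev/datavg/u02lv"
echo "$plan"        # dry-run: print the plan instead of executing it
```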

Monday, December 12, 2016

GRID/RAC -- a real life story on a GRID upgrade.. How to check the status of Grid Upgrade and How to proceed when OUI is not available?

I was upgrading a GRID home using OUI. It was a direct upgrade (installation and upgrade at the same time, by selecting the "Upgrade Oracle GI or Automatic Storage Management" option) and it was an out-of-place upgrade.

The GUI/OUI (Oracle Universal Installer) was doing its job perfectly, and after a while it requested the rootupgrade script to be executed on all the RAC nodes.

I was working on my customer's desktop and he was using Xming on top of putty to display the X screens.
Note that Xming is client software that displays X screens on the client. But it is not like vncserver; that is, if Xming crashes, the job that we run through it crashes as well.

Anyway, I executed it on node 1 without any problems.
On the other hand, while I was executing it on node 2, the customer's desktop crashed. That is, Xming crashed because of a client-side problem, and putty terminated itself...

It was a catastrophic incident that made the script stop immediately.
So I was, like, in the middle of nowhere. That is, the rootUpgrade script was executed on node 2, but its state was ambiguous. Was it able to complete its job? Should it be executed again? Or should we cancel the upgrade at that moment?

The answers to these questions could be found by checking the binaries in use and executing the following commands on node 2;

The outputs should have convinced me that our GRID infrastructure was upgraded successfully. That is, all the outputs should have pointed me to the newly upgraded GRID Oracle Home and to the new GRID version.
  • Check that Oracle ASM is up & running from the upgraded home (use ps -ef)
  • Check the files in use and see that they are the files stored in the upgraded home (use lsof)
  • Reboot the server and check that the cluster services are automatically started from the new home without any errors (optional)
Analyze the outputs of following commands;
  • crsctl stat res -t -init 
  • crsctl stat res -t 
  • ps -ef |grep d.bin 
  • ps -ef |grep -i ohasd 
  • cat /etc/oracle/olr.loc 
  • crsctl query crs activeversion 
  • crsctl query crs releaseversion 
  • crsctl query crs softwareversion 
  • cat inventory.xml
In my case, all the checks passed except the one for inventory.xml.
This meant the rootUpgrade scripts had completed successfully; however, the remaining work of OUI was missing.

The CRS=true flag in the inventory.xml of the nodes was still set on the line describing the old Oracle Home.

In order to fix this; I followed the Oracle Support Document named "How to Complete Grid Infrastructure Configuration Assistant(Plug-in) if OUI is not Available ( Doc ID 1360798.1 )"

I verified that the CRS=true flag was migrated to the new home in all the inventory.xml files and was convinced that the GRID upgrade was successful.
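The inventory check itself boils down to a grep. Here is a self-contained sketch; the home names and paths in the sample inventory are made up for illustration, and on a real system you would grep the actual oraInventory/ContentsXML/inventory.xml:

```shell
inv=$(mktemp)
cat > "$inv" <<'EOF'
<HOME NAME="Ora11gGridHome" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="2" CRS="true"/>
<HOME NAME="OraCrsHome" LOC="/u01/app/crs" TYPE="O" IDX="1"/>
EOF
# After a successful out-of-place upgrade, CRS="true" must sit on the NEW Grid home:
crs_home=$(grep 'CRS="true"' "$inv" | sed 's/.*LOC="\([^"]*\)".*/\1/')
echo "$crs_home"
rm -f "$inv"
```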

At that moment; I was good to proceed with RDBMS Upgrade and I did the RDBMS upgrade without any problems.

So, here is the important conclusion;
  • Always use vncserver and a vnc session while doing important work.
  • If your upgrade is terminated because of a server or client problem, don't panic. Check Oracle Support, check using your own methods, try to analyze the situation and make a decision accordingly. If you do your analysis consciously, and if you are lucky, then you may be good to proceed.
  • Create a proactive SR (an SR opened before the upgrade) before these kinds of important upgrades. A proactive SR opened as Severity 1 may give you extra comfort in case something goes wrong.

RDBMS/GRID -- Upgrading RAC Database and GRID from to

Here is the plan that I used in a critical RAC database upgrade.
The environment was a 2 Node RAC. The OS layer was Oracle Linux 6 64 bit. GRID and RDBMS versions were
This plan worked like a charm, and the methodology, flow and steps in it were indeed approved by Oracle Support (through an Oracle SR).

So: tested, verified and approved :)
Note that this plan also includes post-upgrade actions, such as applying the latest GI PSU to the environment.

General overview :

-Check the readiness for GRID and RDBMS homes for the upgrade.
-Perform backup of OCR, voting disk and Database (IMPORTANT)
-Install/Upgrade the GRID home.
-Install/Upgrade the database/databases.
-Apply PSU (if required)

Important fact:

Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades. With 11g Release 2 (11.2), we cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.
So, if we have an existing Oracle Clusterware installation, then we upgrade our existing cluster by performing an out-of-place upgrade. We cannot perform an in-place upgrade.
In prior releases, we could use the Database Upgrade Assistant (DBUA) to upgrade either an Oracle Database or Oracle ASM. That is no longer the case. We can only use DBUA to upgrade an Oracle Database instance. We can use the Oracle ASM Configuration Assistant (ASMCA) to upgrade Oracle ASM.
Oracle recommends that we leave the Oracle RAC instances running. When we start the root script on each node, that node's instances are shut down and then started up again by the script.
Before upgrading the Oracle databases, we need to upgrade Oracle Clusterware first.

                      --Grid+ASM upgrade section--
Backup (RMAN + a guaranteed restore point; the restore point is actually created in a later step)

Run orachk and fix the errors reported by it: Document id:1457357.1

Run Cluster Verification utility and fix the errors reported by it.

Example run:
./runcluvfy.sh stage -pre crsinst -upgrade -n node1,node2 -rolling -src_crshome
/u01/app/grid/ -dest_crshome /u01/app/grid/ -dest_version -fixup -fixupdir /home/grid/fixup -verbose

unset ORA_CRS_HOME (from the shell and from all the profile files, i.e. .profile, .cshrc, .bash_profile, etc.)

Oracle recommends that you leave Oracle RAC instances running.
When you start the root script on each node, that node's instances are shut down and then started up again by the script.

Start the installer (runInstaller) as the OS ASM owner user (i.e. oracle, grid).
Select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.
On the node selection page, select all nodes.
Select installation options as prompted.
When prompted, run the root script on each node in the cluster that you want to upgrade.
Run the script on the local node first. The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.
Because the Oracle Grid Infrastructure home is in a different location than the former Oracle Clusterware and Oracle ASM homes, update any scripts or applications that use utilities, libraries, or other files that reside in the Oracle Clusterware and Oracle ASM homes.

Note that:
The recommended practice for upgrading Oracle ASM is to upgrade an Oracle ASM instance with the Oracle Universal Installer (OUI) executable file that is located in the Oracle Grid Infrastructure home directory. OUI automatically defaults to upgrade mode when it detects an Oracle ASM instance at an earlier release level.
Oracle ASM Configuration Assistant enables you to upgrade an existing Oracle ASM instance to the current software level and upgrade an older Oracle ASM instance to the latest Oracle Grid Infrastructure home.
You can upgrade an Oracle ASM instance to an Oracle Restart 11g release 2 (11.2) configuration. The recommended practice is to upgrade an Oracle ASM instance with Oracle Universal Installer (OUI).

At the end of the upgrade, if you set the OCR backup location manually to the older release Oracle Clusterware home (CRS home), then you must change the OCR backup location to the Oracle Grid Infrastructure home (Grid home). If you did not set the OCR backup location manually, then this issue does not concern you.

                      --DB+RDBMS upgrade section--
run pre-upgrade tool
sqlplus /as sysdba
SQL> SPOOL upgrade_info.log
SQL> @$11g_ORACLE_HOME/rdbms/admin/utlu112i.sql
Check the output of the Pre-Upgrade Information Tool in upgrade_info.log.

Note: Any invalid SYS/SYSTEM objects found before upgrading the database are stored in the table named registry$sys_inv_objs. Any invalid non-SYS/SYSTEM objects found before upgrading the database are stored in registry$nonsys_inv_objs.
After the upgrade, run ORACLE_HOME/rdbms/admin/utluiobj.sql to  identify any new invalid objects due to the upgrade.


Ensure the following--
No dbf in backup mode:  SELECT * FROM v$backup WHERE status != 'NOT ACTIVE';
No distributed transaction pending: SELECT * FROM dba_2pc_pending;
No gap between standby dbs: SELECT SUBSTR(value,INSTR(value,'=',INSTR(UPPER(value),'SERVICE'))+1) FROM v$parameter WHERE name LIKE 'log_archive_dest%' AND UPPER(value) LIKE 'SERVICE%';
All batch and cron jobs are disabled.
Ensure that SHARED_POOL_SIZE, LARGE_POOL_SIZE and JAVA_POOL_SIZE are each greater than 150MB.
Ensure that the memory_target parameter is not smaller than 1536m.

PURGE dba_recyclebin

Shut down the database cleanly.

Check invalids and try to validate them.

 SQL>select substr(comp_name,1,40) comp_name, status, substr(version,1,10) version from
 dba_registry order by comp_name;
 SQL>select substr(object_name,1,40) object_name,substr(owner,1,15) owner,object_type from
 dba_objects where status='INVALID' order by owner,object_type;
 SQL>select owner,object_type,count(*) from dba_objects where status='INVALID' group by
 owner,object_type order by owner,object_type ;


Create a guaranteed Restore point (just in case)
SQL>create restore point before_upgrade guarantee flashback database;
SQL>select * from v$restore_point;

Lastly, we install and upgrade our databases using runInstaller.
To do this, we simply unzip the installation files, go to the database subdirectory and execute runInstaller. When runInstaller starts, we select "Upgrade ...".
Once the upgrade is completed, we check the init.ora parameters (especially old home references), environment variables, invalid objects and v$version to ensure that our upgrade is complete.
Again, runInstaller (with the upgrade option) both installs the software and upgrades the chosen databases in one go.
At this point, if we want to upgrade more databases from the same home, we just execute dbua from the new home and that's it.

*As a post-upgrade step, we update the Enterprise Manager configuration. The steps are as follows;
Log in to dbconsole or gridconsole.
Navigate to the Cluster tab.
Click Monitoring Configuration.
Update the value of Oracle Home with the new Grid home path.

*As db post-upgrade instructions, we should take the following actions.
We can review the Upgrade Guide for these.

Upgrading the Recovery Catalog After Upgrading Oracle Database
Upgrading the Time Zone File Version After Upgrading Oracle Database
Upgrading Statistics Tables Created by the DBMS_STATS Package After Upgrading Oracle Database
Upgrading Externally Authenticated SSL Users After Upgrading Oracle Database
Installing Oracle Text Supplied Knowledge Bases After Upgrading Oracle Database
Updating Your Oracle Application Express Configuration After Upgrading Oracle Database
Configuring Fine-Grained Access to External Network Services After Upgrading Oracle Database
Enabling Oracle Database Vault and Revoking the DV_PATCH_ADMIN Role After Upgrading Oracle Database

*Also, as a db post-upgrade instruction, we should take a look at the recommended tasks below...

Back Up the Database
Reset Passwords to Enforce Case-Sensitivity
Understand Changes with Oracle Grid Infrastructure
Understand Oracle ASM and Oracle Grid Infrastructure Installation and Upgrade
Add New Features as Appropriate
Develop New Administrative Procedures as Needed
Set Threshold Values for Tablespace Alerts
Migrate From Rollback Segments to Automatic Undo Mode
Configure Oracle Data Guard Broker
Migrate Tables from the LONG Data Type to the LOB Data Type
Test the Upgraded Production Database
Back Up the Database

                      --GI PSU apply section--
Upgrade opatch using patch 6880880 (upgrade both GRID and RDBMS opatch)

Download GI PSU:
Patch 24436338: GRID INFRASTRUCTURE PATCH SET UPDATE (OCT2016) (Oct 2016) Grid Infrastructure Patch Set Update (GI PSU) (Doc ID 24436338.8)

Reference: Readme of patch 24436338.

Create an OCM response file (ocm.rsp) -- My Oracle Support Document 966023.1, How To Create An OCM Response File For Opatch Silent Installation.

Apply PSU using opatch auto option:

root user > opatch auto <UNZIPPED_PATCH_LOCATION>/24436338 -ocmrf <ocm response file>

Load Sql side of PSU:

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> @catbundle.sql psu apply

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> @utlrp.sql

Linux/Oracle Linux -- Inodes & partition/filesystem size.. --using debugfs, --No space left on device

In this blog post, I will briefly explain inodes and the maximum inode count of a filesystem in Linux.

I decided to give this subject a place in my blog because I think it is important. Especially when you want to store millions of files in a partition, the configuration of inodes becomes crucial.
Storing millions of files may be required in a case where you want to store lots of pictures in a filesystem, or in a case where your applications use the filesystem to create audit files, debug files or log files.

It is important because a misconfiguration can make the system hang. In addition, you may be surprised to find yourself in a situation where you can't create files on the affected partition, although the "df" command reports lots of free space available.

Let me introduce inodes briefly.

Inodes are used for storing metadata about files. This metadata includes owner info, size, timestamps and so on. To create a single file in Linux, we need to have at least 1 inode available in our filesystem.
For ext2 and ext3, the inode size is 128 bytes. This is a fixed value. However, using the -I argument with mke2fs, it is possible to utilize inodes larger than 128 bytes to store extended attributes.
For ext4, the inode records are 256 bytes, but the inode structure is 156 bytes. So ext4 has 100 bytes (256-156) of free space in each inode for storing extra/extended attributes.

Let's make a quick demo and take a look at the inode structure, see what is stored in it, and as a bonus, "let's update the contents of an inode using debugfs :)" ->
  • Just check the file from the shell and gather info using the stat command
[root@erpdb boot]# stat erm1
File: `erm1'
Size: 6 Blocks: 2 IO Block: 1024 regular file
Device: 6801h/26625d Inode: 6027 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2016-12-06 21:34:50.000000000 +0300
Modify: 2016-12-06 21:34:29.000000000 +0300
Change: 2016-12-06 21:34:29.000000000 +0300
  • Check inode number 6027 using debugfs to see the file owner's uid.
debugfs:  stat <6027>
Inode: 6027   Type: regular    Mode:  0644   Flags: 0x0   Generation: 4071970905
User:     0   Group:     0   Size: 6
File ACL: 0    Directory ACL: 0
Links: 1   Blockcount: 2
Fragment:  Address: 0    Number: 0    Size: 0
ctime: 0x584704b5 -- Tue Dec  6 21:34:29 2016
atime: 0x584704ca -- Tue Dec  6 21:34:50 2016
mtime: 0x584704b5 -- Tue Dec  6 21:34:29 2016
  • Check oraprod's uid (we will make oraprod the owner of erm1 in the next steps).
[root@erpdb ~]# id oraprod
uid=501(oraprod) gid=500(dba) groups=500(dba)
  • Use the "mi" command to modify the inode of the file named erm1. Note that the inputs that the "mi" command requests here are actually the values stored in the inode attributes (such as user id, group id, size, creation time, etc...). Note that we will only update the User ID (owner) in this example.
debugfs:  mi  <6027>
                          Mode    [0100644] 
                       User ID    [0] 501  (entered oraprod's uid)
                      Group ID    [0] 
                          Size    [6] 
                 Creation time    [1481049269] 
             Modification time    [1481049269] 
                   Access time    [1481049290] 
                 Deletion time    [0] 
                    Link count    [1] 
                   Block count    [2] 
                     File flags    [0x0] 
                     Generation    [0xf2b55859] 
                      File acl    [0] 
            High 32bits of size    [0] 
               Fragment address    [0] 
                Fragment number    [0] 
                 Fragment size    [0] 
You can see the major attributes stored in an inode above.
  • Well, lastly, unmount and mount the filesystem in which the file is located, and use the "ls" command to check the owner. (Note that we need to remount the filesystem after making a change (write) using debugfs. Unless the filesystem is remounted, our change is not seen, because of inode caching.)
[root@erpdb boot]# ls -al erm1
-rw-r--r-- 1 oraprod root 6 Dec  6 21:34 erm1

Well, after taking a look at inodes, let's come back to our topic; or should I say, let's start with our topic, since we haven't gone into the actual topic yet.

Filesystems have a defined number of inodes.
Actually, we usually don't care about them while creating filesystems, but the inodes are there, created according to a default ratio.

Let's create an ext4 filesystem and look at the situation.
[root@erpdb /]#mkfs.ext4 /dev/sdb
[root@erpdb ~]# mount /dev/sdb /u03
[root@erpdb ~]# df -i /u03
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sdb             64004096      11 64004085    1% /u03
[root@erpdb ~]# df -h /u03
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb              962G  200M  913G   1% /u03

As seen in the "df -i" output above, our newly created filesystem has 64004096 inodes, and the size of the filesystem itself is 962G. So, mkfs.ext4 by default created 64004096 inodes for our filesystem.
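That default inode count follows from ext4's default bytes-per-inode ratio of 16384 (more on that ratio below). A quick back-of-envelope check in the shell:

```shell
inodes=64004096   # inode count from the df -i output above
ratio=16384       # ext4's default bytes-per-inode
# inode count * ratio approximates the raw device size:
echo "$(( inodes * ratio / 1024 / 1024 / 1024 )) GiB"   # prints: 976 GiB
```

That lands close to the 962G that df reports; the difference is filesystem overhead and reserved space.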

Let's create 100000 files on this filesystem;

for i in {1..100000}; do
    touch "File$(printf "%03d" "$i").txt"
done

Then, check the inode used&free counts;

[root@erpdb u03]# df -i /u03
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sdb             64004096  100015 63904081    1% /u03

As seen above, 100015 inodes are used (never mind the extra 15 inodes; they were already there when the filesystem was created).

So, we created 100000 "empty" files and we have spent 100000 inodes on them.

Now, let's check the size of our filesystem to see its used and free space.

Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb              962G  203M  913G   1% /u03

As seen above, our filesystem is still almost empty. 203 MB were already used before we created our 100000 empty files.

So we are using inodes, but we aren't using bytes to store anything.
Let's suppose we go further and try to create 63904082 files, each sized 10K.
Can you imagine the result?
We would need 63904082 x 10K (about 624,000 MB, roughly 610 GB) of free space in our filesystem. In this case we have that space, right? That is, we have 913G of free space available, as seen in the df -h output above.
On the other hand, we would not be able to create the 63904082nd file on our filesystem, because we only have 63904081 inodes available, and that's why we would end up with the "No space left on device" error. In this kind of scenario, we would need to reformat our filesystem with a higher inode count to store more files on it.
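The space arithmetic in that scenario is easy to redo in the shell:

```shell
files=63904081   # the free inode count from the df -i output above
kb_each=10       # roughly 10 KB per file
echo "$(( files * kb_each / 1024 )) MB needed"   # prints: 624063 MB needed
```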

Let's make  a conclusion for this part;
What have we learned?

1) Inodes are created automatically when we create the filesystem.
2) Inodes are occupied when we create files on our filesystems.
3) In order to be able to create X files in our filesystem, we need to have at least X inodes available in our filesystem.
4) Small (or, let's say, almost zero sized) files occupy near-zero space, but they still occupy inodes.
5) If we create a high number of small files, we may occupy all the inodes in our filesystem and encounter "No space left on device" errors, even though we have plenty of free space in our filesystem.
6) Once the filesystem is created, the only way to increase its inode count is to reformat it.

So, in brief we know that there may be cases where we should adjust the inode counts according to our needs.

At this point, let's visit the man page of mkfs.ext4 (ext4 being an up-to-date filesystem available in Linux):

man mkfs.ext4 ->

-i bytes-per-inode
              Specify the bytes/inode ratio. mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio,
              the fewer inodes will be created. This value generally shouldn't be smaller than the blocksize of the filesystem, since in that case more inodes
              would be made than can ever be used. Be warned that it is not possible to expand the number of inodes on a filesystem after it is created, so be
              careful deciding the correct value for this parameter.

As seen above, we have a "-i" parameter, which lets us adjust the inode count by providing a bytes-per-inode value. The larger the bytes-per-inode ratio, the fewer inodes will be created.
The default bytes-per-inode for ext4 is 16384.
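The relationship between the -i ratio and the resulting inode count can be sanity-checked with simple arithmetic. The 1 TiB device size below is illustrative, close to the raw size of the disk used in this post:

```shell
# Sketch: approximate inode count = device size / bytes-per-inode ratio.
size_bytes=$((1024 * 1024 * 1024 * 1024))    # 1 TiB, illustrative
for ratio in 16384 1024; do
  echo "bytes-per-inode=$ratio -> ~$((size_bytes / ratio)) inodes"
done
```

With the default 16384-byte ratio, a ~1 TiB device gets roughly 67 million inodes, the same order of magnitude as the 64004096 figure seen earlier; dropping the ratio to 1024 multiplies the inode count by 16, which lines up with the ~1 billion inodes we get below.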

So, let's format our filesystem once again, providing a smaller value (1024 bytes) for the bytes-per-inode ratio.

[root@erpdb /]#umount /u03
[root@erpdb /]#mkfs.ext4 -i 1024 /dev/sdb
[root@erpdb /]# mount /dev/sdb /u03

[root@erpdb ~]# df -i /u03
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sdb             1023967232      11 1023967221    1% /u03
[root@erpdb ~]# df -h /u03
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb              733G  217M  684G   1% /u03

You see, we now have lots of inodes. That is, we now have 1023967232 inodes, which is quite large compared with the earlier value (64004096).
However, as we see in the "df -h" output, we now have 684G of available space. It was 913G earlier.

So, we have more inodes, but less space. (more inodes occupy more space)
Now let's revisit our example above;
Suppose we go further and try to create 63904082 files, each sized 10K.
Can you imagine the result?
We will need 63904082 x 10K (about 624,060 MB, almost 620 GB) of free space in our filesystem. In this case we still have that space, right? That is, we have 684G of free space available, as seen in the df -h output above.
So, this time we will be able to create all these files on our filesystem, because we have 1023967232 inodes available, and that's why this time we will not end up with the "No space left on device" error.

Well... At this point, I guess you understand what I want to express with this article, right?
We need to be aware of the filesystem structures while creating a cooked filesystem.
In this article, inodes came to the fore, but there are other tunables as well.
In short, we may need to adjust some parameters while creating our filesystems.
We need to analyze our goal and make those adjustments, just like we do when creating our Oracle Databases or installing our EBS systems.

In the case of ASM and ACFS, we are dependent on Oracle.
ASM has a limit of 1 million files per Diskgroup. ACFS, on the other hand; supports 2^40 (1 trillion) files in a file system.

Thursday, December 8, 2016

ODA -- creating External Redundancy Diskgroup , ORA-15018 and ORA-15072 // _disable_appliance_check parameter & appliance.mode attribute

I recently did a POC with an ODA X6 machine. It was an "ODA X6-2 M", and the disk capacity made available with the standard deployment was not sufficient to store a big reporting database.
So , I decided to reconfigure the ASM Diskgroups manually to gain some extra space in the ODA storage.

What I did was the following;

I dropped the RECO diskgroup and planned to use the 2 freed disks to create a new diskgroup named DATA2 with external redundancy.
When I dropped RECO, 2 NVMe disks (partitions) became available. (RECO was built on top of 2 disks as a normal redundancy diskgroup, not counting the quorum.)
So I tried to create a new diskgroup named DATA2 with external redundancy using these 2 NVMe disks.
I used "asmca" for this.
asmca could see the disks as candidates (actually FORMER), but it could not create an external redundancy diskgroup and encountered the following errors;

ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 failure groups, discovered only 0

Then I used "sqlplus / as sysasm" to do the same thing, but sqlplus encountered the same errors as well.

These errors were not expected, so I thought that the configuration of this machine was somewhat incompatible with the external redundancy setting. (It was an appliance, an engineered system...)

Note that I knew external redundancy was not supported on engineered systems, but I tried to find the reason behind these errors.

Disks were seen and discovered, but they could not be used to create external redundancy diskgroups...

Anyway, I could proceed with my POC by scattering the db files across the RECO and DATA diskgroups, and the POC succeeded.

However, I was still curious about  the reason behind those errors.
Since the POC was over, I had no ODA machine to test it and find the cause and solution for it.

Then I raised this question to the Oracle Community. The answer came from Viacheslav Leichinsky. 
"When the hidden parameter _disable_appliance_check is set to TRUE and ASM attribute 'appliance.mode' is set to FALSE, the external diskgroup can be created in ODA environments."
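Based on that answer, the fix presumably looks like the sketch below. The parameter and attribute names come from the quote above; the diskgroup name and disk paths are placeholders, and hidden/underscore parameters are unsupported unless Oracle Support directs you to use them, so treat this strictly as an illustration:

```shell
# Sketch only -- not verified on every ODA release; test on a sandbox first.
sqlplus -S / as sysasm <<'SQL'
ALTER SYSTEM SET "_disable_appliance_check"=TRUE SCOPE=SPFILE;
SQL
# ...restart the ASM instance so the underscore parameter takes effect, then:
sqlplus -S / as sysasm <<'SQL'
CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY
  DISK '<nvme_partition_1>', '<nvme_partition_2>'
  ATTRIBUTE 'appliance.mode'='FALSE';
SQL
```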



Here is a truncated excerpt from the diskgroup creation that succeeded afterwards:

'/dev/mapper/SSD_E0_S12_133243434p1' NAME SSD_E0_S12_133243434P1
ATTRIBUTE 'compatible.asm'='', 'compatible.rdbms'='', 'sector_size'='512', 'AU_SIZE'='4M', 'compatible.advm'='';
Diskgroup created

I found this story interesting; that's why I'm sharing it with you.
It might come in handy one day.

Friday, December 2, 2016

Linux -- Displaying "X windows" in Windows Clients using "putty" and "xming"

This method might come in handy in a situation where you don't have a vncserver installed on your Linux/Unix server.

By using putty's X11 forwarding and XMING server, you can display the X windows on your Windows client without a need to connect a Vnc server.

The method to enable this functionality on your clients (desktops) is pretty straightforward.
It is all about installing xming on your Windows client by downloading it from "" and configuring putty.

The installation of xming is very easy (just next, next and next :)
Once xming is installed and started, you open putty and do the following configuration (enabling X11 forwarding).

At this point, you are done. You just connect to your server using putty and start displaying X windows on your client machine.

This was today's quick tip. Easy and practical right? :)


In an EBS 12.1 environment, we encountered a strange problem in one of our custom XML/BI Publisher reports.
The problem was in the output. Actually, the output could be created, but the graphs that needed to be embedded in the output were not there.
In other words, rather than the dynamic graphs, there were empty spaces in the PDF output of our custom XML Publisher report.


We checked the concurrent request and OPP logs. All were clean -- no errors.

Then, we enabled debug for XDO log by following the steps below;

  • Connect to the apps node
  • Create $XDO_TOP/temp and $XDO_TOP/resource directories
  • Create an xdodebug.cfg file in $XDO_TOP/resource directory
  • Add the following lines to the xdodebug.cfg
    •     LogLevel=STATEMENT 
    •     LogDir=[full XDO_TOP]/temp  (we use the full path of XDO_TOP here)
  • Restart Apache
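The steps above can be sketched as shell commands. In this sketch a scratch directory stands in for $XDO_TOP so it is safe to run anywhere; on a real system, source the EBS environment first so $XDO_TOP points at the actual XML Publisher product top:

```shell
# Sketch of the xdodebug setup. $XDO_TOP normally comes from the EBS env;
# the mktemp fallback here is purely for illustration.
XDO_TOP=${XDO_TOP:-$(mktemp -d)}
mkdir -p "$XDO_TOP/temp" "$XDO_TOP/resource"
cat > "$XDO_TOP/resource/xdodebug.cfg" <<EOF
LogLevel=STATEMENT
LogDir=$XDO_TOP/temp
EOF
cat "$XDO_TOP/resource/xdodebug.cfg"   # show what was written
# ...then restart Apache so the new debug setting is picked up.
```

Note that the heredoc is unquoted on purpose, so $XDO_TOP expands to the full path in the LogDir line, as the instructions above require.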

Next, we resubmitted the report and checked the xdo.log.

XDO.LOG contents:

120216_031530408][][ERROR] java.lang.NullPointerException
        at javax.swing.MultiUIDefaults.getUIError(
        at javax.swing.UIDefaults.getUI(
        at javax.swing.UIManager.getUI(
        at javax.swing.JPanel.updateUI(
        at javax.swing.JPanel.<init>(
        at javax.swing.JPanel.<init>(
        at oracle.dss.dataView.Dataview.<init>(
        at oracle.dss.graph.Graph.<init>(
        at sun.reflect.GeneratedConstructorAccessor31.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
        at java.lang.reflect.Constructor.newInstance(
        at java.lang.Class.newInstance0(
        at java.lang.Class.newInstance(
        at oracle.apps.xdo.template.rtf.img.RTFChartUtil.generateChartAsBase64(
        at oracle.apps.xdo.template.rtf.XSLTFunctions.chart_svg(
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        --output truncated

We analyzed the XDO debug log and concluded that our search keyword should be oracle.dss.graph.Graph.<init>, as all the other lines in the error/call stack were generic.
Also, as we were dealing with a graph problem, why not search for something related to graphs (something related to drawing graphs in Java :)



Once we did our search, we reached the applicable document. Note that this environment was 12.1.


R12 BI Publisher reports with graph included fail with error "String index out of range: -1" (Doc ID 1251964.1)

This time, the solution was applying "Patch 10192670 - 12.1.4:10192626 FORWARD PORT: BI :GRAPH IS NOT WORKING ON 64 BIT LINUX", as our BI Bean version was not the up-to-date one.
(We checked it with the command "cat $COMMON_TOP/java/classes/oracle/dss/graph/version.txt" and saw that it was, not)

--Important note: in order to put graphs in our reports, the apps tier should be able to open X sessions. That means we need an X server (i.e. vncserver) running on the DISPLAY that our concurrent managers are configured with (the DISPLAY env variable defined in We also need to give our apps OS user permission to use that DISPLAY. (I'm talking about xhost + here...)
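A quick way to check whether the concurrent tier can actually reach an X server is a sketch like the following. The display value "erpdb:1.0" is hypothetical; use whatever your vncserver exports, and note that xdpyinfo must be installed for the check to be meaningful:

```shell
# Sketch: verify the DISPLAY used by the concurrent managers is reachable.
# "erpdb:1.0" is a hypothetical value; take yours from the context file.
export DISPLAY=${DISPLAY:-erpdb:1.0}
if xdpyinfo >/dev/null 2>&1; then
  echo "DISPLAY $DISPLAY is reachable"
else
  echo "DISPLAY $DISPLAY is NOT reachable (check vncserver and xhost +)"
fi
```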

EBS R12 -- Problem in XML publisher report outputs (when charts are added)

This blog post will be a little thing for the newbies. I will try to give you the general concepts and methods that we use in fixing EBS tech errors.
Here are the famous 4 steps (debug, analyze the related log, search, and apply the solution) which need to be taken for fixing a weird problem in EBS. (It may not be weird for everyone :))
This time, we are dealing with XML/BI Publisher, which comes built-in with EBS.
When we talk about XML/BI Publisher, most of the time we are actually talking about Java.
Also, when we start our analysis for diagnosing XML/BI Publisher errors, we most of the time find ourselves reviewing the OPP log.

Note that: For dealing with the XML publisher-related errors, we review the logs in the following sequence.

1) Conc request log  2) OPP (Output Post Processor) log  3) XDO log <after enabling xdo debug>

The problem that we are dealing with in this example is an XML Publisher output problem. That is, we are dealing with a problematic situation where the XML Publisher report output (PDF in this case) cannot be created when a chart (graph) is added to a specific custom report.

Here is the usual way of solving these kinds of problems. (We find the cause by analyzing the log, we search for it if it is not something that we know, and we apply the solution once it is found.)

First, we analyze. This time it is in the opp.log.


[12/2/16 3:53:03 PM] [UNEXPECTED] [601115:RT590163] java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         ......   (output truncated)
at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(
at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(
at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(
at oracle.apps.fnd.cp.opp.XMLPublisherProcessor.process(
Caused by: oracle.xdo.parser.v2.XPathException: Extension function error: Error invoking 'chart_svg':'java.lang.NoClassDefFoundError: Could not initialize class sun.awt.X11GraphicsEnvironment'
at oracle.xdo.parser.v2.XSLStylesheet.flushErrors(
at oracle.xdo.parser.v2.XSLStylesheet.execute(
at oracle.xdo.parser.v2.XSLStylesheet.execute(
at oracle.xdo.parser.v2.XSLProcessor.processXSL(
at oracle.xdo.parser.v2.XSLProcessor.processXSL(
at oracle.xdo.parser.v2.XSLProcessor.processXSL(
... 18 more

We find (actually sense) the root cause by looking at the log. "InvocationTargetException" is a generic one, so we don't go with it.
On the other hand, we go with Error invoking 'chart_svg', as it is written in the line starting with "Caused by" and it seems promising :)
After choosing our keyword, we go to Oracle Support and do our search.


Error invoking 'chart_svg'
A support search like this, I mean a search with the correct keyword, brings us to the solution.

XML Publisher Report With Pie Chart Error invoking 'chart_svg':'java.lang.NoClassDefFoundError (Doc ID 1992454.1) 

Well, in this case, XML/BI Publisher needs a correct DISPLAY set, as it was trying to use the X libraries to draw the graph/chart that was supposed to be added to the output of the XML Publisher report.

So, we stop the apps tier services, modify the context file (the DISPLAY context variable), modify the start scripts in the $ADMIN_SCRIPTS_HOME directory (only the ones in which the DISPLAY env variable is set), run autoconfig on the apps tier, and start the apps tier services.
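Before editing, it helps to know exactly which start scripts carry a hard-coded DISPLAY. The sketch below demos the grep against a scratch directory with made-up file contents, so it is safe to run anywhere; on a real system you would point it at $ADMIN_SCRIPTS_HOME itself (without creating demo files there, of course):

```shell
# Sketch: list the scripts that set DISPLAY, so you know which ones to edit.
# The scratch dir and demo file contents below are purely illustrative.
dir=$(mktemp -d)
echo 'DISPLAY=erpdb:1.0; export DISPLAY' > "$dir/adcmctl.sh"   # demo content
echo 'echo no display here' > "$dir/adapcctl.sh"               # demo content
grep -l 'DISPLAY' "$dir"/*.sh    # prints only the script(s) containing DISPLAY
rm -rf "$dir"
```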