Thursday, December 8, 2016

ODA -- creating an External Redundancy Diskgroup, ORA-15018 and ORA-15072 // _disable_appliance_check parameter & appliance.mode attribute

I recently did a POC with an ODA X6 machine. It was an "ODA X6-2 M", and the disk capacity made available by the standard deployment was not sufficient to store a big reporting database.
So I decided to reconfigure the ASM diskgroups manually to gain some extra space on the ODA storage.

What I did was the following;

I dropped the RECO diskgroup and planned to use the 2 freed disks to create a new diskgroup named DATA2 with external redundancy.
When I dropped RECO, 2 NVMe disks (partitions) became available. (RECO was built on top of 2 disks as a normal redundancy diskgroup, not counting the quorum..)
So I tried to create a new diskgroup named DATA2 with external redundancy using these 2 NVMe disks.
I used "asmca" for doing this.
asmca could see the disks as candidates (actually FORMER), but it could not create an external redundancy diskgroup and encountered the following errors;

ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 failure groups, discovered only 0

Then I used sqlplus / as sysasm to do the same thing, but it hit the same errors as well.

These errors were not expected, so I thought that the configuration of this machine was somehow incompatible with the external redundancy setting. (it was an appliance, an engineered system...)

Note that, I knew external redundancy was not supported on engineered systems, but I wanted to find the reason behind these errors.

The disks were seen and discovered, but they could not be used to create an external redundancy diskgroup...

Anyways, I could proceed with my POC by scattering the db files across the RECO and DATA diskgroups, and the POC succeeded.

However, I was still curious about the reason behind those errors.
Since the POC was over, I had no ODA machine on which to test it and find the cause and solution.

Then I raised this question to the Oracle Community. The answer came from Viacheslav Leichinsky. 
"When the hidden parameter _disable_appliance_check is set to TRUE and ASM attribute 'appliance.mode' is set to FALSE, the external diskgroup can be created in ODA environments."



CREATE DISKGROUP DATA2 EXTERNAL REDUNDANCY
DISK '/dev/mapper/SSD_E0_S12_133243434p1' NAME SSD_E0_S12_133243434P1
ATTRIBUTE 'compatible.asm'='', 'compatible.rdbms'='', 'sector_size'='512', 'AU_SIZE'='4M', 'compatible.advm'='';

Diskgroup created.

I found this story interesting; that's why I'm sharing it with you.
It might come in handy one day.

Friday, December 2, 2016

Linux -- Displaying "X windows" in Windows Clients using "putty" and "xming"

This method might come in handy in a situation where you don't have a vncserver installed on your Linux/Unix server.

By using putty's X11 forwarding and the Xming server, you can display X windows on your Windows client without needing to connect to a VNC server.

The method to enable this functionality on your clients (desktops) is pretty straightforward.
It is all about installing Xming on your Windows client (downloading it from "") and configuring putty.

The installation of Xming is very easy. (just next, next and next :)
Once Xming is installed and started, you open putty and do the following configuration (enabling X11 forwarding)

At this point, you are done. Just connect to your server using putty, and X windows will be displayed on your client machine.
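As a side note, for those connecting from a Linux or macOS client instead of putty, the equivalent of that checkbox is a one-line OpenSSH client setting (the host alias, host name, and user below are just example values):

```
# ~/.ssh/config -- the OpenSSH equivalent of putty's "Enable X11 forwarding"
Host myserver
    HostName myserver.example.com
    User oracle
    ForwardX11 yes
```

With this in place, "ssh myserver" behaves like the putty session: X applications started on the server are displayed by the local X server.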

This was today's quick tip. Easy and practical right? :)


In an EBS 12.1 environment, we encountered a strange problem in one of our custom XML/BI Publisher reports.
The problem was in the output. The output could be created, but the graphs that needed to be embedded in it were not there.
In other words, rather than the dynamic graphs, there were empty spaces in the PDF output of our custom XML Publisher report.


We checked the concurrent request and OPP logs.. All were clean.. --no errors.

Then, we enabled XDO debug logging by following the steps below;

  • Connect to the apps node
  • Create $XDO_TOP/temp and $XDO_TOP/resource directories
  • Create an xdodebug.cfg file in $XDO_TOP/resource directory
  • Add the following lines to the xdodebug.cfg
    •     LogLevel=STATEMENT 
    •     LogDir=[full XDO_TOP]/temp  (we use the full path of XDO_TOP here)
  • Restart the Apache
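The steps above can be sketched as a few shell commands. This is a hedged sketch: on a real apps node, XDO_TOP comes from sourcing the applications environment file; here it points at a scratch directory so the commands can be tried safely.

```shell
# XDO_TOP is normally set by the apps env file; a scratch path is used here
XDO_TOP=/tmp/xdo_top_demo

# create the temp and resource directories under XDO_TOP
mkdir -p "$XDO_TOP/temp" "$XDO_TOP/resource"

# xdodebug.cfg: LogDir must carry the full path of XDO_TOP
cat > "$XDO_TOP/resource/xdodebug.cfg" <<EOF
LogLevel=STATEMENT
LogDir=$XDO_TOP/temp
EOF

cat "$XDO_TOP/resource/xdodebug.cfg"
# on a real system: restart Apache so the setting takes effect
```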

Next, we resubmitted the report and checked the xdo.log.

XDO.LOG contents:

120216_031530408][][ERROR] java.lang.NullPointerException
        at javax.swing.MultiUIDefaults.getUIError(
        at javax.swing.UIDefaults.getUI(
        at javax.swing.UIManager.getUI(
        at javax.swing.JPanel.updateUI(
        at javax.swing.JPanel.<init>(
        at javax.swing.JPanel.<init>(
        at oracle.dss.dataView.Dataview.<init>(
        at oracle.dss.graph.Graph.<init>(
        at sun.reflect.GeneratedConstructorAccessor31.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
        at java.lang.reflect.Constructor.newInstance(
        at java.lang.Class.newInstance0(
        at java.lang.Class.newInstance(
        at oracle.apps.xdo.template.rtf.img.RTFChartUtil.generateChartAsBase64(
        at oracle.apps.xdo.template.rtf.XSLTFunctions.chart_svg(
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        --output truncated

We analyzed the XDO debug log and concluded that our search keyword should be oracle.dss.graph.Graph.<init>, as all the other lines in the error/call stack were generic..
Also, as we were dealing with a graph problem, why not search for something related to graphs.. (something related to drawing graphs in java :)



Once we did our search, we reached the applicable document. Note that, this environment was 12.1


R12 BI Publisher reports with graph included fail with error "String index out of range: -1" (Doc ID 1251964.1)

This time, the solution was applying the patch "Patch 10192670 - 12.1.4:10192626 FORWARD PORT: BI :GRAPH IS NOT WORKING ON 64 BIT LINUX", as our BI Beans version was not up to date.
(we checked it by the command "cat $COMMON_TOP/java/classes/oracle/dss/graph/version.txt" and  saw that it was, not

EBS R12 -- Problem in XML publisher report outputs (when charts are added)

This blog post will be a little thing for the newbies. I will try to give you the general concept and methods that we use in fixing EBS tech errors.
Here are the famous 4 steps (debug, analyze the related log, search, and apply the solution) which need to be taken for fixing a weird problem in EBS. (it may not be weird for everyone :))
This time, we are dealing with XML/BI Publisher, which comes built in with EBS.
When we talk about XML/BI Publisher, most of the time we are actually talking about Java.
Also, when we start our analysis for diagnosing XML/BI Publisher errors, we most of the time find ourselves reviewing the OPP log.

Note that: For dealing with the XML publisher-related errors, we review the logs in the following sequence.

1)Conc request log 2) OPP(Output Post Processor) log 3)XDO log <after enabling xdo debug>

The problem that we are dealing with in this example is an XML Publisher output problem. That is, we are dealing with a problematic situation where the XML Publisher report output (PDF in this case) cannot be created properly when a chart (graph) is added to a specific custom report.

Here is the usual way of solving these kinds of problems.. (we find the cause by analyzing the log, we search for it if it is not something that we know, and we apply the solution once it is found)

First, we analyze.. This time it is the opp.log.


[12/2/16 3:53:03 PM] [UNEXPECTED] [601115:RT590163] java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         ......   (output truncated)
at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(
at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(
at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(
at oracle.apps.fnd.cp.opp.XMLPublisherProcessor.process(
Caused by: oracle.xdo.parser.v2.XPathException: Extension function error: Error invoking 'chart_svg':'java.lang.NoClassDefFoundError: Could not initialize class sun.awt.X11GraphicsEnvironment'
at oracle.xdo.parser.v2.XSLStylesheet.flushErrors(
at oracle.xdo.parser.v2.XSLStylesheet.execute(
at oracle.xdo.parser.v2.XSLStylesheet.execute(
at oracle.xdo.parser.v2.XSLProcessor.processXSL(
at oracle.xdo.parser.v2.XSLProcessor.processXSL(
at oracle.xdo.parser.v2.XSLProcessor.processXSL(
... 18 more

We find (actually sense) the root cause by looking at the log. "InvocationTargetException" is a generic one, so we don't go with it.
On the other hand, we go with Error invoking 'chart_svg', as it is written in the line starting with "Caused by" and it seems promising :)
After choosing our keyword, we go to Oracle Support and use our keyword to do our search.


Error invoking 'chart_svg' 
A support search like this, I mean a search with the correct keyword, brings us to the solution.

XML Publisher Report With Pie Chart Error invoking 'chart_svg':'java.lang.NoClassDefFoundError (Doc ID 1992454.1) 

Well, in this case, XML/BI Publisher needs to have a correct DISPLAY set, as it was trying to use "X libraries" to draw the graph/chart that is supposed to be added to the output of the XML Publisher report.

So, we stop the apps tier services, modify the context file (the DISPLAY context variable), modify the start scripts in the $ADMIN_SCRIPTS_HOME directory (only the ones in which the DISPLAY env variable is set), run autoconfig on the apps tier, and start the apps tier services.
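A hedged sketch of the context-file part of that fix, run against a throwaway file so it can be tried safely (the real context file is $CONTEXT_FILE on the apps tier, and "myhost:1.0" is a hypothetical X server address):

```shell
# demo copy of a context file; the real one is $CONTEXT_FILE on the apps tier
ctx=/tmp/demo_context.xml
cat > "$ctx" <<'EOF'
<oa_context>
   <display oa_var="s_display">:0.0</display>
</oa_context>
EOF

# point the s_display context variable at a host that runs an X server
sed -i 's|<display oa_var="s_display">[^<]*|<display oa_var="s_display">myhost:1.0|' "$ctx"

grep 's_display' "$ctx"
# on a real system: run autoconfig (adautocfg.sh) afterwards, then restart services
```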

Tuesday, November 29, 2016

EBS 12.2 -- problem installing EBS on XFS -- no server JVM at

This will be a quick tip, as it is about a problem that one of my colleagues faced a few days ago.
During my research, I saw that the information related to this error is not publicly available or documented, so I thought it might be a good thing to write about it :)

The problem was encountered while doing an EBS 12.2.6 installation on Oracle Linux 7.3 (64 bit). The error was seen during very basic Rapidwiz executions, such as while executing RapidWizVersion or while executing wrapper scripts that call java, such as

Well, this error was actually a java/jvm error, and it was displayed in the following format;

Error: no `server' JVM at `<your_stage_location/startCD/Disk1/rapidwiz/jre/Linux_x64/1.6.0/lib/i386/server/'

Note: There may be other errors as well.

When ignored or skipped somehow, the installation may get stuck in the next stages.

Well... This error is caused by the filesystem that the EBS stage and installation directories reside on being XFS.
I wasn't there when the problem was encountered, but according to my research, I can say that it is caused by the enhancements made in modern XFS filesystems.
The most significant feature that causes this issue is 64-bit inodes. Especially when the filesystem size is more than 1 TB, these 64-bit inodes become a problem for 32-bit java.. (You can see it in the above path, I mean the Rapidwiz java path.. It says i386 there..)

Anyways, a workaround for this problem is to use another filesystem, like ext3 or ext4 on Linux. (tested & verified)

Note that ext4 also has a 64-bit inode version option, but it is off by default. The following is from the manual;

i_version -> Enable 64-bit inode version support. This option is off by default.

Relocating the stage and startCD location to a smaller filesystem (<1 TB) can be a workaround too.

Also, the mount point (with the xfs filesystem) can be unmounted and remounted using the "-o inode32" option. After that, the stage directory should be recreated in the mount point (recopied to the mount point) to guarantee all its files are associated with 32-bit inodes...
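A quick way to check whether a directory tree already contains 64-bit inode numbers (anything above 2^32-1 is what trips up 32-bit binaries such as the startCD JRE). The path below is an example; point it at the filesystem that holds your stage:

```shell
# count files whose inode number does not fit in 32 bits
stage=/tmp   # example path; use your stage mount point instead
find "$stage" -maxdepth 1 -printf '%i\n' 2>/dev/null |
  awk '$1 > 4294967295 { c++ } END { print c+0, "file(s) with 64-bit inodes" }'
```

If the count is non-zero, the workarounds above (another filesystem, a smaller filesystem, or an inode32 remount plus a re-copy of the stage) apply.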

As you see, this is an interesting topic and it is worth mentioning..

One final note on this: Use of XFS with Oracle Database is not supported. :)

Please comment and give alternative suggestions if you can.

Thursday, November 24, 2016

RAC -- ORA-01578: ORACLE data block corrupted, "computed block checksum: 0x0", Bad header found during dbv

Yesterday, an interesting issue was escalated to me.
It was a production 2 node RAC environment, and the instances on node 1 could not be started.
It was interesting, because the instances on node 2, which belong to the same databases as the instances on node 1, could be started and used without any errors.
The instances on node 1 were seeing the disks, but they were reporting lots of corruptions.
dbv, when executed from node 1, was again reporting lots of corruptions, but the interesting thing was the corrupted block checksum.
The corrupted block checksum reported by dbv on node 1 had the value 0x0, which means the checksum was okay.
However, dbv was still reporting the corruptions..
The corruptions reported there were actually reported for the contents of the blocks.
So the checksum was okay, but the contents of the blocks were not as expected.
In other words, Oracle Database and dbv were thinking that the problematic blocks should not be in their current location, as they belong to some other place in the datafile(s).

Here is an example output that I gathered from the environment.
It is dbv output produced for a datafile which had corrupted blocks.

Page 3043 is marked corrupt
Corrupt block relative dba: 0x00400be3 (file 1, block 3043)
Bad header found during dbv:
Data in bad block:
type: 6 format: 2 rdba: 0x1f6179e3
last change scn: 0x0008.a95bf9e0 seq: 0x1 flg: 0x04
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0xf9e00601
check value in block header: 0xfe88
computed block checksum: 0x0

The important values and strings in this output were:

rdba: 0x1f6179e3 --> this is hex. When converted to binary, its first 10 bits correspond to the file number.. In this case it seems to be file 125.
corrupt block relative dba (rdba): 0x00400be3 -> file 1, block 3043
computed block checksum: 0x0
-Bad header found during dbv-

So, again, the checksum was okay, but the rdba recorded inside the block was different from the rdba of the block's actual location. dbv reported "Bad header found" as well. So the placement issue was obvious. In other words, the blocks were healthy (computed block checksum is 0x0), but their contents were actually the contents of different blocks.
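The bit arithmetic above is easy to reproduce; bash arithmetic understands hex literals, so the file#/block# split of an rdba (top 10 bits / low 22 bits) can be checked like this:

```shell
# decode a relative dba: top 10 bits = relative file number, low 22 bits = block
rdba=0x1f6179e3
printf 'file %d, block %d\n' $(( rdba >> 22 )) $(( rdba & (1 << 22) - 1 ))
# prints: file 125, block 2193891
```

So the contents found in file 1, block 3043 actually claimed to belong to file 125, which is exactly the misplacement dbv was complaining about.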

What I did to solve this was to check the OS layer of node 1. (since node 2 was not encountering this)

At first, I suspected the storage, as the redundancy of the diskgroups was "external". However, the problem turned out to be in the multipath configuration.

The problem was in multipath, since there were conflicting paths.

What I mean by conflicting path is;

there was an ASM disk (customer was using asmlib) and its name was IBM19.

This disk was based on a multipath device called mpath13. That is, IBM19 was created using mpath13.

When I used multipath -ll, I saw that mpath13 was there.

However, when I checked the situation from the ASM perspective using oracleasm querydisk, I saw that the disk IBM19 was based on 2 devices, mpath13 and mpath40. mpath40 was displayed in the first line of the oracleasm querydisk output, and since asmlib reaches a disk through the first path that the OS gives it, Oracle was reaching the disk through the wrong path. (It should have gone through mpath13, but it was going through mpath40)

Note that, node 2 was also seeing the ASM disk IBM19 in the same way. The only difference was that on node 2, mpath13 was displayed in the first line of the oracleasm querydisk output, so Oracle was reaching the disks through mpath13 and there were no problems on node 2.

--mpath40 was based on newly added disks, and it was not included in any ASM diskgroup, although it was formatted using "oracleasm createdisk".

So, the multipaths were conflicting, and oracleasm was confusing them somehow.

In other words, Oracle was using mpath40 to reach the disks that were supposed to be pointed to by mpath13, and thus Oracle was reaching the wrong disks.

What I did to fix this was to remove the devices that mpath40 was based on. I used echo "scsi remove-single-device ..." (written to /proc/scsi/scsi) and then ran multipathd.

The problem went away. After that, I added the devices back using echo "scsi add-single-device ..." and the conflict didn't appear again.

At the end of the day, the instance on node 1 could be started. (I only recreated the undo tablespace of node 1, as after the database went into the open state, undo block corruptions were reported --probably caused by the earlier instance terminations)

EBS 12.2 -- Using GSCC in 12.2 Upgrades, using frmcmp_batch and ( is already missing)

Nowadays, we have started doing EBS 12.2 upgrades. I'm sure about it, because I currently have ongoing EBS upgrade projects, and there are upgrade related questions asked in my forum as well. (

This blog post will be based on one of these questions recently asked in my forum.
It was related to GSCC (Global Standards Compliance Checker), which we use in our upgrades to ensure our filesystem and db objects are compatible with the new EBS 12.2 release.

The instructions for using GSCC are documented in "Using the Online Patching Readiness Report in Oracle E-Business Suite Release 12.2 (Doc ID 1531121.1)".

When we look at that note, we see that there are four steps given as instructions, and those 4 steps are all about running sql files.

However, if we look carefully, we see that the same note also gives us the readme of "R1222: STANDALONE READINESS REPORT AND GLOBAL STANDARDS COMPLIANCE CHECKER (GSCC)", which actually gives us the necessary info describing how to run the GSCC ( for checking our filesystem objects.


Using GSCC
The Global Standards Compliance Checker (GSCC) delivered in this patch consists
of the main engine script $FND_TOP/bin/ and a variety of standards
enforcement code (EFC) modules in $FND_TOP/perl/GSCC/OpenEFC/ that check for
common standards issues.
After applying this patch to install the GSCC code, source the applications
environment file and then run the GSCC on a set files under a directory tree
like this:
cd /home/yourdir
$FND_TOP/bin/ -f '/home/yourdir/your-code/*'
In this example, will check all of the files located under the
your-code/ directory using the EFC modules located in
$FND_TOP/perl/GSCC/OpenEFC/ and generate a report named gscc-out.log in the
current working directory (/home/yourdir/ in this example). Invoking without arguments will print a usage message with additional

So, in addition to the 4 steps outlined in document 1531121.1, we need to use to check our filesystem objects (forms, reports etc..) to ensure that they are compatible with the new EBS release. (to ensure custom code complies with E-Business Suite (EBS) online-patching coding standards)

However, we can't use it directly with forms (and maybe reports as well). That is, for instance, we can't give fmb files as input to it, and this is why I'm writing this blog post.

While it just doesn't do anything when given fmb files as input in some of the EBS 12.2 versions, it just gets an error in some other versions, as shown below.

cd $XX_TOP/forms/US
$FND_TOP/bin/ -f XXOEXXX.fmb
The log file is :
more gscc-out.log
GSCC Compliance Test Results:
File: XXOEXXX.fmb

Configuration Errors (1):
0 - File conversion failed for
using the command line OVERWRITE=YES XXOEXXX.fmb
GSCC Totals:
Passes: 0
Fails: 0
Warnings: 0
Under Review: 0
GSCC Errors: 1

--* GSCC Compliance Test: Configuration Error *--

This error is caused by the missing conversion utility, as GSCC tries to use it internally. (as seen in the above output: "using the command line OVERWRITE=YES XXOEXXX.fmb")

Note that it is not delivered with EBS.

One workaround for this can be taking it from a client environment which has Oracle Forms installed. (frm2xml is there when we have Oracle Forms)
So, we take the contents of frmf2xml.bat (it is a bat file, since clients are generally on Windows) and convert it to bash;

For example:
. $ORACLE_HOME/SID_host.env
export CLASSPATH=$CLASSPATH:$ORACLE_HOME/forms/java/frmxmltools.jar:$ORACLE_HOME/forms/java/frmjdapi.jar:$ORACLE_HOME/lib/xmlparserv2.jar
for i in *.fmb
do
  java oracle.forms.util.xmltools.Forms2XML OVERWRITE=yes "$i"
done

So we can run this script for the fmb files, get the xml files created, and then give those xml files to the checker for validation.
However, this method has a problem: this java converter can crash when the fmb files are big. (too big :)

Fortunately, there is another & more efficient way, and it is based on using frmcmp_batch.
The following script can be put in the directory where the fmb files that we want to check reside, and executed to get the whole gscc output in one go.

Note that this script can be modified for use with other file types (such as rdf) as well;
It processes the filesystem objects one by one and produces one final gscc_log which includes all the gscc checks for the fmb files.

for i in *.fmb
do
  echo "$i"
  frmcmp_batch module="$i" userid=apps/apps Script=YES Forms_Doc=YES module_type=FORM
  $FND_TOP/bin/ -f `echo $i | sed '{s/.fmb/.txt/g; }'` -o gscc_`echo $i | sed '{s/.fmb/.log/g; }'`
done

echo "GENERATING GSCC OUTPUT --all FMBs together" > gscc_log
for i in gscc_*.log
do
  echo "---- GSCC OUTPUT FOR $i -----" >> gscc_log
  cat "$i" >> gscc_log
done

Once we get it working, we will see violations in the gscc output, similar to the following..
When we get the violations, we send them to the developers to fix them.

File: XXXXX_fmb.xml

Errors (1):
File.Gen.41 (Code Reviewers: CODE_REVIEWER)
516 - Potential standards violation found in Query/DML. Please check whether
is a direct table reference. If so, change it to use the APPS synonym.
516 - Potential standards violation found in Query/DML. Please check whether
is a direct table reference. If so, change it to use the APPS synonym.

File: XXXX_fmb.xml

Errors (1):
File.Gen.41 (Code Reviewers: CODE_REVIEWER)
814 - Potential standards violation found in Query/DML. Please check whether
is a direct table reference. If so, change it to use the APPS synonym.

Again, this was an issue reported in my forum; thanks "Linda" for pointing this out.

Friday, November 18, 2016

RDBMS-- Applying DB PSU and OJVM PSU to a Single Instance Oracle Database

In this blog post, I will show you the method and logic for installing a DB PSU and OJVM PSU on a single instance Oracle Database.

First, let's see how we can find the PSU and OJVM patches.
Again, this post is only for , but the method and the logic are the same for almost all releases.

In order to find the patches, we use Oracle Support.
We choose Patch Search > Recommended Patch Advisor. Then we choose our platform, release and product (which is Oracle Database, by the way) and click the search button, as depicted in the following screenshot;

That's all we need to do, as the advisor will list the DB PSU, CPU (SPU) and OJVM PSU for our database release.
Note that the SPU was formerly known as the Critical Patch Update; the program which delivers SPUs is still called the Critical Patch Update.

The only thing that we need to do at this point is to download the patches and apply them by following their readme files.

Note that, if we apply the PSU, we can't apply the CPU (SPU), and we don't need to, actually. (the CPU can be thought of as a subset of the PSU)

Well, after learning how to find our PSU and CPU patches, let's take a look at the process of installing them.

Note that, I recommend applying the DB PSU and OJVM PSU. These two are enough if we are on a stable and supported release. In addition, always apply the DB PSU and OJVM PSU after a release upgrade. If there are fixes for your newly upgraded release, you should apply them.. why not upgrade to a better state, right?

So, the following procedure is written for Oracle Databases which are running on Linux x86-64 platform.

Patch 24006111 - Database Patch Set Update (Includes CPUOct2016) ***PSU
*Shutdown the listener and the databases running from the target ORACLE_HOME
*Add opatch to the path ($PATH env variable)


[oracle@coinactdbtst 24006111]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/
OPatch version    :
OUI version       :
Log file location : /u01/app/oracle/product/

Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

--check the readme if conflicts are reported; the action plan (what to do) is in the readme.

*DOWNLOAD THE LATEST OPATCH and install it into the target ORACLE_HOME

The patch number of OPatch is 6880880.
We download it, delete the old OPatch directory in ORACLE_HOME, and then unzip the downloaded zip into ORACLE_HOME.. That's all for installing the new version of OPatch.
OPatch Version: is enough for this work.

*WE APPLY PSU using Opatch apply
cd /home/oracle/oracle_11204/psu/24006111
opatch apply

The opatch should complete with the following messages:

Composite patch 24006111 successfully applied.
OPatch succeeded.


cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> @catbundle.sql psu apply
SQL> @utlrp.sql


select *
from sys.registry$history
where bundle_series = 'PSU'
order by action_time;

opatch lsinventory -bugs_fixed | egrep -i 'PSU|DATABASE PATCH SET UPDATE|APPLIED'

Patch 24433711 - Database Security Patch Update (CPUOct2016)  ***SPU
No need for this patch, as 24006111 fixes the same issues.

Patch 24315821 - Oracle JavaVM Component Database PSU (Oct2016)

[oracle@coinactdbtst 24315821]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version
Copyright (c) 2016, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/
OPatch version    :
OUI version       :
Log file location : /u01/app/oracle/product/

Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

--check the readme if conflicts are reported; the action plan (what to do) is in the readme.

*WE APPLY THE OJVM PSU using opatch apply
cd /home/oracle/oracle_11204/ojvm_psu/24315821
opatch apply


cd $ORACLE_HOME/sqlpatch/24315821
sqlplus /nolog
SQL> startup upgrade  (shutdown first, if the database is already started)
SQL> @postinstall.sql
SQL> shutdown
SQL> startup
SQL> exit
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> @utlrp.sql


select * from sys.registry$history
where version like '%OJVM%'
order by action_time;

opatch lsinventory -bugs_fixed | egrep -i 'PSU|DATABASE PATCH SET UPDATE|APPLIED'

Thursday, November 17, 2016

RDBMS --- Upgrading a Single Instance Oracle Database from to using DBUA /Linux

In this post, I will give you a quick example of upgrading an Oracle Database from to on the Linux 64 bit platform.
This blog post is written to show you the upgrade work required for upgrading a single instance Oracle Database, which is running on a cooked filesystem (non-ASM), to

In this blog post, we'll go through an out-of-place upgrade process, because Oracle strongly recommends that we not upgrade Oracle Database to release or using an existing Oracle home.

Note that the upgrade method demonstrated here is for an example TEST environment; this method may change (at least minor changes may be required) according to your env. However, this blog post is useful for taking a look at the upgrade process, which is actually pretty straightforward and easy once we know what we are doing.

We download " PATCH SET FOR ORACLE DATABASE SERVER" using Patch 13390677 from Oracle Support.
Note that, this patch includes Oracle Database, ASM, Grid, Oracle RAC and so on.

We download 7 zip files and their descriptions are as follows; Oracle Database (includes Oracle Database and Oracle RAC) Oracle Database (includes Oracle Database and Oracle RAC) Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware, and Oracle Restart) Oracle Database Client Oracle Gateways Oracle Examples Deinstall

We create a directory for installing our new Oracle home and configure its ownership accordingly (i.e. chown -R oracle:dba).

We unzip the zip files and execute runInstaller from <unzipped_directory>/database via an X windows session.

We follow the installer screens and install Oracle RDBMS as Software Only.

We run utlu112i.sql in the database which is to be upgraded. (we run the utlu112i.sql located in the new Oracle home, but we run it while our environment is set to the old Oracle home)

We take corrective action by analyzing the output/spool file of utlu112i.sql.


Corrective Actions reported in a demo environment:
exec dbms_stats.gather_dictionary_stats;

We back up our source database. (only a db backup; an Oracle home backup is not required, as we are doing an out-of-place upgrade)
Note that DBUA provides backup options as well.

Next, we run dbua from the new Oracle home and upgrade our database by following the screens, as shown in the following example screenshots..
Note that we shut down our source database before executing dbua, but dbua can handle these stop/start things as well.

Lastly, we check the difference between the newly created spfile and the old spfile. We ensure nothing important has changed there.
--Note that DBUA creates the new spfile based on the old spfile, but checking is still a good thing to do.

Finally, we review the Post Upgrade section of the following document and take the necessary actions if our environment requires us to do so.

Oracle Database Upgrade Guide

Tuesday, November 15, 2016

Linux -- Huge Pages in real life, memory consumption, Huge pages on swap operations, using "overcommit" / nr_overcommit_hugepages

This blog post will be about Huge pages on Linux.
I actually wrote a comprehensive article about Linux memory optimization (including hugepages) earlier, but this blog post will be a little different.
Today, I want to make a demo to show you hugepages in real life and the memory locking mechanism that we need to get used to when we enable hugepages.
The thing that made me write this article was a question that one of my colleagues asked last week.
My colleague realized that after rebooting his database server, the memory directly became "used". Even before starting the database, he could see that memory was in use when he executed the "free" command.
He asked me this question on the phone, and I directly answered: "it is because of huge pages".
However, I wanted to make a demo and see this statement in real life.

Well, let's revisit my earlier blog post and recall the general information about hugepages:
(I strongly recommend you read that blog post as well ->

When we use hugepages, we have smaller page tables in terms of size, because there will be fewer pages to handle, as hugepages are 2MB (or more, depending on the system) in size. In addition, hugepages are never swapped out; they are locked in memory.. The kernel does less bookkeeping work for virtual memory because of the larger page sizes.. Note that: hugepages are not compatible with the Automatic Memory Management that Oracle does, if it is configured to do so..

Let's start our demo. (Note that, my demo env is an Oracle Linux 6.5 x86_64 and the kernel is an UEK 3.8.13-16.2.1.el6uek.x86_64)

HUGEPAGES OCCUPY MEMORY ONCE THEY ARE CONFIGURED (although they are not used by any applications)

Firstly, I will show you the effect of hugepages. You will see that hugepages are never swapped out, and once they are configured, they occupy memory even though they are not used at all.

Initially, our hugepages are not configured as seen below;

[root@jiratemp ~]# cat /proc/meminfo |grep Huge
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
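For scripting these checks, the Huge lines of /proc/meminfo are easy to parse. A small helper of my own (the field names are exactly the ones shown above):

```python
# Parse the HugePages lines of /proc/meminfo text into a dict, so the
# values can be checked from scripts instead of by eye.
def parse_hugepage_info(meminfo_text):
    info = {}
    for line in meminfo_text.splitlines():
        if line.startswith(("HugePages_", "Hugepagesize")):
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # drop the "kB" unit
    return info

sample = """HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB"""
print(parse_hugepage_info(sample)["Hugepagesize"])   # 2048
```

On a live system you would feed it `open("/proc/meminfo").read()` instead of the sample string.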

Next, we sync and drop the filesystem caches to have a clean environment in terms of memory. (We do this because we will use the free command to observe the effect of our actions.)

[root@jiratemp ~]# sync;echo 3 > /proc/sys/vm/drop_caches ; free -m

[root@jiratemp ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7985        609       7375          0          4         47
-/+ buffers/cache:        557       7427
Swap:         8015          0       8015

Afterwards, we configure 2048 hugepages directly via procfs and immediately check the memory usage with the free command;
[root@jiratemp ~]# echo 2048 > /proc/sys/vm/nr_hugepages    (Hugepages are 2MB)
[root@jiratemp ~]#  free -m
                    total       used       free     shared    buffers     cached
Mem:          7985       4709       3275          0          4         49
-/+ buffers/cache:       4655       3330
Swap:         8015          0       8015

A quick explanation for the free command output:

Mem: total   = total physical memory
Mem: used    = MemTotal - MemFree
Mem: free    = free memory
Mem: shared  = meaningless nowadays, can be ignored
Mem: buffers = Buffers
Mem: cached  = Cached
-/+ buffers/cache: used = MemTotal - (MemFree + Buffers + Cached)
-/+ buffers/cache: free = MemFree + Buffers + Cached
Swap: total  = SwapTotal
Swap: used   = SwapTotal - SwapFree
Swap: free   = SwapFree

You see 4709 MB are used. A regular page is 4 KB and one hugepage is 2 MB, so 2048 hugepages make 4096 MB.
The free command reports megabyte values when used with the "-m" argument. The used value is 4709, and 609 MB were already in use before we configured the hugepages: 4709 - 4096 = 613, almost equal to 609. So the extra used megabytes are caused by the hugepages.
I remind you, we didn't use those hugepages; but once configured, they occupy memory, as you see.
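The arithmetic is worth double-checking in a couple of lines (a trivial sanity check, using the numbers from the free outputs above):

```python
# Configuring 2048 hugepages of 2 MB should pin about 4096 MB, which
# matches the jump in "used" memory between the two free outputs.
nr_hugepages = 2048
hugepage_mb = 2
used_before_mb = 609    # free -m output before configuring hugepages
used_after_mb = 4709    # free -m output after configuring hugepages

expected_mb = nr_hugepages * hugepage_mb
print(expected_mb)                       # 4096
print(used_after_mb - used_before_mb)    # 4100, within rounding of 4096
```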

Well, it is certain that huge pages are reserved inside the kernel .

HUGEPAGES ARE NOT SWAPPED OUT EVEN UNDER PRESSURE (even when they are not used by any applications)

Hugepages cannot be swapped out. Really. To test this, I wrote a Python program that takes a single input: the amount of memory, in megabytes, that we want it to allocate.

So, we use this program to create memory pressure and to see what happens to the hugepages while the system is under pressure.
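The original program ("erm") is not listed in the post, so here is my own minimal reconstruction of such a memory-pressure tool; it allocates the requested number of megabytes one chunk at a time and touches every page so the memory is actually faulted in:

```python
import sys

# Minimal sketch of a memory-pressure tool (my own reconstruction, not the
# original "erm" script): allocate the requested number of megabytes, 1 MB
# at a time, touching each 4 KB page so it is really resident.
def allocate_mb(total_mb, chunk_mb=1):
    chunks = []
    for _ in range(total_mb // chunk_mb):
        buf = bytearray(chunk_mb * 1024 * 1024)
        buf[::4096] = b"x" * len(buf[::4096])  # touch every 4 KB page
        chunks.append(buf)
    return chunks

if __name__ == "__main__":
    mb = int(sys.argv[1]) if len(sys.argv) > 1 else 16
    held = allocate_mb(mb)
    print("allocated %d MB" % len(held))
```

Run as `python ./pressure.py 4500` to reproduce the pressure used below (the filename is mine; the post's binary was called `erm`).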

First, we configure 2048 hugepages;

[root@jiratemp ~]# echo 2048 > /proc/sys/vm/nr_hugepages    (Hugepages are 2MB)

[root@jiratemp ~]#  free -m
                    total       used       free     shared    buffers     cached
Mem:          7985       4709       3275          0          4         49
-/+ buffers/cache:       4655       3330
Swap:         8015          0       8015

As you see above, only 3275 MB are free, so almost all of the used memory is occupied by hugepages.
Now, we execute our Python program and try to allocate 4500 MB of memory.

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                           
 3740 root      20   0 4615m 3.4g  424 S  0.0 44.2   1:09.38 python ./erm 4500 

While our program is running, we take the free command output every second to see the system-wide memory usage;

[root@jiratemp ~]# free -m -s 1
                     total       used       free     shared    buffers     cached
Mem:          7985       4701       3284          0          0         43
-/+ buffers/cache:       4656       3328
Swap:         8015          0       8015

                     total       used       free     shared    buffers     cached
Mem:          7985       4701       3284          0          0         43
-/+ buffers/cache:       4656       3328
Swap:         8015          0       8015

                    total       used       free     shared    buffers     cached
Mem:          7985       4703       3281          0          0         45
-/+ buffers/cache:       4657       3327
Swap:         8015          0       8015

                    total       used       free     shared    buffers     cached
Mem:          7985       5750       2234          0          0         46
-/+ buffers/cache:       5704       2281
Swap:         8015          0       8015

                    total       used       free     shared    buffers     cached
Mem:          7985       7928         56          0          0         46
-/+ buffers/cache:       7881        103
Swap:         8015          0       8015


You see, as our program allocates more memory every second, the free memory gets closer to 0 (zero).

Moreover, because of this pressure, our server starts to hang, and when we check the situation with the top command (using our limited CPU cycles), we see that kswapd is running aggressively.


 60 root      20   0     0    0    0 R 11.2  0.0   0:16.84 kswapd0  

When we check our process's status file under /proc, we see that part of it has already been swapped out;

[root@jiratemp 3740]# cat status|grep Swap
VmSwap:  1018512 kB

Moreover, when we check our process, we see its resident memory is 3.4 GB, as seen below;

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                           
 3740 root      20   0 4615m 3.4g  424 S  0.0 44.2   1:09.38 python ./erm 4500 

However, the virtual memory of our process is 4615m, as you see above.

(VIRT = 4615 MB but RES = 3.4 GB)

This is a little interesting, right? We requested 4500 MB of memory, but our resident memory is only 3.4 GB.

The situation is the same when we run a C program and try to allocate 5500 megabytes.

The program slows down when it reaches about 3 GB of allocated memory, and swap activity is triggered.

Our application gets stuck at this point, but if we wait for the swap daemon to swap memory out, we see that the program can actually allocate all 5500 MB. Look, the program reports it is allocating its 5549th MB;

Currently allocating 5535 MB
Currently allocating 5536 MB
Currently allocating 5537 MB
Currently allocating 5538 MB
Currently allocating 5539 MB
Currently allocating 5540 MB
Currently allocating 5541 MB
Currently allocating 5542 MB
Currently allocating 5543 MB
Currently allocating 5544 MB
Currently allocating 5545 MB
Currently allocating 5546 MB
Currently allocating 5547 MB
Currently allocating 5548 MB
Currently allocating 5549 MB

But when we look at the top output below, we see that RES is only 3.6 GB, while VIRT has grown to about 5.4 GB. So swap is in play.

 4191 root      20   0 5418m 3.6g  152 R 64.0 45.7   1:10.96 ./a.out  

So our pages are swapped out! (Remember: VIRT is the total amount of virtual memory used by the task. It includes all code, data and shared libraries, plus pages that have been swapped out.)

VIRT  --  Virtual Image (kb)
          The  total  amount  of  virtual  memory  used by the task.  It includes all code, data and shared libraries plus pages that have been swapped out. (Note: you can
          define the STATSIZE=1 environment variable and the VIRT will be calculated from the /proc/#/state VmSize field.)
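top's VIRT and RES correspond to the VmSize and VmRSS fields of /proc/&lt;pid&gt;/status, so the same comparison can be done from a script. A small Linux-only helper (my own sketch):

```python
# Read VmSize (virtual size, top's VIRT) and VmRSS (resident set, top's RES)
# from /proc/<pid>/status. Linux-only; values are reported in kB.
def vm_size_and_rss(pid="self"):
    sizes = {}
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":", 1)
                sizes[key] = int(value.split()[0])
    return sizes

s = vm_size_and_rss()
print(s["VmSize"], s["VmRSS"])   # VmSize is always >= VmRSS
```

The gap between the two numbers includes, among other things, the pages that have been swapped out.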

Well, if we disable the hugepages, we can allocate "resident" memory using the same program. Here is an example top output for the same program (hugepages disabled):

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 4333 root      20   0 6623m 6.5g  376 S 67.2 82.9   0:03.33 a.out        

You see RES = 6.5g, so it allocates resident memory. (a.out is a C program which continuously and endlessly allocates and uses memory.)

So, this proves that hugepages are not swapped out, even in the case of a memory shortage. Moreover, when there is a memory shortage and almost all of the memory is held by hugepages, the pages recently allocated by our program get swapped out to make room for the program to allocate more memory :).

Another interesting thing: if there is not enough free memory, hugepages cannot be configured properly.
We can test this by allocating regular pages with a self-written program.
When we do such a test, we see that the hugepages are not allocated, although we issue the commands;

So, we allocate all the memory using a self-written application and then try to configure 2048 hugepages.
The interesting part is that our command does not report any error, yet the hugepages are not allocated at all;

[root@jiratemp ~]# echo 2048 > /proc/sys/vm/nr_hugepages
[root@jiratemp ~]# echo $?
[root@jiratemp ~]# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152        1        1        1        *
[root@jiratemp ~]# grep Huge /proc/meminfo
HugePages_Total:       1
HugePages_Free:        1
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

As you see, setting hugepages is a task that should be done carefully. Since they are not swapped out, the system may hang in case of a memory shortage, and the risk of a memory shortage actually increases when we use hugepages, or rather, when we merely configure them (even unused).

Well, there is actually an alternative way of configuring hugepages. Using the overcommit configuration, we can at least reduce the memory held by our hugepages when they are not used by any process.


Let's introduce the overcommit setting for hugepages first (from the kernel documentation);

/proc/sys/vm/nr_overcommit_hugepages specifies how large the pool of
huge pages can grow, if more huge pages than /proc/sys/vm/nr_hugepages are
requested by applications. Writing any non-zero value into this file
indicates that the hugetlb subsystem is allowed to try to obtain that
number of "surplus" huge pages from the kernel's normal page pool, when the
persistent huge page pool is exhausted. As these surplus huge pages become
unused, they are freed back to the kernel's normal page pool.
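In other words, the pool can grow up to nr_hugepages plus nr_overcommit_hugepages; this is the "Maximum" column that hugeadm --pool-list reports (in pages):

```python
# The effective upper bound of the hugepage pool is the persistent pool
# (nr_hugepages) plus the surplus allowance (nr_overcommit_hugepages).
def pool_maximum(nr_hugepages, nr_overcommit_hugepages):
    return nr_hugepages + nr_overcommit_hugepages

print(pool_maximum(100, 1000))   # 1100
```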

So, if we set nr_hugepages to a lower value and set the overcommit hugepages to a value large enough to meet our peak hugepage demand, then we get dynamic hugepage allocation in our environment.

Let's make a demo and see how it is done and how it behaves;

We set 100 hugepages and we set 1000 overcommit hugepages

[root@jiratemp ~]# echo 100 > /proc/sys/vm/nr_hugepages
[root@jiratemp ~]# echo 1000 > /proc/sys/vm/nr_overcommit_hugepages

We check the /proc/meminfo and hugepage pool list and see only 100 hugepages are allocated (as no processes use any hugepages at the moment)

[root@jiratemp ~]# grep Huge /proc/meminfo 
HugePages_Total:     100
HugePages_Free:      100
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

[root@jiratemp ~]# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152      100      100     1100        *

We sync and clear the caches to have a fresh start in terms of memory, and then allocate shared memory from the hugepages (just like an Oracle database does :)

[root@jiratemp ~]# sync;echo 3 > /proc/sys/vm/drop_caches ; free -m
             total       used       free     shared    buffers     cached
Mem:          7985        736       7249          0          0         21
-/+ buffers/cache:        713       7271
Swap:         8015        125       7890

Note: for allocating shared memory from the hugepages, I use the following C program:

#define _GNU_SOURCE
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void){
    /* Allocate a 1 GB shared memory segment backed by hugepages.
       The segment persists after the process exits, until it is
       removed (e.g. with ipcrm). */
    const size_t shared_segment_size = 0x40000000;
    int segment_id_1 = shmget(IPC_PRIVATE, shared_segment_size,
                              SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);
    return segment_id_1 == -1 ? 1 : 0;
}

0x40000000 means 1 GB, which corresponds to 512 hugepages on this system (2 MB each).
So, we tell our program to allocate 1 GB (512 hugepages) of shared memory from the hugepages.
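The conversion from the segment size passed to shmget to a hugepage count is simple (2 MB hugepages assumed, as on this system):

```python
# Convert a shared memory segment size in bytes to the number of 2 MB
# hugepages needed to back it, rounding up to whole pages.
def hugepages_for(segment_bytes, hugepage_bytes=2 * 1024 * 1024):
    return -(-segment_bytes // hugepage_bytes)

print(hugepages_for(0x40000000))   # 512
```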

Remember, our hugepage count was 100, so there were 100 hugepages in our pool, as shown earlier. On the other hand, we set 1000 overcommit hugepages.

Well, when we execute this program, we see that the pool now holds 512 pages. So our pool has been enlarged :)

[root@jiratemp ~]# grep Huge /proc/meminfo 
HugePages_Total:     512
HugePages_Free:      511
HugePages_Rsvd:      511
HugePages_Surp:      412
Hugepagesize:       2048 kB

[root@jiratemp ~]#  hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152      100      512     1100        *

[root@jiratemp ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7985       1567       6417          0          3         25
-/+ buffers/cache:       1538       6446
Swap:         8015        125       7890

Now, our free memory has decreased by roughly 830 MB: the pool gained 412 surplus pages (about 824 MB) on top of the 100 persistent pages that were already counted as used.
So overcommit works perfectly. We had 100 hugepages at first, occupying only 200 MB. When we needed more, we could get more (thanks to overcommit).
We got ourselves an environment with dynamic hugepage allocation.


Let's try with the Oracle database;

First of all, our limits.conf must be configured properly to use hugepages. In other words, the oracle OS user must be able to lock memory when the instance is instructed to use hugepages (especially in hugepages-ONLY mode!)

This can be done in 2 ways.

1) By setting a capability on the oracle binary, as the root user:
setcap cap_ipc_lock=+ep oracle

2) By adding the following (change the values according to your needs) to the limits.conf
oracle soft    memlock        unlimited
oracle hard    memlock        unlimited
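If you prefer a bounded memlock limit instead of "unlimited", it must be at least as large (in kB) as the SGA you intend to lock. A hypothetical sizing helper of my own (the 10% headroom is just an assumption, not an Oracle recommendation):

```python
# Convert an SGA size in MB to a memlock value in kB for limits.conf,
# with a headroom margin. Illustrative helper, not an Oracle tool.
def memlock_kb_for_sga(sga_mb, headroom_pct=10):
    return sga_mb * 1024 * (100 + headroom_pct) // 100

print(memlock_kb_for_sga(1000))   # 1126400 kB for a 1000 MB SGA
```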

If we don't apply one of these configurations, we end up with the following ORA-27137 error;

[oracle@jiratemp ~]$ sqlplus "/as sysdba"

SQL*Plus: Release Production on Thu Nov 10 11:08:01 2016
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup nomount;
ORA-27137: unable to allocate large pages to create a shared memory segment
Linux-x86_64 Error: 1: Operation not permitted
Additional information: 14680064
Additional information: 1

Well, suppose we have configured our memory-lock limits (or set the capability on the oracle binary) and configured our memory-related database parameters as follows. These parameters control the initial memory allocation of the Oracle database at startup (sga_target is the one that actually drives it):

sga_max_size = 1000M
sga_target = 500M
use_large_pages = ONLY   --> this instructs Oracle to use hugepages only.

We set hugepage overcommit to 1000 and hugepage count to 100 ;

[root@jiratemp ~]# sync;echo 3 > /proc/sys/vm/drop_caches ; free -m

             total       used       free     shared    buffers     cached
Mem:          7985        205       7779          0          0         16
-/+ buffers/cache:        189       7796
Swap:         8015         30       7985

[root@jiratemp ~]# grep Huge /proc/meminfo

HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

[root@jiratemp ~]# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152        0        0        0        *

[root@jiratemp ~]# echo 100 > /proc/sys/vm/nr_hugepages
[root@jiratemp ~]# echo 1000 > /proc/sys/vm/nr_overcommit_hugepages

[root@jiratemp ~]# hugeadm --pool-list
      Size  Minimum  Current  Maximum  Default
   2097152      100      100     1100        *

So, we start up our Oracle database (note that starting the database in nomount mode is enough for this test) as follows;

oracle@jiratemp ~]$ sqlplus "/as sysdba"

SQL*Plus: Release Production on Thu Nov 10 11:23:41 2016
Copyright (c) 1982, 2013, Oracle.  All rights reserved
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 1043886080 bytes
Fixed Size                  2296280 bytes
Variable Size             876611112 bytes
Database Buffers          159383552 bytes
Redo Buffers                5595136 bytes

We check that our parameters are set.

SQL> show parameter sga_target   

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer     500M

SQL> show parameter sga_max_size  
sga_max_size                     big integer   1000M

Once our database is started, we immediately check the hugepage usage.

Remember, our overcommit hugepage count was set to 1000 (about 2000 MB worth of pages) and our persistent hugepage count was set to 100 (200 MB).

[root@jiratemp ~]#  grep Huge /proc/meminfo 
HugePages_Total:     501
HugePages_Free:      252
HugePages_Rsvd:      252
HugePages_Surp:      401
Hugepagesize:       2048 kB

You see? The total hugepage count is now 501 (it was 100 earlier), and the hugepage surplus is 401. So Linux let Oracle allocate more hugepages than configured, respecting the overcommit setting. In other words, Linux let Oracle allocate (reserve + allocate) roughly 500 hugepages by enlarging the hugepage pool automatically and dynamically.

As we set the sga_target parameter to 500M, Oracle used almost 250 hugepages (see HugePages_Free at 252, out of 501), and it also reserved 252 more hugepages because sga_max_size was set to 1000M.
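The rough arithmetic behind these meminfo numbers (2 MB hugepages; the couple of extra pages Oracle grabs for fixed overhead explain the 252 vs 250 difference):

```python
# With 2 MB hugepages: sga_target=500M needs about 250 pages (in use) and
# sga_max_size=1000M commits about 500 pages in total; the remainder stays
# reserved until the SGA grows.
HUGEPAGE_MB = 2

def pages(mb):
    return mb // HUGEPAGE_MB

used = pages(500)              # backing sga_target
total = pages(1000)            # committed for sga_max_size
print(used, total - used)      # 250 250
```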

Now, we set the sga_target parameter to 1000M and see whether Linux will let Oracle use all of the roughly 500 hugepages it reserved.

SQL> alter system set sga_target=1000M scope=memory;

System altered.
[oracle@jiratemp ~]$ grep Huge /proc/meminfo 
HugePages_Total:     501
HugePages_Free:        2
HugePages_Rsvd:        2
HugePages_Surp:      401
Hugepagesize:       2048 kB

Yes. Oracle used almost 500 hugepages to build its 1000M SGA on top of them.

When we shut down our Oracle database, we see the hugepage pool shrink dynamically; the space occupied by Oracle's hugepages is freed, as expected.

SQL> shu immediate;
ORA-01507: database not mounted
ORACLE instance shut down.

[root@jiratemp ~]#  grep Huge /proc/meminfo 
HugePages_Total:     100
HugePages_Free:      100
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB


Well, we see that Oracle works fine with the hugepage overcommit setting (nr_overcommit_hugepages).
What Oracle actually does is honour sga_max_size at instance startup and commit to allocating the hugepages needed to satisfy sga_max_size. However, it only actually uses the hugepages needed to satisfy sga_target.
Although the configured (persistent) number of hugepages is lower, Oracle can still obtain enough hugepages to satisfy its sga_max_size, because a hugepage overcommit configuration is in place. Furthermore, when the Oracle database is shut down, the surplus hugepages are given back to the OS.


So, following are the conclusions;

  • Hugepages are not swapped out under any circumstances.
  • Hugepages are not given to processes that want to allocate regular pages.
  • Hugepages occupy memory once they are configured, even though they are not used.
  • nr_overcommit_hugepages is a good thing in the relevant cases.
  • nr_overcommit_hugepages lets a process allocate more hugepages than configured.
  • When a process shuts down gracefully or releases its memory, the hugepages used by that process are given back to the system, and any surplus hugepages it obtained through overcommit are returned to the kernel's normal page pool.
  • Oracle works fine with the nr_overcommit_hugepages parameter.
  • Oracle allocates (but does not necessarily use) shared memory from hugepages based on sga_max_size. (Think of Oracle calling shmget at instance startup with a segment size equal to sga_max_size.) So, when Oracle allocates shared memory at startup, it reserves the hugepages and the reserved hugepage count increases. On the other hand, Oracle uses hugepages based on sga_target, which is why HugePages_Free (seen in /proc/meminfo) decreases accordingly.
  • If we use hugepages (ONLY mode), we should set sga_max_size equal to sga_target, in order not to waste memory.

Well, after knowing all this, I want to give an example where hugepage overcommit can be used to address a memory wastage problem.


  • We have 3 databases on a single server. Let's say they are TEST, DEV and UAT.
  • We sometimes work only in TEST, sometimes in DEV, sometimes in 2 of these databases, and sometimes in all 3.
  • We also have processes/sessions that do lots of PGA work.
  • We have 100 GB of RAM and we want to reserve 10 GB of it for the OS.
  • We want a 20 GB SGA for each of these databases.
  • We want to use sga_max_size/sga_target (ASMM, not AMM) and use hugepages as well.

Now, consider: which one of the following settings is the better idea?

1)
 echo 10240 > /proc/sys/vm/nr_hugepages
 echo 30720 > /proc/sys/vm/nr_overcommit_hugepages

2)
 echo 30720 > /proc/sys/vm/nr_hugepages
 echo 30720 > /proc/sys/vm/nr_overcommit_hugepages


*My answer is 1).
The reason is explained above. :)
What is yours? Please feel free to comment...