Monday, February 29, 2016

Linux -- adjusting root reserve and other stuff for an EXT4 partition, tune4fs, e4fsprogs package

As you may remember, I wrote a blog post about the root reserve on Linux ext* filesystems: http://ermanarslan.blogspot.com.tr/2014/09/linux-tune2fs-adjusting-reserved-blocks.html.

Today, I tried the same thing on an ext4 filesystem residing on an Oracle VM Server-based virtual disk in a customer test environment, and the result was:

[root@oracleserver TEST]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2            9.3G  4.4G  4.9G  48% /
/dev/xvda1            487M   27M  435M   6% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
/dev/xvdb1            197G  187G     0 100% /u01
/dev/xvdb1            197G  187G     0 100% /ortest

[root@oracleserver TEST]# tune2fs -m 2 /dev/xvdb1
tune2fs 1.39 (29-May-2006)
tune2fs: Filesystem has unsupported feature(s) while trying to open /dev/xvdb1
Couldn't find valid filesystem superblock.


This was actually expected.
Operations like this on an ext4 filesystem should be done using tune4fs, not tune2fs (the installed tune2fs 1.39 predates ext4 and doesn't recognize its features).

So, as you can see below, with tune4fs everything went fine:

[root@oracleserver TEST]# tune4fs -r 0 /dev/xvdb1
tune4fs 1.41.12 (17-May-2010)
Setting reserved blocks count to 0

[root@oracleserver TEST]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2            9.3G  4.4G  4.9G  48% /
/dev/xvda1            487M   27M  435M   6% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
/dev/xvdb1            197G  187G   10G  95% /u01
/dev/xvdb1            197G  187G   10G  95% /ortest

tune4fs is delivered by the e4fsprogs package.

e4fsprogs also brings other tools for working with ext4 filesystems. Here is a short list:

e4fsck -> used to repair filesystem inconsistencies after an unclean shutdown
mke4fs -> used to initialize a partition to contain an empty ext4 filesystem
debugfs -> used to examine the internal structure of a filesystem, to manually repair a corrupted filesystem, or to create test cases for e4fsck
tune4fs -> used to modify filesystem parameters

Lastly, although it is not clearly documented (or I didn't have time to search the whole web :)), the e4fsprogs package is essentially a newer version of e2fsprogs, and in the same manner tune4fs is the newer version of tune2fs.
The bottom line: tune4fs and all the other *4fs programs delivered within e4fsprogs can deal with ext4 filesystems.
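To put the reserve into perspective, here is a quick sketch (plain shell arithmetic, using the sizes from the df outputs above) of how much space the default 5% root reserve consumes on a filesystem the size of /dev/xvdb1:

```shell
# Estimate the space consumed by the root reserve.
# 197 GB is the size of /dev/xvdb1 from the df output; 5% is the ext* default.
FS_SIZE_GB=197
RESERVE_PCT=5
awk -v size="$FS_SIZE_GB" -v pct="$RESERVE_PCT" \
    'BEGIN { printf "reserved: %.2f GB of %d GB\n", size * pct / 100, size }'
```

That is roughly the ~10 GB that came back in the df output after setting the reserve to 0 with tune4fs -r 0. On a live system, tune4fs -l /dev/xvdb1 should show the actual "Reserved block count".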

Thursday, February 25, 2016

EBS 12.2 -- Java Color Scheme, Changing the color

This is a very basic subject, and almost every APPS DBA knows it:
we change the color of EBS using the Java Color Scheme profile option.
We typically change the color after events such as cloning, to let users distinguish the environments by their colors.

What made me write this blog post, however, is a new requirement in this process.
In EBS 12.2, after changing the Java Color Scheme profile option, we need to restart the oacore managed server (and, just in case, the Oracle HTTP Server as well). Without the restart, the color of the forms screens will not change.
Also, the Java Look and Feel profile option must be null or set to the value "oracle" for EBS to honor the Java Color Scheme profile option value.

The Java Color Scheme profile option value can be updated from the back end as follows (don't forget to commit):

UPDATE fnd_profile_option_values erm
SET erm.PROFILE_OPTION_VALUE='PURPLE'
WHERE erm.PROFILE_OPTION_ID in ( SELECT erm2.PROFILE_OPTION_ID
FROM FND_PROFILE_OPTIONS_TL erm1,FND_PROFILE_OPTIONS erm2
WHERE erm1.USER_PROFILE_OPTION_NAME = 'Java Color Scheme' and
erm2.PROFILE_OPTION_NAME = erm1.PROFILE_OPTION_NAME);
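After committing the update, the restart could look like the sketch below. The admanagedsrvctl.sh and adapcctl.sh scripts are the standard EBS 12.2 $ADMIN_SCRIPTS_HOME control scripts, but the managed server name oacore_server1 is an assumption based on the default naming -- check your own instance before running anything.

```shell
# Sketch: bounce the oacore managed server (and optionally OHS) in EBS 12.2
# so that a changed Java Color Scheme takes effect.
# Run as the applications OS user with the run-edition environment sourced.
# oacore_server1 is the default managed server name -- adjust if yours differs.
cd $ADMIN_SCRIPTS_HOME
./admanagedsrvctl.sh stop oacore_server1     # stop the oacore managed server
./admanagedsrvctl.sh start oacore_server1    # start it again
./adapcctl.sh stop                           # optionally bounce OHS as well
./adapcctl.sh start
```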

ODA Virtualized -- ODA_BASE, oakd, oakcli OAKERR:6002 No master node was found

You may encounter this problem when trying to use oakcli on the ODA_BASE nodes.
If you hit it, any oakcli command will return OAKERR:6002, because the oakd process is not running.
In order to use oakcli, the oakd process must be up and running on at least one of the ODA_BASE nodes.
If oakd is stopped for some reason, then you can't run oakcli commands, as shown in the command outputs below:

[root@ermoravmsrv1 ~]# oakcli show vm
OAKERR:6000 Exception encountered in Module getMasterNode
OAKERR:6002 No master node was found

[root@ermoravmsrv1 ~]# oakcli show ismaster
Failed to connect to oakd.

So, the standard solution for this problem is restarting oakd using oakcli, but in reality oakcli cannot handle that.
Well, when this happens, we need to run oakd manually and save the day.

Here are the environment settings and the command that I used for starting oakd manually.
Note that the environment settings are important; without a proper environment, you can't run oakd.

export ORA_OAK_HOME=/opt/oracle/oak
export ORA_CRS_HOME=/u01/app/11.2.0.4/grid
export CONSOLE=/dev/console
export NODENUM=0
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/oracle/oak/lib:/usr/lib64/sun-ssm/storagelibs/sdks:/u01/app/11.2.0.4/grid/lib
export ORA_NLS10=/opt/oracle/oak/nls/data
export SUN_HMP_LIBS=/usr/lib64/sun-ssm/storagelibs/sdks
export OAK_MS_DEBUG=1
export PATH=$PATH:/bin:/usr/bin:/sbin:/usr/sbin
export ORACLE_HOME=/u01/app/12.1.0.2/grid

[root@ermoravmsrv1 ~]# cd /opt/oracle/oak/bin
[root@ermoravmsrv1 bin]# nohup ./oakd foreground &

-- wait a little (about 30 seconds) for oakd to initialize itself.
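A quick way to verify that oakd came up is simply to check for the process. This is a generic sketch using pgrep; on a machine where the daemon isn't running it just reports that:

```shell
# Check whether the oakd daemon is running on this node.
if pgrep -x oakd > /dev/null 2>&1; then
    echo "oakd is running"
else
    echo "oakd is NOT running"
fi
```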


As shown below, after starting oakd manually from the command line, the oakcli show vm command could do its job properly:

[root@ermoravmsrv1 bin]# oakcli show vm

          NAME                                  NODENUM         MEMORY          VCPU            STATE           REPOSITORY
   
        EBS_12_2_3_PROD_APP                     0               17384              8            ONLINE          vmrepo1                
        EBS_12_2_3_PROD_DB                      0               65536             16            ONLINE          vmrepo1                
        EBS_12_2_3_TEST_APP                     1               17384              8            ONLINE          vmrepo1                
        EBS_12_2_3_TEST_DB                      1               65536              8            ONLINE          vmrepo1                
        EBS_12_2_3_VISION                       0               48000             16            ONLINE          vmrepo1                

So, the quick fix is to start oakd manually, but the proper and stable fix is actually restarting ODA_BASE, which requires downtime.

Without an ODA_BASE restart, oakcli can work, but it cannot restart a manually started oakd. That's why we recommend restarting ODA_BASE to make the system run as it is supposed to run.
Why can't oakcli restart oakd, even though it can be started manually?
The answer lies in the code, because no logs are generated when this happens. I wish I could dig into the scripts and code, but I had no time to review the code, Perl scripts, and configuration files.
So, if you encounter this problem, apply my workaround and request downtime for restarting ODA_BASE.

Monday, February 22, 2016

ODA X5 BARE METAL deployment

The deployment of ODA X5 actually consists of 5 tasks:
  • Cabling
  • Power on
  • Configure the network
  • Deploy the GI bundle
  • Post-install tasks
In this post, I will go through the deployment process for a BARE METAL deployment.
Actually, it is already briefly documented in the setup poster available via https://docs.oracle.com/cd/E22693_01/doc.12/e55694.pdf


Let's have a closer look at the process.

Before the installation:

  • Decide how many cores to enable.
  • Decide between fiber and copper for the public network. If fiber is desired, the InfiniBand cards are replaced with 10GbE SFP+ fiber cards.
  • Determine the DNS server's IP address (if available).
  • Determine the NTP server's IP address (if available).
  • Request the IP addresses. 8 IP addresses are required: 1 public, 1 virtual, and 1 ILOM per node, plus 2 SCAN addresses. The IP addresses should be on the same subnet. All of them must be static, and the public IP addresses should be resolvable through DNS. Hostnames should be at most 13 characters.
  • Request redundant switches for the public interfaces to prevent a single point of failure.
  • Don't worry about the private IP addresses; they are configured by ODA.
  • If a custom installation is desired, determine the following:
    •  Cluster name/ODA system name
    •  Type of configuration, Custom? (block size, language, territory, disk redundancy level, ASR configuration, etc.)
    •  Mode (Bare Metal or Virtualized)
    •  Cloud filesystem size
    •  Region, timezone, initial database name and details (RAC or single node, database name, etc.)
  • Attach the node-to-storage cables.

  • Attach a monitor to the graphics card port and a keyboard and a mouse to the USB ports on Node 0.
  • Attach all power cords.

Start Storage shelves:

Start the storage shelves and wait until the green LEDs stop blinking and the SP (Service Processor) LEDs on the nodes stay steadily lit.


Start ODA nodes:

Power on the ODA nodes by pushing the Power button on each node's front panel.
Wait for the green Power OK LEDs on the nodes to stop blinking.

Check the Storage cables:
Use "/opt/oracle/oak/bin/oakcli validate -c storagetopology" to check whether the storage is cabled properly.

Configure the network:

Log in to node 1 and configure the network using the command /opt/oracle/oak/bin/oakcli configure firstnet. Select the Global option, and then enter the domain name, DNS servers, host names, network interface, IP addresses for the nodes, netmask, and gateway when prompted.

Deployment:

Deploy the Software on Database Appliance:

ODA comes with the OS and the Appliance Manager (oakcli) pre-installed,
so we only need to install the end-user bundle.
Download the end-user bundle by following Oracle Support Note 888888.1, copy it to node 1, and unpack it using oakcli (oakcli unpack -package /tmp/p12978712_210000_Linux-x86-64.zip).

Run the Configurator: connect to Node 0 over a VNC session, log in as the root user with the default password, and enter the following command:

# oakcli deploy

POST INSTALL TASKS:
Complete the "Oracle Database Appliance Postinstallation Tasks" section of the "Oracle® Database Appliance Getting Started Guide, Release 12.1.2.2.0 for Linux x86-64 (E22692-41)".
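To recap, the command-line portion of the whole flow can be condensed as below. This is only a sketch for orientation: every step is interactive, must be run as root on the appliance, and the bundle file name is just the example used above.

```shell
# Condensed ODA X5 bare-metal deployment flow (interactive, run as root).
/opt/oracle/oak/bin/oakcli validate -c storagetopology   # verify storage cabling
/opt/oracle/oak/bin/oakcli configure firstnet            # initial network setup
/opt/oracle/oak/bin/oakcli unpack -package /tmp/p12978712_210000_Linux-x86-64.zip
/opt/oracle/oak/bin/oakcli deploy                        # launch the Configurator
```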

EBS 12.2 startCD 12.2.0.51 released. It delivers a 12c database!

startCD 12.2.0.51 has been released. It is a game changer, as it delivers Oracle Database 12.1.0.2 as the EBS 12.2 database.
So, using this new startCD, upgrading the database from 11gR2 to 12c is no longer needed.
12.2.0.51, which is available with Patch 22066363, also delivers Fusion Middleware 11.1.1.9.0.
So it is the latest EBS startCD to use when installing EBS 12.2.5.

Oracle recommends using startCD 12.2.0.51 for new installations and EBS upgrades.
On the other hand, although the startCD is new, it doesn't provide a new EBS release like 12.2.5, so we still install 12.2.0 and upgrade it to 12.2.5.

Note that upgrading from 12.2.0 to 12.2.5 is a must.

Consider this information if you are about to install EBS 12.2.

Monday, February 15, 2016

EBS R12 -- SHA 2 certificates are supported with EBS 12.1.3

It seems that using SHA-2 certificates in EBS 12.1.3 SSL implementation projects is now certified and supported.
It wasn't in the past, so I find it useful to share this with you.

Earlier;

EBS, even the latest version, EBS 12.2, did not support SHA-2 certificates.
Oracle stated this as follows;
Ref: Oracle Support
"At the present, there is no Oracle solution to this problem. An internal Bug 8839166- support for sha2 at ssl level has been raised.
For Fusion Middleware 11g, the future plans are that these algorithms will be supported when a release of FMW is released that incorporated
11.2.0.3 Required Support Files or higher."
Now;

SHA-2 signed PKI certificates are now certified for inbound connections to the Oracle HTTP Server (OHS) delivered with Oracle E-Business Suite 12.1.3.

There are also, of course, some requirements. Using SHA-2 certificates requires:

the application-tier OPatch to be at least 1.0.0.0.63
the iAS ORACLE_HOME to be at 10.1.3.5
the October CPU to be applied on the iAS ORACLE_HOME

Check Steven Chan's Oracle blog for details and pointers to the implementation documents:

https://blogs.oracle.com/stevenChan/entry/sha_2_signed_pki_certificates

Thanks Karan Kukreja for pointing this out.

Saturday, February 13, 2016

Using dd direct I/O for a meaningful I/O test (especially for LGWR) -- oflag=direct

Recently, I did a health check in a critical production environment and saw the following warning in the LGWR trace files:

"Warning: log write elapsed time 12000m"

There were high log file sync and log file parallel write wait times in the AWR reports, so I decided to do an I/O test on the underlying device.
The redo log files were residing on a Veritas filesystem built on top of disks coming from 3PAR storage.

As the LGWR process always does direct I/O and bypasses the filesystem buffer cache, doing a dd test like the following was not meaningful:
dd if=/dev/zero of=/erm/testdatafile1 bs=1k count=2500k conv=fsync

In addition, comparisons based on such a test were not meaningful either: it seemed Veritas was doing direct I/O even when the OS filesystem cache was enabled and no direct I/O flag was passed to dd, whereas a Linux filesystem like ext3 was using the cache for its writes, and that's why there was a big difference in the dd outputs.

The correct method in these situations is to do the write tests using the direct I/O flag of the dd command.
Something like the following does the job:

dd if=/dev/zero of=/erm/testfilenew1 bs=1k count=50000 oflag=direct

Just before the command above, we can also disable the drive's write cache, just to be sure no cache is involved at all (it actually doesn't matter when oflag=direct is supplied to dd, but it is still worth mentioning):

hdparm -W0 /dev/sda1


Lastly, here is an example run that I did on a standard ext3 filesystem:

[root@ermantest ~]# dd if=/dev/zero of=/erm/testfilenew1 bs=1k count=50000 oflag=direct
50000+0 records in
50000+0 records out
51200000 bytes (51 MB) copied, 16.8227 seconds, 3.0 MB/s

[root@ermantest ~]# hdparm -W0 /dev/sda1 (disabling the cache)

/dev/sda1:
 setting drive write-caching to 0 (off)
 HDIO_DRIVE_CMD(setcache) failed: Inappropriate ioctl for device
[root@exatest ~]# dd if=/dev/zero of=/erm/testfilenew2 bs=1k count=50000 oflag=direct
50000+0 records in
50000+0 records out
51200000 bytes (51 MB) copied, 16.6633 seconds, 3.1 MB/s

So, as seen, we do fixed 1k-sized I/Os and get about 3.1 MB/s throughput on a standard Linux ext3 filesystem residing on a local disk.

If we use /dev/urandom rather than /dev/zero, we measure about 1.4 MB/s throughput, but that is just because of the CPU overhead of generating the random data; from an I/O perspective, using /dev/urandom or /dev/zero makes no difference in such a test.

[root@ermantest erm]# dd if=/dev/urandom of=testfilenew4 bs=1k count=50000 oflag=direct
50000+0 records in
50000+0 records out
51200000 bytes (51 MB) copied, 35.7821 seconds, 1.4 MB/s

Why am I sharing this?

Because I want to give you a reasonable dd test for making a decision about I/O performance on Linux.
I suggest using a command like dd if=/dev/zero of=/erm/testfilenew2 bs=1k count=50000 oflag=direct for testing LGWR-type I/O, and expecting at least 3 MB/s throughput from it.

If you see lower throughput, like 500 KB/s, I suggest you speak with the OS and filesystem admins (e.g., the Veritas admin), as well as the storage admins, so they can check their tiers too. The HBA in particular should be checked, as direct I/Os need space in the queue, and the HBA queue is a good place to look.
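To make the test above repeatable, it can be wrapped in a small script that tries a couple of block sizes with O_DIRECT and falls back to fsync writes where the filesystem rejects direct I/O. The file name and counts are arbitrary choices of mine; adjust them for your environment:

```shell
# Direct-I/O write test sketch: small fixed-size writes, like LGWR does.
# Writes a scratch file in the current directory and removes it afterwards.
TESTFILE="./dd_io_test.$$"
for BS in 1k 4k; do
    if dd if=/dev/zero of="$TESTFILE" bs="$BS" count=1000 oflag=direct 2>/dev/null; then
        echo "bs=$BS: direct I/O write completed"
    else
        # Some filesystems (e.g. tmpfs) do not support O_DIRECT at all.
        dd if=/dev/zero of="$TESTFILE" bs="$BS" count=1000 conv=fsync 2>/dev/null
        echo "bs=$BS: O_DIRECT unsupported, used fsync fallback"
    fi
    rm -f "$TESTFILE"
done
```

Compare the MB/s figures that dd reports across filesystems and block sizes; only the oflag=direct numbers are meaningful for LGWR-style I/O.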
 
I find this topic interesting, so I am waiting for your comments on this post.

Thursday, February 11, 2016

12.2 -- Online Patching -- Forward Cross Edition Triggers for "initial loading of data from run to patch edition"

We know that EBS forward crossedition triggers are used for replicating the changes made in the run edition to the patch edition during the online patching cycle.
These triggers take part in DML operations and, by the nature of being triggers, they fire when a new insert or update takes place.
So what about the data stored in the table before these triggers were created? In other words, how is the initial loading of the data done?
There are two answers to this question.
If the patch is designed to update seed data, then the seed data is copied from the run edition to the patch edition using an insert-select.
You can see this if you execute adop phase=prepare and look at adzdshowlog.out:
"ad.plsql.ad_zd_seed.create_sync EVENT Copy Seed Data using insert-select: SOMETABLE"

But if the patch is designed to update an application table definition, then the data in the application table is copied from the run edition to the patch edition by a fake update operation that fires the forward crossedition trigger, which holds the transformation logic.
It is actually exactly what I explained in an earlier blog post:
http://ermanarslan.blogspot.com.tr/2015/07/ebs-122-and-ebr-lets-make-demo.html
An example of the fake update is also there: "update olderman set dummy=dummy;"

Why an insert-select for copying/initially loading the seed data from run to patch edition, but forward crossedition triggers for copying/initially loading the application table data?
The answer is simple: patches update the seed data itself, so no transformation is needed, and a simple insert-select is enough for the initial load.
On the other hand, patches don't change the data in the application tables; they change the application table structure itself (a change in a column type, for example), so a transformation is needed, and the forward crossedition triggers, which carry the transformation logic for that patch, are our friends. That's why they are used for the initial loading of data from the run to the patch edition during an online patching session.

Friday, February 5, 2016

Exadata X5 -- Exadata is now a consolidation environment for non-database and third-party applications

Oracle VM Server is now included in Exadata. We can deploy non-database and third-party applications into the Oracle virtual servers created on top of the OVM Server running on Exadata X5.

Besides, Exadata VMs deliver near raw-hardware performance.

If you are interested, this presentation is a good place to start:
http://www.oracle.com/technetwork/database/availability/exadata-ovm-2795225.pdf