Saturday, July 22, 2017

Book: Practical Oracle E-Business Suite, last package delivered :)

Today's blog post is not technical :)
This post is book-related and it will be short.

As you know, our book, Practical Oracle E-Business Suite, has been out for almost a year.

I really appreciate the feedback and the increasing interest in this book.

I want my followers and my readers to know that, in addition to my desire for writing, this feedback and the recognition are my best motivations for writing.

Actually, they are my new motivations for making continuous contributions to the Oracle community.

After the publication, I had the 10 copies that Apress sent me. As of yesterday, I delivered the last copy I had left.

Once again, many thanks to all the people who, directly or indirectly, consciously or unconsciously, have helped me get to where I am today.

Friday, July 21, 2017

EBS 12.2 - after a fresh install, AppsLogin is not working; adgendbc.sh is failing with java.sql.SQLException: Invalid number format for port number

We encountered this strange problem just after a fresh EBS 12.2 installation.
The HTTP Server check done in the last screen of rapidwiz failed.
The underlying database was a 12.1 RAC, and that's why we first tried to solve it by analyzing the dbc files and the JDBC thin URLs.
We even went into the database and checked the fnd_* tables (fnd_databases, fnd_listener_ports, etc.) to find a clue. We did a full db tier check and ensured that both the local and scan listeners were configured perfectly.
We recreated the topology by running autoconfig, after truncating the fnd_oam_context_files table and the other related tables using fnd_conc_clone.setup_clean.
Nothing that we did fixed the error we were seeing in the apps tier autoconfig executions.
adgendbc.sh was failing with java.sql.SQLException: Invalid number format for port number.

After a long research effort, we concluded that we were facing, in an EBS 12.2 instance, a problem that had been documented for EBS 12.1!

The solution was disabling the Java just-in-time compiler for the EBS database (alter system set JAVA_JIT_ENABLED = FALSE scope = both;).

Here is the MOS document that was written for EBS 12.1 -> Adgendbc Fails With Database Connection Failure (Doc ID 1302708.1)

We saw this issue in an EBS 12.2 instance that was freshly installed on Solaris 11.3.
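For reference, here is the workaround in a copy-pasteable form. This is just a sketch: the statement is meant to be run via sqlplus as SYSDBA against the EBS database, and it is only echoed here so the exact text is easy to review first.

```shell
# The workaround as we applied it. Run the statement as SYSDBA against
# the EBS database, then rerun autoconfig on the apps tier and check
# that adgendbc.sh completes cleanly.
FIX_SQL='alter system set JAVA_JIT_ENABLED = FALSE scope = both;'
echo "$FIX_SQL"
```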

EBS 12.2 -- Roadmap for "Highly available EBS 12.2 installation using a Shared Application Filesystem, Oracle RAC infra and a Load Balancer"

Here is a roadmap, including the required documentation references, that can be used to build the configuration I call the "highly available EBS 12.2 configuration provided by a Shared Application Filesystem, Oracle RAC infra and a Load Balancer".


Actually, I'm currently installing a 4-node EBS 12.2 environment, and in a couple of days I will document it as a whole.
Anyway, I still wanted to share the action plan with you.

Actions:

1) Install Grid 12.1 and build a RAC environment.
2) Install EBS database using rapidwiz. Install it as a RAC database.
3) Install the EBS apps tier on a single apps server (the primary apps server), then upgrade it to the supported release (currently 12.2.6)
4) Export the necessary directories from the primary apps server using NFS
5) Mount these exported directories in the secondary apps server.
6) Follow the standard Oracle Support documents (mainly Sharing The Application Tier File System in Oracle E-Business Suite Release 12.2) and add the secondary apps server to the topology.
7) Follow EBS Load balancer document and enable the load balancer. (Note 1375686.1 - Using Load-Balancers with Oracle E-Business Suite Release 12.2)
8) Do post installation work and tune the configuration (Enable SSL, configure PCP etc..)
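Steps 4 and 5 above can be sketched as follows. The path and hostname here (APPS_BASE, appsnode1) are assumptions for illustration, not values from an actual installation, and the commands are only printed so they can be reviewed before being run as root on the real nodes.

```shell
# Assumed values -- replace with your own apps base path and hostname.
APPS_BASE=/u01/install/APPS
PRIMARY=appsnode1

# On the primary apps server (Solaris share syntax; on Linux you would
# use /etc/exports and exportfs instead):
echo "share -F nfs -o rw,anon=0 $APPS_BASE"

# On the secondary apps server, mount the exported directory at the
# same path, so both nodes see an identical file system layout:
echo "mount -F nfs $PRIMARY:$APPS_BASE $APPS_BASE"
```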

Actions from a different point of view:

1) Install Grid infra 12.1.
2) Install the EBS database tier as a RAC database using startCD 51.
3) Perform a full rman backup.
4) Do the database post-installation work -> Using Oracle 12c Release 1 (12.1) Real Application Clusters with Oracle E-Business Suite Release R12.2 (Doc ID 1626606.1),
"Section 5.2.2 Post Install Steps".
5) Execute rapidwiz on the first (primary) apps node and load the configuration from the db to install the EBS 12.2.0 apps tier.
6) Upgrade EBS to 12.2.6 and apply the translation + localization patches (if required).
7) Follow -> Sharing The Application Tier File System in Oracle E-Business Suite Release 12.2 (Doc ID 1375769.1) to add a secondary application server to the configuration.
(Do the things documented in "Section 3.3 Execute adpreclone Utility on the Run and Patch File System" and afterwards.)
8) Enable the load balancer by following -> Using Load-Balancers with Oracle E-Business Suite Release 12.2.
9) Enable Parallel Concurrent Processing -> Using Oracle 11g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12.2 (Doc ID 1453213.1), Appendix I: Configure Parallel Concurrent Processing. (I'm referencing the 11gR2 document for this step, because enabling PCP is not covered in the document named Using Oracle 12c Release 1 (12.1) Real Application Clusters with Oracle E-Business Suite Release R12.2 (Doc ID 1626606.1).)

Some of the key requirements:

*All database nodes must be at the same OS level (same OS patch level).
*All application nodes must be at the same OS level (same OS patch level).
*NFS must be installed on the apps nodes.
*ssh equivalency must be configured between the apps nodes and between the db nodes.
*Grid 12.2 is not certified with EBS. (The EBS database version delivered with the latest startCD (startCD 51) is 12.1, and RDBMS 12.1 has critical issues with Grid 12.2, so Grid 12.1 should be used.)
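The ssh equivalency requirement above can be sketched like this. The hostnames and OS user are assumptions for illustration (the same loop applies between the db nodes with their own user), and the commands are printed for review rather than executed.

```shell
# Assumed apps node hostnames and OS user -- replace with your own.
NODES="appsnode1 appsnode2"
OSUSER=applmgr

# Generate a key once (if one does not already exist), then push the
# public key to every node, including the local one, so each node can
# ssh to each without a password:
CMDS="ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
$(for node in $NODES; do echo "ssh-copy-id -i ~/.ssh/id_rsa.pub $OSUSER@$node"; done)"
echo "$CMDS"
```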

Things to know for Multi node installation:
  • Rapidwiz no longer supports multi-node apps tier installation.
  • In order to have a multi-node apps tier, we install the apps tier as a single node, upgrade EBS 12.2 to the latest RUP level (currently 12.2.6), and then add the secondary application node using the standard cloning procedure.
  • We use NFS for mounting the APPL_TOP, COMMON_TOP, OracleAS 10.1.2 Oracle Home, Oracle WebLogic Server, and WebTier Oracle Home file systems from the primary application tier to the secondary application node.
  • The shared application tier file system cannot be a read-only file system, unlike in previous releases.
  • For a Solaris 11 installation, a modification to the installation stage files is required -> http://ermanarslan.blogspot.com.tr/2017/07/ebs-1220-installation-on-solaris-511.html
Related documents:
  • Using Oracle 12c Release 1 (12.1) Real Application Clusters with Oracle E-Business Suite Release R12.2 (Doc ID 1626606.1)
  • Note 1375769.1 - Sharing The Application Tier File System in Oracle E-Business Suite Release 12.2
  • Note 1375686.1 - Using Load-Balancers with Oracle E-Business Suite Release 12.2

Wednesday, July 19, 2017

EBS -- 12.2.0 Installation on SOLARIS 5.11 / make: Failed linking targets / "modifying the stage files" (continued)

My followers should remember this.
I already wrote an article about the failed make commands that you may see during the installation of EBS 12.2 on the Solaris 5.11 operating system.

Here is the url of the relevant blog post: http://ermanarslan.blogspot.com.tr/2017/06/ebs-1220-installation-on-solaris-511.html

As you will see when you read that blog post, I recommended a manual modification to the installation stage files.

I tested and verified that solution before recommending it to you.

However, there was a question in my mind: we normally shouldn't modify anything, right? (This is a certified environment.)

Anyway, I'm writing this blog post to tell you that this kind of modification can also be recommended by Oracle Support.

I mean, I didn't stop chasing this problem; I created an SR, followed it, and finally got the same recommendation from Oracle.

Oracle Support recommended almost the same thing that I recommended earlier.

Note that there is currently no fix for this.

The workaround is ->

Locate the ins_reports.mk file in your stage directory. It is inside:
EBSInstallMedia/AS10.1.2/Disk1/appsts/stage/tools34_reports.zip

Unzip it and you will find the file reports/lib32/ins_reports.mk.

Modify that file to include LD_OPTIONS for every relink/compile of every executable.
Rename your old tools34_reports.zip (rename it as tools34_old.zip).
After that, zip the contents that you unzipped earlier under the name tools34_reports.zip.
At this point, your new tools34_reports.zip file will include the modified ins_reports.mk.
Lastly, re-execute rapidwiz.

Note: if a line already includes LD_OPTIONS, append the new options to the existing setting.

Example 1: 

Before the change: 

$(LIBSRWUSO): 
rm -f rwsutil.o rwspid.o ; \ 
$(AR) x $(LIBSRWU) rwsutil.o rwspid.o ; \ 
(LD_OPTIONS="-z muldefs"; \ 
$(SOSD_REPORTS_LDSHARED) rwsutil.o rwspid.o \ 
-lm $(LIBCLNTSH) $(LLIBTHREAD) $(MOTIFLIBS) $(SYSLIBS) -lc ) 

After the change: 

$(LIBSRWUSO): 
rm -f rwsutil.o rwspid.o ; \ 
$(AR) x $(LIBSRWU) rwsutil.o rwspid.o ; \ 
(LD_OPTIONS="-L/lib -L/lib/sparcv9 -z muldefs"; \ 
$(SOSD_REPORTS_LDSHARED) rwsutil.o rwspid.o \ 
-lm $(LIBCLNTSH) $(LLIBTHREAD) $(MOTIFLIBS) $(SYSLIBS) -lc ) 

Example 2: 

Before: 

$(RRUNM) rwrun${RW_VERSION}x: 
$(LINK) $(JVMLIB) $(RXMARB) $(RUNSTUB) $(LIBSBM) 

After: 
$(RRUNM) rwrun${RW_VERSION}x: 
LD_OPTIONS="-L/lib -L/lib/sparcv9" \ 
$(LINK) $(JVMLIB) $(RXMARB) $(RUNSTUB) $(LIBSBM) 
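If you want to try the edit itself safely before touching the real stage, the change in Example 1 can be reproduced on a throwaway copy of the relevant makefile line. This sketch assumes GNU sed (on Solaris you would use gsed or perl -pi -e instead):

```shell
# Work on a throwaway copy, never on the real ins_reports.mk directly.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
(LD_OPTIONS="-z muldefs"; \
EOF

# Prepend the library search paths to the existing LD_OPTIONS value:
sed -i 's|LD_OPTIONS="|LD_OPTIONS="-L/lib -L/lib/sparcv9 |' "$tmp"
cat "$tmp"
```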

Pretty similar to my solution, right? :)
Anyway, I'm waiting for the next startCD, because it seems the fix for this will be included in it.

Tuesday, July 18, 2017

EBS 12.2 -- Watch out for the Grid version! Don't use 12.2 Grid with EBS! (at least for now...)

Here is a little but important info for you.
If you plan to use 12.2 Grid with the EBS, then you should read this.

You may already know that, EBS installer (rapidwiz) delivers a 12.1.0.2 Oracle Database, when used with the latest startCD (startCD51).

Normally, Grid Infra version can be higher than the RDBMS version. (as long as the RDBMS compatability is set accordingly). We know that..

However, while trying to use an EBS database (RDBMS version 12.1.0.2) with 12.2 Grid Infra, we discovered a project-stopper bug.

Because of this bug, rman, dbca, and every other tool cannot write to the ASM diskgroups.

They all fail with ORA-15040, although all the ASM diskgroups are mounted and all the OS disk permissions are correctly set.

The problem is caused by "Bug 21626377 - 12.2_150812: DBCA FAILS TO CREATE 12102 DB OVER 12.2 GI/ASM".

The solution seems to be applying the latest Database Bundle Patch (12.1.0.2.170117 DB BP or above).

The size of this bundle is almost 1.3 gigabytes, and putting it into the EBS install stage is a big customization for us and, of course, for the project. (We would need to repackage our stage and make a custom stage, because we would need to make rapidwiz install this BP during the EBS installation.)

That's why we decided to reinstall the Grid Infra. Today, we will delete the current 12.2 Grid Infra installation and install a fresh 12.1 Grid Infra.
In short, if you are going to place your EBS database on ASM, or let's say, if you want your EBS database to be RAC, then go for a 12.1 Grid installation. (Do not try to use 12.2 Grid.. at least for now..)

The following table shows the latest EBS/RDBMS/Grid component versions for a trouble-free EBS 12.2 installation.

Component                            Applicable Versions
Oracle E-Business Suite Release 12   12.2.4, 12.2.5, 12.2.6
Oracle Database                      12.1.0.2
Oracle Cluster Ready Services        12.1.0.2

Monday, July 17, 2017

RDBMS -- ORA-00312, ORA-00338, _allow_resetlogs_corruption, RMAN Restore & Recovery

I recently dealt with a database startup problem.
It was critical, because the database was a production database.
All the redologs were erased; actually, they were zeroed.
At first, I thought the issue might have been caused by a wrong duplicate command, such as a duplicate command specified with NOFILENAMECHECK.

INFO: NOFILENAMECHECK prevents RMAN from checking whether the source database datafiles and online redo log files share the same names as the duplicated files. This option is necessary when you are creating a duplicate database on a different host that has the same disk configuration, directory structure, and filenames as the host of the source database. If duplicating a database on the same host as the source database, then make sure that NOFILENAMECHECK is not set.

However, later on, I learned the truth. The issue was caused by a wrong controlfile re-creation operation done by a junior dba.

He was trying to clone a database that was planned to run on the same database server as the source database. Unfortunately, he recreated the controlfile of this cloned environment by pointing it at the redologs of the production environment. So he went too far with this..

When I connected to the production database, I saw the redologs were zeroed.

I tried to validate them using alter system dump logfile '+REDO/redo0x.log' validate; and saw that there were no redo records left in them.

At that point, I realized that we were in a critical situation.

There were no redo records in the redologs, and the database was complaining with ORA-00312: online log x thread x: '+REDO/logx.dbf' and ORA-00338: log X of thread X is more recent than control file.
As a result, the instance was terminated with opiodr aborting process unknown ospid (82519) as a result of ORA-1092.

ORA-00338 normally means -> The control file change sequence number in the log file is greater than the number in the control file. But another potential cause for this error is that the listed redo log is not valid (i.e., contains zeros) -- "actually, this was the case"..

Well... The production database could not be opened, as the recovery was requesting one of the zeroed redologs. (The cloned database had used these redologs and zeroed them; at this point, it was impossible to reuse them with the production database.)

I also saw that, the last redo was lost, but the previous one was archived.

INFO: On a cooked filesystem like ext3/ext4, if you remove the redologs while the database is open, there are still some ways to get the redolog contents. (Considering that linux/unix doesn't delete the file contents while the file is open by some process, you can get the data of those deleted files using lsof and the /proc filesystem.) It seems this is not possible with ASM at all.

Likewise, if your database is closed (closed with shutdown normal, not abort/not crashed) and you delete your redologs (or they are zeroed), then this is not a problem.
However, if the database is open and you shut it down using "shutdown abort", or if the database crashes somehow, then it means you have just lost all your redo.

Well.. The production database, including all its redolog files, was on ASM. So there was no way to get the before image of the redolog files, and I decided to force a startup using _allow_resetlogs_corruption=true and startup force.
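For the record, here is the general shape of that forced-open sequence. This is a hedged sketch, not a recipe: the exact steps depend on the situation, it is a last-resort action with real corruption risk, and the statements are only echoed here for review (they would be run via sqlplus as SYSDBA).

```shell
# Last-resort sequence (run as SYSDBA); echoed only, since this should
# never be executed without understanding the consequences.
FORCE_SQL='alter system set "_allow_resetlogs_corruption"=true scope=spfile;
startup force mount;
alter database open resetlogs;'
echo "$FORCE_SQL"
```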

Well, after this forced startup, the database opened. EBS services started without errors and no problems were encountered, but as recommended by Oracle Support, we needed to rebuild the database after opening it with this kind of method. Rebuild means doing the following: (1) perform a full-database export, (2) create a brand new and separate database, and finally (3) import the recent export dump. When the database is opened, the data will be at the same point in time as the datafiles used.

Then I thought: "even if we do a full export and import and become stable, we still lost some data. We forced the startup, so we didn't apply the redo records.. (The redologs were already zeroed anyway.)"

So, at that time, I also realized that even if we rebuilt the database at this stage, we would never be sure about its stability. The full export itself might encounter errors as well..

At the end of the day; the best option that came to my mind was restoring and recovering the database.

We had the backups (both full and incremental) + we had the backup of the archivelogs + we knew the log sequence number when the instance terminated.

So I told myself, "why don't we restore and recover it? The database is now open, but it is not stable.."

Anyway, rman is intelligent enough to use incremental backups during recover operations (if they are available and relevant). Of course, rman applies the archivelogs automatically after restoring the database and rolling it forward with the level 1 incremental backups.

We just issued a simple run {} block like the one below and waited.

RUN
{
SET UNTIL SEQUENCE 12538;
RESTORE DATABASE;
RECOVER DATABASE;
}

It was a Friday night, and we restored and recovered an EBS database. We opened it with a minimum of data loss, and luckily that data could be recreated by the business & application guys.

At the end of the day, the lesson learned here was -> "do not place the production and clone environments on the same host".

However, the biggest lesson was "work on the production server only if you know what you are doing" and/or "do not work on production when you lose your focus".

Friday, July 14, 2017

About Erman Arslan's Oracle Blog Facebook Page

Today, I have created a Facebook page for this blog.
Thanks to the followers who liked and started following it.
Until today, I was sharing my blog posts in various Facebook group pages manually.
From now on, everything that I'll write here will be reflected to the Facebook page of this blog automatically. (Including this blog post :)


Here is the Facebook page url : https://www.facebook.com/ermanarslansoracleblog/
I would appreciate it if you followed this Facebook page as well :)
As it's sometimes easier to use Facebook for checking the news, it may also be easier to use Facebook for glancing at the blog posts.

Friday, July 7, 2017

ODA- KVM Virtualization for ODA X6-2S/X6-2M/X6-2L !!

Good news! This is my 600th blog post :)

What is better than that? Let me tell you;

With the support of KVM, Oracle has added virtualization functionality to the ODA X6-2S/X6-2M/X6-2L models. (Before this, we needed to have an ODA X6-2HA to make use of the virtualization capabilities of ODA.)

Oracle recently announced that, from now on, we can use virtual machines on the ODA X6-2S, ODA X6-2M and ODA X6-2L. This means ODA X6-2 S/M/L environments can now be considered Solution-in-a-box environments! This means "applications and databases all in one box".


The virtualization technology that we will use with these machines is Linux KVM (Kernel-based Virtual Machine)

This new virtualization option comes with the new ODA release, 12.1.2.11.

The ODA 12.1.2.11 release is now available, and it promises the following new things for the ODA X6-2S/M/L:
  • Support for Unbreakable Enterprise Kernel Release 4 (UEK R4) for Oracle Linux.
  • Oracle Database Bundle Patch 12.1.0.2.170418 and 11.2.0.4.170418
  • Support for Oracle KVM virtualization for Linux applications, enabling you to create isolation between your database and applications. Running an Oracle database in a guest VM in a KVM environment is not supported.
So it seems that, from now on, we will reimage our ODA X6-2 machines with the 12.1.2.11 ISO images and install the ODA 12.1.2.11 release on top of them to have an ODA X6 ready for KVM-based virtualization.

Patch 23530609: ORACLE DATABASE APPLIANCE X6-2 S and X6-2 M 12.1.2.11.0 OS ISO IMAGE

Patch 26080577: ORACLE DATABASE APPLIANCE X6-2 S AND X6- M 12.1.2.11.0 PATCH BUNDLE DOWNLOAD

Currently, there are no instructions for creating KVM-based virtual machines on Oracle Support; however, you can find some blog posts on the Oracle Database Appliance blog.

https://blogs.oracle.com/oda/

Things like KVM Import an OVA Template, KVM Deploying Guest VMs with ISO, KVM Networking on ODA (Oracle Database Appliance) and Enabling KVM on ODA are already explained on the Oracle Database Appliance blog.

The following restrictions, however, should be noted as well:
  • Only Linux is supported as the OS of the guest VMs.
  • Installing an Oracle database on the guest VMs is not supported.
  • There is no capacity-on-demand for databases or applications running in the guest VMs.

Tuesday, July 4, 2017

Exadata-- Initial Deployment , OEDA and checkip script

These days, we are migrating several EBS instances to Exadata.
We (as apps and core DBAs) are involved in these works, from the deployment to the end.


We are not cabling the Exadata, but we are usually there to check and to give the inputs.
In Exadata implementation and migration projects, everything starts with the initial deployment.
I mean the deployment of Exadata itself.
The deployment of Exadata is usually straightforward, and the process we follow during it makes us feel pretty professional.
There are two tools that we use for the initial deployment of Exadata.

The first one is "OEDA"( Oracle Exadata Deployment Assistant) and the second one is the "checkip script" that is generated by OEDA.


Using OEDA, we send Oracle almost all the inputs that are needed for the deployment.
Things like our scan name, our IP addresses, our DNS IPs, ASM diskgroup names and everything...
OEDA replaces the manual configuration forms that we used in the past for deploying the older versions of Exadata.

OEDA is a tool that can be used even on our Windows clients.
It is an easy-to-use tool, which is fully documented in the Oracle Exadata Database Machine Installation and Configuration Guide (https://docs.oracle.com/cd/E80920_01/DBMIN/exadata-deployment-assistant.htm).


After we give all the necessary inputs, OEDA creates the configuration files that will be used by the Oracle field engineer during the deployment..

All the configuration files are created under the folder named "ExadataConfigurations".

After we run OEDA, we continue by executing the checkip script.
The checkip script can be found in the ExadataConfigurations folder that is created by OEDA during its run.

Checkip is the tool for ensuring that all the IPs given while running OEDA are available, and that all the DNS entries and relevant stuff like that are already configured in the client/customer environment. (The checkip script can be run on Windows as well..)

Note that checkip uses the JRE that is deployed by OEDA! So, if you are planning to execute the checkip script directly from the machine where you also executed OEDA, this is not a problem.
But if you are planning to execute the checkip script from another machine, then you need to download OEDA to that machine as well.. (as the JRE that checkip is designed to use comes with OEDA)

The following DNS entries must be configured before running the tool:

DNS entry for Management/Admin network
DNS entry for ILOM Network
DNS entry for  Public/Client network
DNS entry for  VIP network
DNS entry for  SCAN IPs
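A quick way to pre-check those entries is a resolution loop like the one below. The hostnames and domain are made up for illustration; substitute the names from your own OEDA configuration. The commands are printed here rather than executed:

```shell
# Assumed hostnames -- one per required DNS entry (admin, ILOM, client,
# VIP, SCAN); replace with the names from your OEDA configuration.
HOSTS="exa01-adm exa01-ilom exa01-client exa01-vip exa01-scan"
DOMAIN=example.com

# Build one lookup command per entry, then review/run them by hand:
CHECKS=$(for h in $HOSTS; do echo "nslookup $h.$DOMAIN"; done)
echo "$CHECKS"
```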

So, it is like a tool to crosscheck the inputs that were given in OEDA.
The checkip script produces an output file when we execute it.
In this output file, we need to see the prefix GOOD for every check, and we need to see the success message at the end of that output file;

SUCCESS: 

 Successfully completed execution of step Validate Configuration File [elapsed Time [Elapsed = 95573 mS [1.0 minutes] Tue Jul 04 09:51:26 EEST 2017]]

At the end of the day, we send the output of the checkip script and the template files that are created under the ExadataConfigurations folder to Oracle, and wait for the deployment date.

So, in summary, there are 3+1 steps:

1. The customer fills in the OEDA configuration.

2. The customer runs the checkip script generated by the OEDA utility.

3. The customer sends Oracle the OEDA configuration files and the checkip script output for validation.

4. Once the configuration files have been validated and the checkip script output is found to be ready, Oracle schedules the HW and SW engineers' visits. (This is done by Oracle.)

One more thing;

In addition to the outputs of these 2 tools, there is one more file that is sent to Oracle for the Exadata deployment. It is named the Exadata Logistics template deployment form, and it is usually filled in easily.

In the Exadata Logistics template deployment form, we send information like the company name, work location, dress code, closest hotel, VPN access (if available) and the necessary contacts to Oracle.

Well.. This is all we need to do, as customer-site dbas and consultants, for the initial deployment of Exadata.

The real excitement, however, begins once the machine is deployed.

Once this instrument (Exadata) is deployed, we need to play it, and we need to play it well.
(The important thing is not the words, but the actions :) )