Saturday, May 16, 2020

OBIEE - Exadata - GRID -- Two tips on OBIEE Patching and Grid Patching/Upgrade

In this post, I am combining two different topics, so that it can reach multiple readers at once :)

Let's start with our first tip on our first topic;

Well, firstly I want to give some tips for OBIEE users. It is actually about a lesson learned during OBIEE patching. As you already know, from time to time we need to patch our OBIEE environments in order to fix vulnerabilities. In this context, we usually patch the Weblogic application servers and OBIEE itself. Sometimes we also upgrade our Java in order to fix the Java-related vulnerabilities.
The tip that I want to give you is actually a solution for an undocumented problem.

This issue arises after applying the bundle patch of OBIEE. (OBI BUNDLE PATCH 12.2.1.3.200414)

The problem causes the Admin Tool to crash while opening offline rpds.

That is, you just can't open offline rpds after applying that bundle patch, even if your Admin Tool version is the same as your OBIEE version.


The problem is in SOARPGateway.h line 880, and it causes an internal error for nQS.
The error is nQSError: 46008. Well, the solution is actually simple;

It is done in 4 steps ->

1) Login to OBIEE and access the home page.
2) Use the relevant option to download the Oracle BI Administration Tool -- download the Admin Tool according to your bundle patch (in this case, Admin Tool 12.2.1.3.200414).
3) Uninstall the Admin Tool that is crashing/causing the issue.
4) Install the new Admin Tool software downloaded in step 2.
That's it. This is not documented in the readme.. The interoperability between OBIEE and its client tools is actually very important.
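Before downloading, you can double-check the exact bundle patch level on the server side with opatch. Here is a minimal sketch (the ORACLE_HOME path below is just an example; adjust it to your own environment):

# Assumed OBIEE home path -- change it to match your installation
export ORACLE_HOME=/u01/app/oracle/product/obiee12c

# List the patches applied to the OBIEE Oracle Home;
# the OBI bundle patch (e.g. 12.2.1.3.200414) should appear in the output
$ORACLE_HOME/OPatch/opatch lsinventory | grep -i patch

The Admin Tool version you install should match this bundle patch level.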

If we are patching OBIEE, we always need to think about the client-side applications of OBIEE as well..
This is the lesson learned for this topic.

Let's continue with our second tip;

This is for GRID and/or Exadata users.. It is based on a true story :)


That is; while upgrading the GRID version from 12.1 to 12.2, we faced a problem..
Note that the platform was an Exadata..

Actually, several Perl scripts and Perl modules are used behind the scenes of such an upgrade, but the issue that I want to share with you arises in the preupgrade phase..

During the preupgrade, we faced an issue similar to the following;

CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2020/05/15 19:06:34 CLSRSC-33: The OLR from version '12.1.0.2.0' is missing or unusable for upgrade
Died at /crsupgrade.pm line 2906.
The command 'perl -I/perl/lib -I/ grid/crs/install /u01/app/12.2.0.1/grid/crs/install/rootcrs.pl  -upgrade' execution failed

When we analyzed the problem, we saw that the OLR (Oracle Local Registry) and all the OLR backups were missing. The $GRID_HOME/cdata folder, where the OLR files reside, was completely empty.

Note that the OLR is a registry located on each node in a cluster, and it contains information specific to that node. It contains manageability information about Oracle Clusterware, including dependencies between various services. Oracle High Availability Services uses this information.

No OLR, no backup of OLR!
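By the way, the state of the OLR and its backups can be checked quickly on a node with ocrcheck and ocrconfig. A minimal sketch (run as root; the Grid home path below is just an example):

# Assumed 12.1 Grid home -- adjust to your environment
export GRID_HOME=/u01/app/12.1.0.2/grid

# Show the OLR location and integrity status
$GRID_HOME/bin/ocrcheck -local

# List the OLR backups known to Clusterware
$GRID_HOME/bin/ocrconfig -local -showbackup

# The OLR and its default backups normally reside under $GRID_HOME/cdata
ls -l $GRID_HOME/cdata

In our case, the cdata directory was completely empty, so there was nothing to restore from.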

Anyways, I analyzed the Perl code and didn't see any issue that could cause this problem..
I couldn't see anything that might delete the contents of cdata, but I was almost sure that the contents were deleted by the upgrade/preupgrade or something else that takes place along the way.

There was no way to get the deleted OLR and its backups back..
The only solution was recreating the OLR using the following approach, and we did that! (See the command sketch right after the steps.)

1) Shut down the CRS on all nodes
2) <GRID_HOME>/crs/install/rootcrs.pl -deconfig -force
3) <GRID_HOME>/root.sh
4) Check the system with runcluvfy
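Here is a minimal command-level sketch of this flow, assuming the old 12.1 Grid home path shown below (run as root; adjust paths and node names to your own environment):

# Assumed 12.1 Grid home -- adjust to your environment
export GRID_HOME=/u01/app/12.1.0.2/grid

# 1) Stop the CRS stack (repeat on every node)
$GRID_HOME/bin/crsctl stop crs

# 2) Deconfigure Clusterware on the node
perl $GRID_HOME/crs/install/rootcrs.pl -deconfig -force

# 3) Reconfigure Clusterware; this recreates the OLR under $GRID_HOME/cdata
$GRID_HOME/root.sh

# 4) Verify the cluster afterwards (node names are just examples);
#    runcluvfy.sh from the installation media can be used instead of cluvfy
$GRID_HOME/bin/cluvfy stage -post crsinst -n node1,node2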

After these actions, the OLR was recreated and we could upgrade the GRID without any problem..

This was a strange issue, and it is still under investigation.. What was the bug that caused this to happen? Why did we fail on our first try, and why was the second upgrade attempt successful?
Deletion of the OLR and its backups may be caused by the "localconfig delete" or "rootdeinstall.sh" scripts, but why would a GRID upgrade do something like this?

Well, I will continue to investigate this and write again when I get more info about it..

The lessons learned are;

Always back up your OLR before GRID upgrades and GRID patching..
There is a possibility that your OLR may be corrupted/lost during your patching activities.

Don't put your OLR backups into the same directory as your OLR files.. I mean, always keep some additional OLR backups outside the cdata directory.
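A minimal sketch for taking such backups before a GRID upgrade or patch, assuming the paths below (run as root on every node; /backup and the file names are just examples):

# Assumed 12.1 Grid home -- adjust to your environment
export GRID_HOME=/u01/app/12.1.0.2/grid

# Take a manual OLR backup (written under $GRID_HOME/cdata/<hostname> by default)
$GRID_HOME/bin/ocrconfig -local -manualbackup

# Export the OLR to a location outside the Grid home as an extra safety net
$GRID_HOME/bin/ocrconfig -local -export /backup/olr_$(hostname)_before_upgrade.olr

# List the known OLR backups and copy the latest one out of the cdata directory
# (check the exact path/name reported by -showbackup)
$GRID_HOME/bin/ocrconfig -local -showbackup
cp -p $GRID_HOME/cdata/$(hostname)/backup_*.olr /backup/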

Finally, the approach of recreating the OLR by executing rootcrs.pl -deconfig and root.sh works :)

See you in my next blog post, which will be about the Linux Kernel Entropy...

If you have a question, please don't ask it in the comments..

For your questions, please create an issue in my forum.

Forum Link: http://ermanarslan.blogspot.com.tr/p/forum.html

Register and create an issue in the related category.
I will support you from there.