Monday, June 15, 2020

ODA -- "Patch your ODA with ODA Patch Bundle" - Stay away from PSUs, CPUs and/or other patches

I want to inform you about an important subject: patching Oracle Database Appliance.
You may already know this, but in some cases you may be tempted to take the risk and apply one-off patches, PSUs or CPUs on your Oracle Database Appliance environments.


Firstly, I want you to know that this is not a good idea!

Only in special circumstances may you consider applying one-offs on ODA, and even then you still need to get the approval of Oracle Support by creating an SR.

But generally no PSU, no CPU!

Only the ODA Patch bundle...  The one-button patch specifically designed to upgrade Oracle Database Appliance firmware, OS, Grid Infrastructure, and Database PSUs.

Do you want to upgrade your GRID PSU or Database PSU on ODA? Then find an ODA Patch Bundle that delivers those PSUs and go on with that.

For instance;

If you have the Oracle Database Appliance 12.1.2.10.0 release, then we can say that your Oracle Database version is 12.1.0.2 or 11.2.0.4. Moreover, we can also say that your PSU level is 12.1.0.2.170117 (PSU) or 11.2.0.4.161018.

Well, if you want to upgrade your PSU version, then you should upgrade your ODA release to a newer ODA release, such as 12.1.2.12.0.

12.1.2.12 delivers Oracle Database Bundle Patch (BP) 12.1.0.2.170814 and Oracle Database Patch Set Update (PSU) 11.2.0.4.170814.
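For the record: before and after applying a bundle, you can list the currently installed component versions with "oakcli show version -detail" on oakcli-based ODA releases like this one (on newer, odacli-based deployments, "odacli describe-component" serves the same purpose). Take these exact commands as a pointer rather than gospel, and verify them against the documentation of your own ODA release.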

By doing such an upgrade, you also get an upgrade in the other layers and components, such as the OS, kernel, BIOS, etc. (These are not only nice to have, but also required... Keep that in mind.)

Check the documentation for ODA software releases and all the related details (just select your ODA release from the drop-down menu and check its documentation).


Also, we now have the Oracle Support patch tab to find our patches for ODA.
Earlier, it was not available.
Well, I have dealt with this machine almost since its birth, but then I took a break. So maybe it just wasn't available some years ago.



So let me tell you a quick real story about the effect of applying PSUs on ODA directly.
Yes, I have seen this myself.

I have seen a really bad effect of applying GRID and Database PSUs to an ODA X6-2 HA.
After patching GRID and RDBMS with traditional PSUs, at first everything seemed okay.
However, the nightmare started when the database was opened. Interestingly, one of the nodes rebooted itself when the database was opened. After the reboot, the issue continued. It was like an infinite power cycle!
 
Very interesting, right? No clues on the OS side, no clues in the CRS, ASM or ACFS logs. No errors in the agent logs, nothing in the database. The node was rebooting itself directly when the database was started.

The environment was a standard one: a Bare Metal ODA X6-2 HA. It was ACFS-based. So, if ACFS is present in the environment, the situation is even more dramatic; that is, the likelihood of facing this kind of issue becomes even higher.

I still think that the reboot problem was related to ACFS. I mean, every time the database was opened and did some work on ACFS, the OS crashed and we saw a reboot (without any clues, believe me...).

The only thing that I could see was in the "last" command output: it was a crash. Probably it was caused by a failing ACFS kernel module (a fault in a kernel module may bring the whole system down, and may result in a direct crash just like the one I faced).

That failure was probably an unexpected one, because even the dmesg output was clean. Normally, just a simple printk(KERN_ALERT ...) would do the work of informing us. So this must have been an unexpected one.

Of course, we could have reproduced it and traced it at the OS level, but we didn't have that amount of time.

So, as suggested by the title of this blog post: go on with the ODA Patch Bundle. Stay away from PSUs...

Read the following MOS notes for more ->
  • ODA Support Guidelines for Using Existing Interim Patches or Requesting New Interim/Merge Patch Requests (Doc ID 2081794.1)
  • Oracle Database Appliance FAQ (Doc ID 1463638.1)
  • ODA Patching FAQ : 18.3 and Lower (Doc ID 1546944.1)

RDBMS -- About Oracle Market-Driven Support for 11.2.0.4

Extended Support for Oracle Database 11.2.0.4 is planned to end. That is, at the end of 2020, there will be no Extended Support for 11.2.0.4. With this in mind, 11.2.0.4 customers were worried. However, Oracle announced Market-Driven Support for 11.2.0.4, and 11.2.0.4 customers seemed relieved...

I also informed my followers about this earlier, in my last 2 webinars.

http://ermanarslan.blogspot.com/2020/04/ebs-oracle-ebs-19c-upgrade-webinar.html
http://ermanarslan.blogspot.com/2020/04/rdbms-19c-upgrade-webinar-presentation.html

In this post, I want to give some more info about this Market-Driven Support. Let's move on and try to find answers to some confusing questions.

11.2.0.4 customers have risks -> As of 1 January 2021 (when Extended Support ends), they will have NO access to new bug fixes, new security updates, or other critical-issue patches.

Mission-critical systems will face operational and security risks. Sustaining Support doesn't generally address newly discovered defects or vulnerabilities. This means no new updates, patches, code fixes or security updates.

This is huge, right? We still have lots of customers using 11.2.0.4 databases, and they are unfortunately not ready to upgrade at the moment.

Fortunately, Oracle gave a helping hand to these customers.

This helping hand is the Market-Driven Support offering.

Market-Driven Support is for 11.2.0.4 customers only.

Customers who purchase Market-Driven Support should also have the latest PSU/BP applied to their 11.2.0.4 databases (this is recommended).

This type of support is available for 2 periods: 1 Jan 2021 to 31 Dec 2021, and 1 Jan 2022 to 31 Dec 2022. So, it's available from the end of Extended Support (Jan 2021) until Dec 2022.

Some valuable Oracle ACS (Advanced Customer Services) services are also included in this offer.

Let's see the key service components included in Market-Driven Support ->

  • Severity 1 fixes or workarounds for newly discovered Severity 1 problems (for PROD environments)
  • Critical security updates to address potential vulnerabilities and reduce downtime risk. Oracle will manage the scheduling and decide the contents of these security updates. It is important to note that these security updates will not include updates for embedded Java/JDK functionality. Cryptography-related updates or patches are not included either, and standard SPUs are not included. So this means limited security patches and updates.
  • One Database Upgrade Planning Workshop to assist customers in developing their upgrade plans. Oracle ACS will provide this workshop, which is aimed at helping customers upgrade their databases to a fully supported release.
  • A Technical Account Manager (TAM) as a single point of contact.
So far so good.. 

Well, there is an important point I would like to underline. Market-Driven Support is not an extension of Extended Support! It is not Sustaining Support either. It is somewhere in between :)

Check Oracle's website for the details of the Extended Support offering.


There you will see a long list of benefits, and you will also see Security Alerts and updates without any exceptions. In addition to that, you can create SRs with any severity level.

Market-Driven Support is a completely separate support type. So, the customers purchasing this support should still plan their upgrade or cloud transition projects ASAP to avoid future risks.

I'm not an Oracle salesman, but the pricing seems fixed.
It just depends on how many production databases you have.

The prices vary according to the number of databases you have: (1) up to 50 databases, (2) 51-500 databases, (3) 501+ databases.

Well, that's it :)

I wish you a healthy and beautiful week as I finish my writing...

Monday, June 8, 2020

RDBMS -- SQL Performance Tuning - Correcting SQL Plans by Setting Hidden Parameters at Session Level & Fixing Plans via SQL Profiles

We recently dealt with a SQL performance problem. Actually, the problem itself wasn't so interesting. The solution, however, was stylish :)

At one of our customer sites, we encountered a performance degradation in an important SQL.
The issue arose after we upgraded the database to 18C. (Unfortunately, the problem was not noticed in the performance-test phase.)

The query was trying to fetch rows from all_objects- and all_synonyms-type views, and the execution plan was not good.

We had already seen these kinds of performance problems and remembered the workaround.
That is, the workaround ALTER SYSTEM SET "_fix_control" = '8560951:on' had saved our day previously.

Reference: Accessing ALL_OBJECTS View Performs Slowly, Relative to Response Time on Another Database (Doc ID 2061736.1)


However, this time we couldn't directly set that fix_control parameter at the system level. We were already on production, so setting an underscore parameter without testing it was too risky.

The interesting part starts here :)

Look what we have done; 
  • We set the parameter at session level -> ALTER SESSION SET "_fix_control" = '8560951:on'
  • We then ran the exact same query again (in order to make the Oracle optimizer build the desired/correct plan) -- Attention -> We executed the exact same query text, as we didn't want a new sql_id to be generated during our execution.
  • Then we checked the sql_ids and the associated plans for the problematic SQL text (we checked the average estimated seconds and the plan hash values; the new execution plan was the quickest).
  • Well, we fixed the plan for that sql_id :) After fixing the plan, our SQL started to run faster!
  • Lastly, we got the SQL optimizer trace and saw that the hidden parameter was really active for the SQL.
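Here is a minimal sketch of the statements involved. The sql_id below is the one from our case (taken from the trace further down); yours will differ. coe_xfr_sql_profile.sql is the SQLT-provided script (see MOS Doc ID 215187.1) that creates a SQL Profile from a known plan; the profile name in our trace (coe_...) shows that this is the route we took:

ALTER SESSION SET "_fix_control" = '8560951:on';

-- re-run the EXACT same query text here, so the same sql_id gets a new child cursor

SELECT sql_id, child_number, plan_hash_value,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 2) AS avg_secs
FROM   v$sql
WHERE  sql_id = '43xdnx0r5dhmx';

-- pin the fast plan for that sql_id with a SQL Profile, for example:
-- @coe_xfr_sql_profile.sql 43xdnx0r5dhmx <plan_hash_value>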

SQL DIAG TRACE:

begin
  dbms_sqldiag.dump_trace(
    p_sql_id       => '43xdnx0r5dhmx',
    p_child_number => 0,
    p_component    => 'Compiler',
    p_file_id      => 'Compiler_Trace_43xdnx0r5dhmx');
end;
/
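A note on where the output lands: the dump should be written to the database's diagnostic trace directory (the "Diag Trace" path reported by v$diag_info), with the p_file_id value embedded in the trace file name.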

A PIECE OF CONTENT OF THE TRACE FILE:

Content of other_xml column
===========================
  db_version     : 18.0.0.0
  parse_schema   : ERMANRPT
  plan_hash_full : 1560297160
  plan_hash      : 533470668
  plan_hash_2    : 1023844655
  sql_profile    : coe_43xdnx0r5dhmx_2690840819
  Outline Data:
  /*+
    BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('18.1.0')
      DB_VERSION('18.1.0')
      OPT_PARAM('query_rewrite_enabled' 'false')
      OPT_PARAM('_optim_peek_user_binds' 'false')
      OPT_PARAM('_fix_control' '8560951:1') --> YES IT IS HERE! :)
      ALL_ROWS

Sunday, May 31, 2020

Entropy, Linux Kernel - CSPRNGs, /dev/urandom vs /dev/random and all that

These days, when we stay at home due to the pandemic, I've finally found some time to spend on topics that interest me. Thanks to the necessity of being at home, especially on weekends, I did some research on various subjects related to science and its history.
Maybe you've already felt this coming... The subject of this blog post is a little different compared to the previous ones.

This time, I'm here to write about entropy.

To start with, I revisited the basic definition in thermodynamics. I started with the second law, which states that the entropy of the entire universe, as an isolated system, will always increase over time.

Nevertheless, I wanted to keep up with the IT world.
So I steered my interest towards information theory. After all, the term entropy is also used in information theory. While doing my research in this area, I turned the steering wheel a little more and ended up with Linux kernel entropy, which we generally come across while dealing with CSPRNGs (Cryptographically Secure Pseudorandom Number Generators).
We think about CSPRNGs when getting random numbers from /dev/random and /dev/urandom on Linux operating systems. So, when we take a little deep dive, we see a thing called the entropy pool there. Anyways, I think you get the idea; these are all connected.

Actually, these are very deep and huge subjects, but I will keep this as compact as possible.
I don't find long blog posts very practical :)

In this post, we will take a look at information theory, make some introduction to entropy and, lastly, check the Linux side of this subject.

Information theory started with Claude Shannon, a mathematician, electrical engineer and cryptographer. The first article in this area was "A Mathematical Theory of Communication", written by Shannon in 1948.


The goal of this discipline was to efficiently and reliably transmit a message from a sender to a recipient. As we all know, these messages are composed of bits, either 1 or 0. This topic focuses on the relationship between the bits used in the messages and the bits representing the information itself. Some kind of optimization, actually.
That is, when we communicate, we want useful information to get through. The more useful information, the better.
So, according to information theory, when we send a message with useful information, we actually reduce the recipient's uncertainty about the related subject.
According to Shannon, when we send one bit of useful information, we reduce the recipient's uncertainty by a factor of 2 (provided that we work with logarithmic base 2).

The term entropy is used as a measure of how uncertain things are.

Well, let's do a thought experiment to understand it better.

Suppose, we decided to flip a fair coin.


At t0, we don't know the outcome; we don't have any information about it. Heads or tails?
(We may only know the previous throw's outcome.) So, a 50% chance for heads and a 50% chance for tails. Just suppose heads is 1 and tails is 0, so we are using one bit for each. Our message length is actually 1 bit here.

At t1, we throw the coin, and once we see the output, we know whether it is heads or tails, 1 or 0. So now we know.
At t0 there were 2 equally likely options, and now at t1 there is 1. So our uncertainty is reduced by a factor of 2. The information entropy here is actually equal to 1, as it is based on the base-2 logarithm of the number of possible outcomes. The base-2 logarithm of 2 is 1. So the result of the logarithm (1) tells us how uncertain this event is.

Or, if we stick to the formula:
S = -log2(0.50), and since log2(0.50) = -1,
the entropy S is again 1.

Of course, I have shortened the calculation, but here is what happens in detail (using the formula):
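For our fair coin, with two equally likely outcomes of probability 0.5 each:

$$H = -\sum_i p_i \log_2 p_i = -(0.5 \log_2 0.5 + 0.5 \log_2 0.5) = 1 \text{ bit}$$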


One more thing about t1 -> at t1, we received a single bit of useful information. We actually received 1 bit in this example (as we said, 1 or 0), but we could have received 5 bytes as well.
I mean, suppose we received the outcome as the string "Tails", as 5 bytes (40 bits), rather than as a single bit. Even in that case, we would still receive only 1 bit of useful information.

Of course, if we throw a fair dice, the entropy is different from the one in our coin flipping example.


In this case we have 6 equally likely outcomes/possibilities. The uncertainty reduction factor is 6 here. So the base-2 logarithm of 6 gives us the entropy: it is 2.5849625007.

Extra note: a fair dice should have sharp corners, not rounded ones. This is good for fairness; we want the dice to move in a chaotic manner when we throw it.

Things get interesting when the probabilities of the outcomes are not equally likely.
Actually, the same formula that I mentioned earlier applies even in this case.
Remember the sum in that formula, the symbol Σ (sigma).
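Written out, for n outcomes with probabilities p_1, ..., p_n:

$$H = -\sum_{i=1}^{n} p_i \log_2 p_i$$

For example, for an unfair coin that lands tails 90% of the time, H = -(0.9 log2 0.9 + 0.1 log2 0.1) ≈ 0.47 bits -- much less than the fair coin's 1 bit, because the outcome is more predictable.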

Cross-entropy is also an interesting topic, and it is widely used in machine learning as a cost function. However, I will not go into those details in this blog post.
These topics may be the motivation of another blog post in the future.

Well, let's continue with the CSPRNG and Linux kernel perspectives.

In computer security and in the Linux world, random numbers are important. The generators that generate those random numbers are also very important. No one should be able to predict the next number that will be generated by the generator.

In Linux, as in all other operating systems and security implementations, we want them to be unpredictable.


As you may guess, the unpredictability in this case is also quantified in a measure called entropy.

Remember our coin flipping example? It had 1 bit of entropy. There was no predictability in the coin's output. If the coin were unfair (let's suppose it always lands tails), the outcome of flipping that coin would have less entropy (zero entropy, actually). So it could be guessed with certainty.

In short, in Linux we want that uncertainty in those random numbers, in those random bytes.
This is because a lot of security protocols and systems depend on those random bytes.
For instance, encryption keys: they are based on those random bytes, and they should be really hard to predict.

SSL/TLS and SSH keys are other examples that use this randomness.

We have a challenge here. Well, we are using computers to generate those random bytes, right?
A computer is a deterministic machine. So what will a computer use to supply this randomness? How can an OS (Linux in this case) generate unpredictable random bytes?

The answer is short: our physical world. In our world, things/events do not always happen in the same way. So, by feeding itself with these random things/events, an OS like Linux can have a source for creating this randomness.

Linux does this by using events, the interrupts. A mouse move, a keyboard click, a driver interaction or an I/O operation has attributes like mouse position or I/O time. They differ, right?

They are random. Consider I/O attributes: an I/O that takes 2 seconds to finish on one system may take 5 seconds to finish on another. The kernel has the right to get this info from those interrupts. So Linux uses these attributes to build a pool called the entropy pool, and uses it as a source while generating unpredictable random numbers.


Of course, having the info about those interrupts is not enough for generating the good random bytes that we need. For instance, the elapsed time of an I/O may differ. It may differ according to the disk speed, the position of the requested block and so on. But how different can it be?

So, those interrupts provide a source for generating some unpredictability. They have some entropy, but we actually need more. They can't directly satisfy our random byte needs, and they are not uniform either.

We need to increase the unpredictability and at this point, we have the CSPRNGs as the solution.

CSPRNG stands for Cryptographically Secure Pseudo Random Number Generator.


This tool takes an input and generates random, uniform bytes. The generated bytes depend only on the input.

The values of those random events that I just mentioned are the input of the CSPRNG. There are several types of CSPRNGs, but I will concentrate on hash-based CSPRNGs to keep this subject easy and clear. That is, CSPRNGs which are basically based on hash functions.

Remember, the output of a hash function is uniform, and it is practically impossible to reverse a hash function (you can't know the input just by looking at the output). A hash function takes an input of any length and generates a fixed amount of output.

So what happens is:

We have a pool, and suppose it is filled with zeros at the start.
When an event happens, it is serialized and hashed together with the things that are already in the pool.
When a new event happens, we repeat: the new event is serialized and hashed with the things that are in the pool at that time (and this time the pool doesn't contain only zeros).
This serialization and hashing is repeated every time a new event happens.
The thing that mixes the events into the pool is called the mixing (stirring) function. It is needless to say that this function should be very fast.

As you may guess, the contents of the pool are mixed again and again as new events happen. The entropy increases and the predictability decreases. An entropy pool is born! :)

The predictability is pretty low. Think about it: in order to predict the next random number, or the current contents of the entropy pool, you would need information about all the events that have ever been mixed into the pool. So the value generated from this thing is unpredictable.

Well, let's look at this subject from the OS perspective.
We have the CSPRNG in the Linux kernel :) It maintains the entropy pools and executes the related mechanism for generating random bytes.

The kernel has access to all those events/interrupts, and it runs the CSPRNG when we need random bytes. Having this kind of mechanism inside the kernel also provides centralization and security (much better protection for the pool than user-space applications could provide).


In Linux, we have /dev/urandom and /dev/random for this. These are character devices, and they look like files. We read them like we read files, and when we read, say, 100 bytes from them, the kernel actually runs the CSPRNG on the entropy pool and gives us the random bytes we need.

These devices provide us uniform random bytes whenever we need them. Moreover, the source they are fed from is populated by unpredictable events.
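For example, reading 16 random bytes is as simple as running "head -c 16 /dev/urandom | xxd" in a shell; the getrandom() system call that newer libraries use sits on top of the same machinery.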

But, as you may ask, we have two devices, right? /dev/random and /dev/urandom. So which one should be used in which case? This is definitely the question one may ask.

Well, let's first describe the difference between these two, so that we can make a decision depending on those differences.

The main difference between /dev/random and /dev/urandom is that /dev/random tracks the entropy that we have in the entropy pool, and it blocks when the entropy is low (remember the entropy I mentioned in the first part of this post). It is basically implemented to block itself when it thinks the unpredictability is low.

We can also monitor this entropy by reading the file entropy_avail located in the proc filesystem: /proc/sys/kernel/random/entropy_avail (the maximum is 4096).
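For example, "cat /proc/sys/kernel/random/entropy_avail" prints the kernel's current estimate, and running it under watch while pulling bytes from /dev/random makes the draining (and the blocking) easy to observe.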

/dev/random blocks us when entropy_avail decreases, and this can sometimes be unacceptable.

In one test, we were blocked when entropy_avail was around 20; we got some bytes when it increased to 23, and then we were blocked again. Note that when we are blocked, our general performance decreases, and our users and apps may face downtime.


When we look at Section 4 of the random manual (using the man 4 random command), we see the relevant guidance.

Note that I checked the man page on Oracle Linux 6.10, kernel 4.1.12-61.1.28.el6uek.x86_64, but the same sentences are there in Oracle Linux Server release 7.8, kernel 4.1.12-124.38.1.el7uek.x86_64.

Basically, it says that /dev/random is more secure, as it blocks when the entropy that it calculates for the entropy pool is low. Calculating the entropy of the pool, calculating the entropy brought in by a new event, and deciding how much the entropy should decrease when output is produced must actually be hard :) I would also find it difficult to build such an accurate mechanism. However, by doing that, it becomes more secure than /dev/urandom.

The man page also says the following on this matter:

"If there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current non-classified literature, but it is theoretically possible that such an attack may exist."

Okay, noted. It is probably talking about information-theoretic security. When we consider computational security, /dev/random and /dev/urandom are, in my opinion, both secure
(except in the early boot phase, when there are not enough events to feed the entropy pool. By the way, distributions can solve this problem by saving the entropy pool across reboots, etc.)

So, it is better to mention /dev/urandom at this point. Well, /dev/urandom never stops generating random bytes for us. We keep getting our random bytes even if entropy_avail is low.

It should also be noted that no one learns anything by getting output from /dev/urandom or /dev/random. The kernel uses SHA-1 here: when we request some random bytes, the kernel runs SHA-1 on the entropy pool, gives us the output, and also takes that output as an input and feeds it back into the entropy pool. Note that /dev/random and /dev/urandom do the exact same thing in this respect!

With this in mind, decreasing the entropy estimate of the pool and blocking random byte generation seems a little unnecessary to me. (I'm not an expert on this subject; this is only my opinion.)

For early boot, yes, I completely agree with what the man page states. However, what I also think is: once we get enough entropy in the pool, we are done. We should not be blocked after that point.

Note that in 2020, with Linux kernel version 5.6, /dev/random only blocks when the CSPRNG hasn't been initialized yet. Once initialized, /dev/random and /dev/urandom behave the same.

This change clears my doubts :) The thought that comes out of here is: once the pool becomes unpredictable, it is forever unpredictable.

That's enough for today. Please do not forget to follow my blog. Add me on LinkedIn and Twitter to increase the interaction. More will come.

Saturday, May 16, 2020

OBIEE - Exadata - GRID -- Two tips on OBIEE Patching and Grid Patching/Upgrade

In this post, I am combining two different topics.
That way, this post will reach multiple readers at once :)

Let's start with our first tip on our first topic;

Well, firstly I want to give some tips for OBIEE users. It is actually about a lesson learned during OBIEE patching. As you already know, from time to time we need to patch our OBIEE environments in order to fix vulnerabilities. In this context, we usually patch the Weblogic application servers and OBIEE itself. Sometimes we also upgrade our Java in order to fix Java-related vulnerabilities.
The tip that I want to give you is actually a solution for an undocumented problem.

This issue arises after applying the bundle patch of OBIEE. (OBI BUNDLE PATCH 12.2.1.3.200414)

The problem causes the Admin Tool to crash while opening offline RPDs.

That is, you just can't open offline RPDs after applying that bundle patch, even if your Admin Tool version is the same as your OBIEE version.


The problem is in SOARPGateway.h line 880, and it causes an internal error for nQS.
The error is nQSError: 46008. Well, the solution is actually simple;

It is in 3 steps ->

1) Log in to OBIEE and access the home page. Use the relevant option to download the Oracle BI Administration Tool, matching your bundle patch (in this case, Admin Tool 12.2.1.3.200414).
2) Uninstall the Admin Tool that is crashing/causing the issue.
3) Install the new Admin Tool software downloaded in step 1.
That's it. This is not documented in the readme. The interoperability between OBIEE and its client tools is actually very important.

If we are patching OBIEE, we always need to think about the client-side applications of OBIEE as well.
This is the lesson learned for this topic.

Let's continue with our second tip;

This is for GRID and/or Exadata users. It is based on a true story :)


That is, while upgrading the GRID version from 12.1 to 12.2, we faced a problem.
Note that the platform was an Exadata.

Actually, several perl scripts and perl modules are used behind the scenes of such an upgrade, but the issue that I want to share with you arises in the preupgrade phase.

During the preupgrade, we faced an issue similar to following;

CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2020/05/15 19:06:34 CLSRSC-33: The OLR from version '12.1.0.2.0' is missing or unusable for upgrade
Died at /crsupgrade.pm line 2906.
The command 'perl -I/perl/lib -I/ grid/crs/install /u01/app/12.2.0.1/grid/crs/install/rootcrs.pl  -upgrade' execution failed

When we analyzed the problem, we saw that the OLR (Oracle Local Registry) and all the OLR backups were missing. The $GRID_HOME/cdata directory, where the OLR files reside, was completely empty.

Note that the OLR is a registry located on each node in a cluster, and it contains information specific to that node: manageability information about Oracle Clusterware, including dependencies between various services. Oracle High Availability Services uses this information.

No OLR, no backup of OLR !

Anyways, I analyzed the perl code and didn't see any issue that could cause this problem.
I couldn't see anything that might delete the contents of cdata, but I was almost sure that the contents were deleted by the upgrade, the preupgrade, or something else that took place along the way.

There was no way to get the deleted OLR and its backups back.
The only solution was recreating the OLR using the following approach, and that is what we did:

1) Shut down CRS on all nodes
2) <GRID_HOME>/crs/install/rootcrs.pl -deconfig -force
3) <GRID_HOME>/root.sh
4) Check the system with runcluvfy

After these actions, the OLR was recreated and we could upgrade the GRID without any problems.

This was a strange issue, and it is still under investigation. What was the bug that caused this to happen? Why did we fail on our first try, and why was the second upgrade attempt successful?
Deletion of the OLR and its backups may be caused by the "localconfig delete" or "rootdeinstall.sh" scripts, but why would a GRID upgrade do something like this?

Well, I will continue to investigate this, and I will write again when I get more info about it.

The lessons learned are;

Always back up your OLR before GRID upgrades and GRID patching.
There is a possibility that your OLR may be corrupted or lost during your patching activities.

Don't put your OLR backups in the same directory as your OLR files. I mean, always keep some additional OLR backups outside the cdata directory (see the example below).
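For example (run as root; the backup destination below is just an illustration, and the exact syntax may vary between GRID versions, so verify against your own release):

ocrcheck -local (shows the OLR location and its integrity)
ocrconfig -local -manualbackup (takes a manual OLR backup)
ocrconfig -local -export /backup/olr_node1.olr (exports the OLR to a file outside cdata)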

Finally, the approach of recreating the OLR by executing rootcrs.pl -deconfig and root.sh works :)

See you in my next blog post, which will be about the Linux Kernel Entropy...

Saturday, May 2, 2020

EBS -- Oracle Database 19C - "Curious and Frequently Asked Questions" - Supported Features & Products

We all know the cool features of Oracle Database 19C and we can't wait to implement them in our EBS environments.

Things like Automatic Indexing, Active Data Guard DML Redirection, and so on.

We also want to know whether things like ISG (Integrated SOA Gateway), Active Data Guard itself, and GoldenGate are supported for use with EBS (EBS on Oracle Database 19C) or not.

In this post, I want to share my findings with you.

We will take a look and see whether the key new features of 19C and old but gold features of EBS are supported with EBS 19C Databases or not.

This blog post will be a little different from the others, because we will proceed in the form of questions and answers. Curious questions and answers :)

The following Q/A reveals the current situation, but I may revisit this blog post, if I learn something new about these subjects. 


Let's start.

Q1: Can we use the key new features of Oracle Database 19C with EBS? (Automatic Indexing, Active Data Guard DML Redirection, SQL Quarantine, Real-Time Statistics and so on)

Answer: The Automatic Indexing feature of Database 19c is NOT certified for use with E-Business Suite applications 12.1.3 and/or 12.2 at this time.
The certification of other new features, such as Active Data Guard DML Redirection and Real-Time Statistics, is still not clear either. However, Oracle development is probably working on it. At this moment, there is no documentation or info about the certification and implementation of those new features in EBS environments.

Q2: What if we are using or planning to use any of the unsupported products?
The EBS-with-19C unsupported product list: Oracle Enterprise Data Warehouse (EDW), Oracle Enterprise Planning and Budgeting (EPB), Demand Signal Repository (DSR).

Answer : Just don't upgrade the EBS database to Oracle Database 19c until those products are supported for use with Oracle Database 19c.

Q3 : What about ISG? Is there anything we need to pay attention to?

Answer: ISG is not fully certified, but its use with 19C is documented. There are stability issues which are not fixed yet.
If you are using or planning to use ISG, you should rethink whether or not to perform an upgrade to Database 19c. It may be better to wait until the certification of Oracle E-Business Suite with Database 19c for ISG is announced.

Issues like:
  • Performance of design-time operations may be slow when Oracle E-Business Suite is on Oracle Database 19c.
  • Design-time operations may fail for certain public APIs when Oracle E-Business Suite is on Oracle Database 19c.

If you are planning to upgrade the EBS database to 19C, we also strongly recommend applying Patch 30721310 (ISG Consolidated Patch) for EBS 12.1 customers.

MOS notes for implementation: 

Oracle E-Business Suite Integrated SOA Gateway Release Notes for Release 12.2.9 (Doc ID 2563289.1)
Interoperability Notes: Oracle E-Business Suite Release 12.1 with Oracle Database 19c (Doc ID 2580629.1)

Q4: Are Dataguard and Active Dataguard supported with EBS 19C Database?

Answer: Yes. However, there is no info about the support status of Active Data Guard DML Redirection yet. I mean, the one enabled with ALTER SESSION ENABLE ADG_REDIRECT_DML.
We may use Oracle Active Data Guard to offload some production reporting to the Oracle Active Data Guard database instance.

MOS notes for implementation: 

Using Active Data Guard with Oracle E-Business Suite Release 12.1 and Oracle Database 19c (Doc ID 2608027.1)
Using Active Data Guard Reporting with Oracle E-Business Suite Release 12.2 and Oracle Database 19c (Doc ID 2608030.1)

Q5: What about the Business Continuity? 19C EBS Database and Dataguard implementation?

Answer: Business Continuity is as we know it. The logic is the same :) However, this time we are implementing it in a multitenant-aware way.

It is fully documented for both logical standby and physical standby-based business continuity. I share the MOS notes for physical standby implementations below.

MOS notes for implementation:

Business Continuity for Oracle E-Business Suite Release 12.2 on Oracle Database 19c Using Physical Host Names (Doc ID 2617787.1)
Business Continuity for Oracle E-Business Suite Release 12.1 on Oracle Database 19c Using Physical Host Names (Doc ID 2567091.1)

Q6: Can we use GoldenGate for upgrading the EBS database to 19C? That is, can we create a new EBS CDB-PDB 19C environment and use GoldenGate to replicate and sync it with our current release?

Answer: No. You cannot use GoldenGate for EBS backup, EBS database migration, or EBS database upgrades. Also, there is no certification for using GoldenGate with EBS; GoldenGate is not certified against applications.

However, there are ways to migrate EBS data and to achieve operational reporting using GoldenGate.

MOS notes for implementation:

Using Oracle GoldenGate to Replicate Data from Oracle E-Business Suite Release 12.2 (Doc ID 2004495.1)
Deploying Oracle GoldenGate to Achieve Operational Reporting for Oracle E-Business Suite (Doc ID 1112325.1)

Q7: Is Preventive Controls Governor (PCG) certified with 19c database and Linux 7 ?

Answer: Development is working on it :)

Here is the note :

Is Preventive Controls Governor (PCG) certified with 19c database and Linux 7 ? (Doc ID 2634785.1)

Oracle Database 12.2.x is certified for GRC 8.6.6, even though the matrix just lists 12.2.0.1. This includes 18c (12.2.0.2) and 19c (12.2.0.3). CCG 5.5.1 is also supported on Oracle Database 12.2.x, even though the matrix lists 12.1.0.2. For GRC certification on Linux 7, refer to ER bug 30822310. For PCG certification on DB 19c and Linux 7, refer to ERs Bug 30808383 and Bug 25971420.

Q8: Do we need to purchase multi-tenancy license to use 19C databases with EBS?

Answer: Using the multitenant architecture (CDB-PDB) is a must in EBS 19C database environments. However, EBS customers don't need to purchase a multitenant license for that.

Q9: What about custom code? Will it be affected by the 19C upgrade?

Answer: We should pay attention to this subject and test our code very well. Something in the code may rely on standard functionality which is deprecated or changed in the new database release. We should test, and implement a method like fix-on-fail for these code issues.

For instance (see the sketch after this list):
-- In 19C, DBMS_JOB still works, but we need to have the CREATE JOB privilege.
-- SQL performance may change.
-- UTL_FILE_DIR is desupported (but EBS has a workaround for that :)
-- There may be other undocumented changes in SQL and PL/SQL, so a comprehensive test is required both for functionality and for the reporting side/queries.
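A minimal sketch of what the DBMS_JOB point means in practice (the schema and job below are hypothetical):

-- In 19C, DBMS_JOB submissions are implemented on top of DBMS_SCHEDULER,
-- so the submitting schema now needs the CREATE JOB privilege:
GRANT CREATE JOB TO custom_schema;

-- after the grant, existing DBMS_JOB code keeps working as before:
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(:jobno, 'custom_pkg.refresh_data;', SYSDATE, 'SYSDATE + 1');
  COMMIT;
END;
/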

Q10: Is the multitenant CDB-PDB architecture a must for using Oracle Database 19C with EBS? If so, can we have multiple PDBs within a CDB?

Answer: Yes. A CDB with one PDB (single tenant) is currently the only certified deployment for Oracle E-Business Suite with Database 19c. A CDB with multiple PDBs (multitenant) is not currently certified for Oracle E-Business Suite. A non-CDB architecture is not planned to be certified or supported for EBS with Database 19c.

Q11: What are the supported 19C database features and options for EBS currently?

Answer: 

Advanced Compression
Database In-Memory
Database Partitioning
Native PL/SQL Compilation
Oracle Application Express (APEX)
Oracle Label Security (OLS)
SecureFiles
Virtual Private Database (VPD)

Q12: Is Oracle E-Business Suite with Database 19c Integrated with Oracle Access Manager for Single Sign-on ?

Answer : Yes

Q13: When will EBS be certified with Autonomous Database?

Answer:  Not certain yet. 

Q14: What about cloning and business continuity (such as Dataguard implementations)? Is there any difference in the approach for EBS 19C databases?

Answer: The approach is similar, but we do these things following CDB-PDB-aware approaches. Switchover and cloning are done at the CDB level, and this is well documented on Oracle Support.

Tuesday, April 28, 2020

EBS -- Oracle EBS 19C Database Upgrade Webinar !


Don't miss our webinar!

This time for E-Business Suite! 

Listen to the critical points of the EBS Oracle 19C Database Version Upgrade and the new version features from the experts.
Registration Link: 
https://techdata.zoom.us/webinar/register/WN_C_5hFeapRyCq_ukfTcIP-w

"Note that, this webinar will be in Turkish."


Tuesday, April 21, 2020

RDBMS -- 19C Upgrade Webinar-- a presentation with almost all the details about the upgrade

Hello friends,

Below I'm sharing the slides of the presentation we used in the webinar we gave on April 20.
Hope to see you at our EBS 19C Upgrade webinar on May 4, 2020 :)

Note: Since I'm sharing the animated slides here as images, you may have problems with some of those slides. However, the attendees of the webinar will also be sent links to the original slides.

The presentation is in Turkish, but you may still want to check the slides :)

[Presentation slides]
Saturday, April 18, 2020

ASK ERMAN -- Well, how much time do I gain? :)

Question: How much time do you spend/lose? 
Answer: Well, how much time do I gain? :)

ASK ERMAN - Erman Arslan's Oracle Forum 

Almost 30 issues per month. 
Many times more updates. 
It's been almost 6 years.

I have written this blog since 03.14.2013. I didn't only try to write as much as I could, but also tried to be picky in choosing my topics. I tried to write unique articles to add value to the Oracle documentation, the general community, and support. In August 2014, I created a forum to give voluntary remote support. At first, it was like an extension of my blog, but as time passed, it became a very important motivation of mine.


You can ask questions and get remote support using my forum.
Just click on the link named "Erman Arslan's Oracle Forum is available now.
Click here to ask a question", which is available on the main page of Erman Arslan's Oracle Home, or just use the direct link: http://ermanarslan.blogspot.com/p/forum.html

Wednesday, April 15, 2020

RDBMS -- 19C Upgrade Webinar!


Don't miss our webinar!
Listen to the critical points of the Oracle 19C Database Version Upgrade and the new version features from the experts.
Registration Link: https://techdata.zoom.us/webinar/register/WN_tPeZBo93TWyeBTjctOfxUA

"Note that, this webinar will be in Turkish."