Sunday, April 18, 2021

OLVM & KVM -- Oracle Premier Support

In this blog post, I will share a crucial piece of info.. Yes, I find it important, although some may say that this is publicly available information.. Still, I want to underline it, as it may become a support issue in Oracle Linux KVM environments some day.

Oracle Linux KVM is a feature that is delivered and supported as part of Oracle Linux, as you know. But! things are a little different for OLVM (Oracle Linux Virtualization Manager).

If you want to get support for Oracle Linux Virtualization Manager, you must have an Oracle Linux Premier Support subscription. In other words; support for OLVM is not included in the Oracle Linux Basic Support subscription..

So let's answer this question -> Do we (on-prem KVM users) need Oracle Linux Premier Support?
Well, the answer depends on lots of things.. I mean, if you have Linux Premier Support, then it means you get the full package ->
  • Around-the-clock access to enhancements, updates, and errata
  • Oracle Enterprise Manager for Linux Management
  • Oracle Linux Manager (formerly Spacewalk)
  • High availability with Oracle Clusterware
  • Comprehensive tracing with DTrace
  • Oracle Linux load balancer
  • Comprehensive indemnification
  • Oracle Container runtime for Docker
  • Oracle Linux Virtualization Manager
  • Zero-downtime patching with Ksplice
  • Oracle Linux Cloud Native Environment
  • Gluster Storage for Oracle Linux
  • Oracle Linux software collections
  • Oracle Linux high availability services support (Corosync and Pacemaker)
  • Premier backports
  • Lifetime sustaining support
Let's not drift away from the context.. Does an Oracle Linux KVM user need Premier Support?
Well, my answer is yes.. If you are using Oracle Linux KVM, you should use OLVM as well.. OLVM is a modern interface that eases things and provides a smooth virtualization management platform.. And if you use OLVM for an important process, you need Oracle Support behind it.. Consider the following scenario;

Your security admin tells you that the HTTP Server (httpd) of the OLVM host has a security weakness and must be upgraded to a target httpd release..

Facts : 

Currently, there is no direct document available on MOS for this kind of an upgrade..
There is no OLVM package or patch applicable for this kind of a request, as far as I can see.
You may be an HTTP Server and/or Linux expert, and you can upgrade to the latest httpd version using your own methods.. (installing a new httpd and copy-pasting the OLVM-related httpd configuration).. However; the httpd in question belongs to OLVM, so you actually have a custom configuration there..
You may wait for the new version of OLVM, hoping that Oracle will use the latest version of httpd in that new OLVM version.

But what if you don't have the time to wait? What if you don't have the Linux skills to make such an upgrade with a custom method?
Or let's say you did the upgrade, but you can't make OLVM run with the new httpd release..

You see it, right.. Oracle Linux Premier Support solves even these types of problems.
Even in cases where we need it only once a year, it provides us a guarantee..
Considering you are an Oracle Linux KVM customer, ask yourself the question 'Do I need Linux Premier Support?' again, and answer it by reconsidering how vital OLVM is for you.. Note that I'm not even asking you to think about the other things that come with Oracle Linux Premier Support (zero-downtime patching with Ksplice and so on)..


V$INDEXED_FIXED_COLUMN is a useful fixed view.

As we know, v$session and similar fixed views get their data through x$ tables (or let's say through some special memory structures). Note that, most of the time, v$ views fetch their data from multiple x$ memory structures.

The V$INDEXED_FIXED_COLUMN view shows which columns of these x$ tables are indexed.

Probably, the indexes on these memory structures are not exactly like the indexes we know from the database world.. For instance; they probably don't look like B*Tree index structures.. I guess these indexes are only based on some offsets, but for us they provide the same benefit as our traditional indexes (actually, I guess they are a little less useful).. Besides, they look like traditional indexes in the execution plans.

Therefore, if we get the data from x$ structures through those indexed columns, we can achieve a performance increase in queries (on v$ views), especially in environments with large v$ data.

In this context, if a performance problem appears in a query on some v$ views, it is useful to look at the execution plan. If we see FIXED TABLE FULL when accessing x$ tables, we can find out from V$INDEXED_FIXED_COLUMN which columns of the related x$ table are indexed, and then we may change our query to make the optimizer use that index and thus solve the problem.

The condition we use in our query on an indexed column is also important. For example, the sid column in v$session.. If we use sid = 5 as a condition, our query uses the index to reach the x$ data, but if we use sid = userenv('SID'), the optimizer doesn't choose the indexed path.. (this is seen even in cases where there is no data type mismatch)
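To spot the FIXED TABLE FULL case quickly, the execution plan text can be scanned programmatically. Below is a minimal sketch in Python; the plan fragment and the x$ table name in it are illustrative examples only, and in a real case you would feed the function the output of DBMS_XPLAN.

```python
import re

def full_scanned_fixed_tables(plan_text):
    """Return the x$ table names that appear with a FIXED TABLE FULL
    operation in a DBMS_XPLAN-style plan text."""
    tables = []
    for line in plan_text.splitlines():
        if "FIXED TABLE FULL" in line:
            # grab the X$ identifier on the matching plan line
            m = re.search(r"(X\$\w+)", line)
            if m:
                tables.append(m.group(1))
    return tables

# illustrative plan fragment (not from a real system)
plan = """
| Id | Operation         | Name    |
|  0 | SELECT STATEMENT  |         |
|  1 |  FIXED TABLE FULL | X$KSUSE |
"""
print(full_scanned_fixed_tables(plan))  # -> ['X$KSUSE']
```

Each x$ table reported this way is a candidate for a quick check against V$INDEXED_FIXED_COLUMN.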

Of course, for such problems, where we have long-running v$ queries on big v$ data, we also need to check whether the data in the relevant v$/x$ should really be that big. Unexpectedly big v$ data should be analyzed.

In addition to that, we need to evaluate the performance bugs and patches, if any, related to the relevant v$ views in the relevant DB release.

This is the tip of the day. Stay tuned :)

Sunday, April 4, 2021

OBIEE 12C -- Implementing a Custom Authentication Provider & Custom Authorization based on Embedded Ldap

In one of my earlier posts, I shared a method for implementing a custom SSO login to OBIEE instances from 3rd party apps.
You can have a look at that earlier post using the following url:

Today, I'm here to give you the method for implementing a custom authentication provider for OBIEE.. This time we are actually dealing with a customization of the Weblogic authentication providers. We have implemented this custom authentication provider configuration and tested it.. I must admit that it is challenging, but it works!

In our case, the purpose was to bypass the Active Directory interaction during the login and make some https calls to a custom login web service hosted by the client.. That custom login service, in turn, was designed to authenticate the users by communicating with Active Directory (AD).. So we should be communicating with the web service host, and that web service host, instead of us, should be communicating with AD..

This requires a new custom authentication provider in the first place.. In this context, the same Custom Authentication Provider given in the document pointed to by the url below can be used.. However; in that document there is a database part.. In our case, we use the Weblogic embedded LDAP to store the user information and mappings (or call it the user store, if you want to see it that way), so except for that database part, we follow the "Fusion Middleware Developing Security Providers for Oracle WebLogic Server 12c" document pointed to by;

In our case, we changed the code (given in the example in the document) and made it call our custom web service during the login. However; this is not sufficient to log a user into OBIEE.. That is, the user info should also be available to OBIEE/Weblogic.

So that's why we changed the example code a little bit more.. We modified it following the algorithm below;

-First, create the users and groups in Weblogic. (this is a one-time action) You can even delete the users after creating the groups and associating them with the groups.. But the authorized groups should be there.
-Get the username and password from the login page.
-Call the web service and try to authenticate the user.
-If the authentication is successful, check the Weblogic embedded LDAP..
  -If the user is not there in the Weblogic embedded LDAP, create it.
  -If the user is already there, don't do anything, just run the rest of the relevant code and exit.
-If the authentication is not successful, run the rest of the relevant code and exit.
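The flow above can be sketched in Python. Note that this is only a logic sketch, not the real Weblogic SSPI code; the web service callable, the dict-like stand-in for the embedded LDAP, and the default group name are all hypothetical.

```python
def login(username, password, auth_ws, embedded_ldap):
    """Sketch of the custom login flow.
    auth_ws: callable that authenticates against the remote web service
             (which in turn talks to Active Directory).
    embedded_ldap: dict-like stand-in for the Weblogic embedded LDAP."""
    if not auth_ws(username, password):
        # authentication failed: run the provider's normal failure path
        return False
    if username not in embedded_ldap:
        # first successful login: create the user so Weblogic/OBIEE knows it
        embedded_ldap[username] = {"groups": ["BIConsumers"]}  # hypothetical group
    # user exists (or was just created): continue the normal success path
    return True

# toy usage with a mock web service that accepts one credential pair
ldap_store = {}
ws = lambda u, p: (u, p) == ("erman", "secret")
print(login("erman", "secret", ws, ldap_store))   # True; user lands in the store
print(login("erman", "wrongpw", ws, ldap_store))  # False; store left untouched
```

The key point the sketch shows is that authentication happens remotely, while the user record is lazily created locally so that the group-based authorization keeps working.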

I won't get into the details of the code and the configuration we needed to make in Weblogic to deploy this custom authentication provider.. These are already documented and well known. I mean, we put our jar into $ORACLE_HOME/wlserver/server/lib/mbeantypes, restart Weblogic, and then, using the Weblogic console; we go to Security Realms > My Realm > Providers tab > Lock and Edit > New > choose our Custom Authentication Provider > give it a name :) > complete adding the new custom auth and restart Weblogic :)

I said, I just mentioned that I wouldn't get into the details, but I couldn't stop myself.. :) Anyways, the thing I wanted to underline here is not the implementation itself, but the algorithm for implementing such a custom login flow..

It is not only the authentication we need to pay attention to.. The authorization is also important, and a custom design should be implemented there as well..

So keep this in mind, if you need to implement a custom authentication provider some day.. In our case, we kept up with Weblogic and used the embedded LDAP in conjunction with our custom algorithm to solve the authorization problem, but we could also have implemented an authorization provider in addition to the authentication provider... So all these should be considered when customizing the OBIEE login flow..

That's it. I hope you find it useful..

Saturday, April 3, 2021

Erman Arslan's Oracle Forum -- March 2021 - "Questions and Answers Series"

Question: How much time do you spend/lose?

Answer: Well, how much time do I gain? :)

Remember, you can ask questions and get remote support using my forum.
Just click on the link named "Erman Arslan's Oracle Forum is available now.
Click here to ask a question", which is available on the main page of Erman Arslan's Oracle Blog -- or just use the direct link:

 Come on, let's see what we've been up to in March. (Do not forget to read the blog posts too :)

Tuesday, March 16, 2021

OBIEE - BI administration tool performance problem -- on 12CR1 database

 Here is a quick tip for a quick win. Especially for OBIEE users!

You may encounter performance problems while using the BI administration tool, especially while importing metadata -- on that wizard, while selecting Metadata Types and all that..

We have seen that problem in an OBIEE environment. The BI admin tool version was also , and the database version was 12CR1. This tool-specific performance problem started to be seen after the database upgrade. (in our case, after the DWH upgrade - 11gR2 to 12CR1)

The client side was analyzed, and there weren't any problems there.

We traced the db session and saw that it was active all the time, running different queries one after another. Those queries were reading data from dictionary views like all_tab_columns and all_nested_tables..

We considered collecting fixed object stats and dictionary stats, but did neither, as the system was very mission critical and performing that type of statistics collection was not allowed. (especially at that point, where we could not predict whether collecting those stats would be our solution or would bring new problems to the environment)..

It was obvious that the import metadata wizard wasn't producing very optimized SQL, or let's say the wizard wasn't producing its SQL with the optimizer fixes and features of newer Oracle releases in mind. Actually, this may also be a database-related problem, because we already have the following note in place in Oracle Support;

Query to Dictionary ALL_CONSTRAINTS Slow after Upgrade to (Doc ID 2266016.1)

Using the wizard for this task is actually optional.. I mean, we can always do that metadata import manually, but in this case it was hard to do it manually, because there were several tables to be processed..

We didn't have the motivation to open a support ticket for this. That wizard was already optional, and the problem was something in the middle, between the tool and the database.. Besides, we were after a quick win..

Recently we dealt with a similar problem in an Oracle Discoverer environment.. There, the database was upgraded to 19C, and the customer was facing dramatic performance problems in almost all Discoverer reports.

If you want to read that story, here is the link - > 

In this case, too, we did something similar.. We created an after-logon trigger, and with the help of that trigger, we made some optimizer-related parameters automatically set for the BI admin tool sessions during database login. (note that there weren't any performance problems in ETL or OBIEE reports.)

This fixed the issue!

Here is the setting we've done inside our custom after logon trigger -> 

IF LOWER (v_program) LIKE '%admintool.exe%'
THEN
   EXECUTE IMMEDIATE 'alter session set optimizer_features_enable=""';
   EXECUTE IMMEDIATE 'ALTER SESSION SET "_optimizer_push_pred_cost_based" = FALSE';
   EXECUTE IMMEDIATE 'ALTER SESSION SET "_optimizer_squ_bottomup" = FALSE';
   EXECUTE IMMEDIATE 'ALTER SESSION SET "_optimizer_cost_based_transformation" = OFF';
END IF;

Of course, this is not a supported solution, but currently (according to my research) it is the best thing we have :)

That is it. I hope you find this useful.

OVM -- license / CPU-cores alignment , CPU Pinning , CPU Affinity , OVM Manager + ovm_vmcontrol

As mentioned in my earlier posts (years ago), you need to have your computational resources aligned with your Oracle Database and Application licenses.. 

In virtualized Oracle environments (OVM and KVM), you have the ability to dedicate CPU cores to your virtual machines and keep them aligned with the licenses you have for your applications and databases.. This can also be thought of as a capacity-on-demand solution.. That is, you grow as you pay, and you license only the CPU cores you use..

Doing such a configuration starts with analyzing the licenses you have. If you have CPU-core licenses, you take your license count and divide it by the core factor (for Intel it is 0.5) to get the maximum count of CPU cores you can use with those licenses.. For Named User Plus licenses, you also need to be sure that your user count is aligned with the user count defined in your Named User Plus license.. But! again, you need to be aligned with the CPU count.. In other words, you can't have a 24-core machine hosting a database or application licensed with 25 named users. So there is a user count & CPU count alignment as well..
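The core-factor math above can be sketched as follows. The 0.5 Intel core factor used as the default here is just the example from the text; always check the current Oracle core factor table for your processor.

```python
def max_licensable_cores(processor_licenses, core_factor=0.5):
    """Maximum physical cores usable with a given number of processor
    licenses: cores = licenses / core_factor."""
    return int(processor_licenses / core_factor)

# 2 processor licenses with the Intel core factor of 0.5 -> 4 physical cores
print(max_licensable_cores(2))       # 4
# a core factor of 1.0 gives a one-to-one mapping
print(max_licensable_cores(2, 1.0))  # 2
```

This is the number we then use when sizing vcpus and pinning them, as described below.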

Okay, enough with the intro :) Check the articles below for more on this topic.

In this blog post, I will give you the technical side for implementing license & cpu alignment in OVM environments..

So, as you can guess, we configure Guest VM CPU resources according to the licenses we have.. But! not only that; we also need to implement CPU pinning for the Guest VMs, to dedicate them to specific CPU cores according to our license count.. Without this pinning action, it is not acceptable to license only the cores that we use.. In other words, if you don't do this pinning, you will need to license all the cores on your Oracle VM server. (it is the same in KVM environments as well)

In order to configure the CPU cores of a guest machine and set CPU pinning, we use OVM Manager and Oracle VM Virtual Machine Control (the ovm_vmcontrol utility) -- supposing we are dealing with Oracle VM 3.4 and onwards.. Note that it is better to be on 3.4.3 and onwards, because on 3.4.1 and 3.4.2, CPU pinning with the ovm_vmcontrol utility on a running guest does not work.

We first start by setting the maximum number of virtual CPU cores (maxvcpu) and actual number of virtual cpu cores (vcpu) for our Guest VM.. We use OVM Manager for this task.

Note that these numbers (maxvcpu and vcpu) are actually based on threads, not on actual physical cores.

In this context, if we have a 4-core Intel Linux machine with hyper-threading enabled, our host sees those cores as 8 threads. So, in OVM Manager, we set the maxvcpu and vcpu counts for a guest machine based on the thread count.

Note that, if we change the maximum number of CPUs for a Guest VM, we need to reboot that VM after the value is changed.

Well.. Suppose we set maxvcpu to 4 and vcpu to 4.. With this config, we actually make our Guest VM use 4 threads. However; in order to benefit from capacity on demand (aligned CPU cores and licenses), we need to set CPU pinning for those 4 threads as well, and we do that by using ovm_vmcontrol.

A quick command toolbox for getting cpu related info from our OVM host -> 

xm info
xenpm get-cpu-topology
xm vcpu-list

In order to use ovm_vmcontrol, we need to download and install it first..

ovm_vmcontrol is delivered by -> Patch 13602094: ORACLE VM 3.0 UTILS RELEASES: 1.0.2, 2.0.1, 2.1.0

We install the tool on the OVM Manager host. (in my opinion, this is the easiest installation method)

We unzip it into the /u01/app/oracle/ovm-manager-3 directory, and if the OS user of the OVM Manager has Java installed, no further action is required.

Before using the tool, we check our CPU topology and get the info about our threads and physical cores.
Below, we see a single-socket server with 4 cores and 2 threads per core.

# xenpm get-cpu-topology
CPU core socket node
CPU0 0 0 0
CPU1 0 0 0
CPU2 1 0 0
CPU3 1 0 0
CPU4 2 0 0
CPU5 2 0 0
CPU6 3 0 0
CPU7 3 0 0

In the above output, the CPU lines represent the threads.. So CPU0 is thread 0 of core 0, CPU1 is thread 1 of core 0, CPU2 is thread 0 of core 1, etc..

So we have 2 threads in each core.. So if we have a 2-core license for this Guest VM, we need to pin those 4 vcpus (threads) to 2 physical cores.. for instance, to -> Core 0 and Core 1
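Deriving the thread list for a set of licensed cores from the xenpm output can be sketched like this. A minimal parser, assuming the column layout shown above (thread, core, socket, node):

```python
def threads_for_cores(topology_text, licensed_cores):
    """Parse `xenpm get-cpu-topology` output and return the CPU (thread)
    numbers belonging to the given physical core numbers."""
    threads = []
    for line in topology_text.splitlines():
        parts = line.split()
        # data lines look like: CPU0 0 0 0  -> (thread, core, socket, node)
        if len(parts) == 4 and parts[0].startswith("CPU") and parts[0] != "CPU":
            thread = int(parts[0][3:])
            core = int(parts[1])
            if core in licensed_cores:
                threads.append(thread)
    return threads

topology = """CPU core socket node
CPU0 0 0 0
CPU1 0 0 0
CPU2 1 0 0
CPU3 1 0 0
CPU4 2 0 0
CPU5 2 0 0
CPU6 3 0 0
CPU7 3 0 0"""

# licensed cores 0 and 1 -> threads 0-3, i.e. the "-s 0-3" argument below
print(threads_for_cores(topology, {0, 1}))  # [0, 1, 2, 3]
```

The resulting thread list is exactly what we pass to ovm_vmcontrol's -s option.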

In the above output, CPUs 0,1,2,3 are the threads that correspond to core 0 and core 1, and that's why our ovm_vmcontrol command in this case will be similar to the following;

 ./ovm_vmcontrol -u admin -p <admins_password> -h <ovm_managers_hostname> -v <guest_machine_name> -c setvcpu -s 0-3

Got the point right?

You can also get the CPU-pinning-related info for a VM, using an ovm_vmcontrol command similar to the following;

 ./ovm_vmcontrol -u admin -p <admins_password> -h <ovm_managers_hostname> -v <guest_machine_name> -c getvcpu

The command above lets you check the current pinning configuration of a vm..

"xm vcpu-list" also gives you that info in its output -- exactly the values under the column named CPU Affinity.

# xm vcpu-list
Name ID VCPU CPU State Time(s) CPU Affinity
0003fb00000600007c351fa24276c63f 1 0 5 -b- 4676.8 0-3
Domain-0 0 0 0 -b- 932.1 any cpu
Domain-0 0 1 6 -b- 1168.0 any cpu

Some references in this context:

Set CPU Pinning for VMs on Oracle VM 3.4.1 and 3.4.2 (Doc ID 2213691.1)

Wednesday, March 10, 2021

We (Blog and Forum) ranked high in the lists of the most known Oracle and Database (including all databases) blogs, websites & Influencers 2021

Continuing to produce content from two different channels and to support Oracle users all around the world. My blog was already on the top 100 list; this week my forum entered that list as well. In addition to that, both my forum and blog entered the Top 80 Database Blogs, Websites & Influencers list..

Thanks to Feedspot for Top 100 Oracle Blogs, Websites & Influencers in 2021 & Top 80 Database Blogs, Websites & Influencers in 2021.

Check the cool content ->

2 records (1 for the blog and 1 for the forum) in both lists. Not bad, isn't it :)

Sincere thanks to my readers, supporters and forum subscribers.

Friday, March 5, 2021

Erman Arslan's Oracle Forum -- February 2021 - "Questions and Answers Series"

Question: How much time do you spend/lose?

Answer: Well, how much time do I gain? :)

Remember, you can ask questions and get remote support using my forum.
Just click on the link named "Erman Arslan's Oracle Forum is available now.
Click here to ask a question", which is available on the main page of Erman Arslan's Oracle Blog -- or just use the direct link:

 Come on, let's see what we've been up to in February. (Do not forget to read the blog posts too :)

Monday, March 1, 2021

Oracle Linux - KVM -- VM network on Broadcom bond devices fails -- actually, the OS fails adding a Broadcom bnxt_en bond to a bridge

Recently I dealt with a problem on Oracle Linux KVM.. The customer was trying to implement Oracle Linux KVM using Oracle Linux Virtualization Manager (OLVM), but failing in the network configuration. The OS was Oracle Linux 7.9 64-bit...

The issue was about the virtual machine network.. That network could not be assigned to the relevant bond device using OLVM.. The bond device was configured with 2 slaves, and the configuration was correct: the bonding mode was appropriate, and the slaves and the master (bond) were active in the OS layer.. But! somehow OLVM could not assign the vm network (created by the customer using OLVM) to the relevant bond device.

No errors were seen on OLVM, no errors in the OLVM logs (for instance, in engine.log), but I saw the following messages in the Oracle Linux syslog (/var/log/messages);

server01 kernel: VLAN2: port 1(bond1.10) entered blocking state

server01 kernel: VLAN2: port 1(bond1.10) entered disabled state

The VLAN2 shown in the logs above was actually a bridge.. As you may already know, when we have the vm network in the picture, we rely on bridges in the Linux layer.. So it was clear that we had a bridge problem.. The kernel was disabling the relevant path..

So this was the cause that prevented OLVM from assigning the vm network to the bond device.

When we tried to add that bond to that bridge, the following error was shown in the log;

server01 network: Bringing up interface bond1.10: can't add bond1.10 to bridge VLAN2: No data available

After doing some more analysis, I concluded that the problem wasn't in Oracle Linux KVM.. The problem had to be in the Oracle Linux kernel, or in the device driver associated with the ethernet devices.. (in this case, Broadcom bnxt_en)

With this in mind, I made a more specific search and found similar bugs on Redhat..

In the Redhat support portal, I could see a bug with the exact same symptoms -> Bug 1860479 - Unable to attach VLAN-based logical networks to a bond..

The bug was recorded for Redhat 8, but it seemed we had the same bug in Oracle Linux 7.9.. Actually, rather than the OS version, the kernel version was the key..

The fix was upgrading the kernel, but the workaround was downgrading it.. (according to Redhat).
I was trying to get a quick win in this case, so I had to use a lower-version kernel, and I decided to use the Redhat Compatible Kernel instead of the UEK kernel.. As you can imagine, the server was rebooted with the Redhat Compatible Kernel (installed as an alternative kernel in Oracle Linux), and the problem was solved! After booting with that kernel (a lower-version kernel compared to the UEK kernel), the customer could assign the vm network to the relevant bond device using OLVM.

Note that this bug appears when we configure the bond slaves on 2 network ports belonging to 2 different Broadcom network cards.. The bug doesn't appear when we configure the bond slaves on the same network card..

That's it.. I hope you find this article useful.

Saturday, February 27, 2021

Exadata - Oracle Hardware -- SFP Transceiver types & models

Yesterday, one of my colleagues made me revisit the following article, which I wrote after completing a field operation in an Exadata environment.

Exadata -- How to Connect Oracle Exadata to 10G Networks Using SFP Modules

The subject was activating SFP modules and the related network on Exadata.. The question was about the type/model of the transceivers I used while accomplishing the task shared in the article..

Actually, it's been a while since I did that job. I did not include that detail in my blog post either, but I could barely remember that I used the transceivers the customer already had.. The client had bought the transceivers from Oracle a long time ago, longer ago than when I started working there as the lead consultant :)
This was a good question indeed.. So I thought about it and did a little research..

I guess the transceiver model was something similar to X2124A-N. So, one can position transceivers that meet similar standards. The general idea is to be compatible with the switch side. But as we have Oracle hardware (Exadata) in the picture -- at least on one side -- we need to be compatible with the Oracle hardware as well, in the first place actually.. So using Oracle-supplied transceivers is the best idea, but similar transceivers should also work.

Anyways, we ended up with the following document -> 

Oracle's 10 Gigabit Ethernet Transceivers and Cables Frequently Asked Questions

Still, the idea given in the paragraph above applies.. As we may not have Oracle switches in the customer environment -- I mean, as we may have to work with customer-supplied 3rd party switches -- being compatible with the switch side should be in our focus as well..

That is it.. I wish you a good weekend :)