Monday, October 5, 2020

Erman Arslan's Oracle Forum -- Questions and Answers Series - September 2020

Let's start with the following question and answer. 
PS: this question and answer reveal what my motivation is. ->

Question: How much time do you spend/lose? 
Answer: Well, how much time do I gain? :) 

In September, again I tried to answer all the questions. I gave advice when necessary, and provided guidance for the solutions when I had enough info about the problems and the environments where they arose. 

Take a look at the issues and related topics in Erman Arslan's Oracle Forum. Collect the harvest you can from the support and technical directions provided!


Erman Arslan's Oracle Forum September 2020 -> 

Thursday, October 1, 2020

Custom SSO / Login to OBIEE from a 3rd party app by sending a POST request.. This works even when Lightweight SSO is enabled!

In one of my previous blog posts (https://ermanarslan.blogspot.com/2020/09/obiee-sso-integrating-with-third-party.html), I shared a third party SSO integration method for OBIEE.

We were just passing the user and password info as URL arguments and it was working.

In that blog post, there was the following sentence: 

That is -> We make OBIEE get the user and password through the OBIEE URL (on-the-fly login using URL arguments).. Note that this is the simplest way of doing this work.. Of course, the customer's ability to post the usernames and passwords using any method other than this one will make us change/improve the design of this login flow.

Anyways, this was one of the ways, but today we realized something else.. Something that refutes that way.

That is, if we log in to OBIEE and then try to reach ODV from there, we find ourselves in a login dialog, where we have to enter our user and password information once again. Yes.. This is not cool..

Fortunately, we have a solution for this too!

The solution is to enable Lightweight SSO. Sounds simple, right? But wait a sec, Lightweight SSO is not compatible with our 3rd party integration method, I mean -> logging into OBIEE from a third party app by passing the user and password as arguments in the OBIEE URL...

Remember, in that blog post, I already mentioned that when 12.2.1.3 Lightweight SSO is ON, NQPwd/User (I mean the URL method) won't work for OBIEE login.. So, as I mentioned in that earlier blog post, we disabled Lightweight SSO to be able to pass the user and password info through the URL.

However, when Lightweight SSO is disabled, we can't directly reach ODV from OBIEE.. I mean, ODV requires us to re-enter our user and password info, as I just mentioned. 
So it is not acceptable. 
This means we need to enable Lightweight SSO to make the automatic SSO integration between OBIEE and ODV work.. Of course, this time (when Lightweight SSO is enabled), our OBIEE login (through the URL arguments user and password) will not work..

Well, this is what made me write this blog post.

The question: How can we log in to OBIEE from a 3rd party application automatically, in a custom SSO-like way, even when Lightweight SSO is enabled?

In order to answer this, we take a look at the OBIEE login flow. I mean, we do a technical analysis of the login mechanism. 

I don't mean a code analysis; we use our browser (for instance Chrome -> F12 -> Network tab) to analyze the HTTP requests, HTTP headers and the form data.. We need to identify the required arguments.

Once we do that analysis, we can see that, when Lightweight SSO is enabled, the login page changes. 
Our login page becomes login.jsp. login.jsp gets the user and password info from the user and authenticates it using "login" (without the .jsp suffix). 

So when we check that "login", we see that it is designed to receive some POST request arguments: j_username, j_password and so on. 
So if we can make an HTTP POST request to "login" directly from our 3rd party app, it should work.. 

This way, we will be able to pass the username and password info to OBIEE, and OBIEE will let us in automatically. (even when Lightweight SSO is enabled!)

So, we create a simple html to test this..
Note that the values that you see below are just examples.  -> 

<html>
<body>
<form id='redirectForm' method='POST' action='https://obiee_host:obiee_port/bi-security-login/login'>
<input type='hidden' name='j_username' value='weblogic'/>
<input type='hidden' name='j_password' value='erman'/>
<input type='hidden' name='j_msi' value='none'/>
<input type='hidden' name='j_language' value='en'/>
<input type='hidden' name='j_redirect' value='L2FuYWx5dGljcy9zYXcuZGxsP2JpZWVob21lJnN0YXJ0UGFnZT0xJmhhc2g9RlEyeDZFaGp3cnJHQXNzbmVWOWtSeVVuYmxVQjYyczZMR0JESFEtR3F5ZEoxcXh2bjMyMmxKaUlwU1R4VFIxMA'/>
</form>
<h1><a href="#" onclick="document.getElementById('redirectForm').submit()">GO!!</a></h1>
</body>
</html>

Please note the hidden input names -> j_username, j_password, j_msi, j_language and j_redirect..
j_redirect is the URL that OBIEE will redirect us to after the login process. It is in base64 form. (in this case, it is basically set to -> /analytics/saw.dll?bieehome&startPage=1)
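
If you are curious how to produce that value yourself, something like the following should do the trick. This is just a quick sketch; note that the example value above looks like the URL-safe base64 variant (it contains '-' instead of '+'), hence the tr at the end ->

# Encode the post-login target for j_redirect (URL-safe base64, padding stripped)
echo -n '/analytics/saw.dll?bieehome&startPage=1' | base64 | tr '+/' '-_' | tr -d '='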

So, we open this html with our browser and click GO! Guess what? We find ourselves on the OBIEE home page! (logged in automatically in the backend by posting the user and password info) So it works! 

At the end, we pass this html to the developers of the third party application as a reference, they modify their OBIEE login code, and that's it :) We log in to OBIEE automatically from a 3rd party app, even when Lightweight SSO is enabled.

I'm not finished! :)

If the third party app requires a form and it doesn't like the form of login.jsp (probably because it is doing its work with JavaScript), I mean if the 3rd party app requires a submit button, then we create a wrapper html like the one below and deploy it to our WebLogic (or any web server that we have)..
Want to deploy it to WebLogic? -> Here is the way -> "How To Publish a Static HTML Page To WebLogic Server and Request Through Oracle HTTP Server 11g (Doc ID 1192439.1)" -- Part 1 is enough..

With this action, we actually put a middleman between our 3rd party app and the OBIEE login, and make the 3rd party app post to the OBIEE login through that middleman :) This works too!

So the flow becomes; "3rd party app -> Wrapper html -> OBIEE Login"

<html>
<body>
    <form name="loginform" method='POST' 
        action='/bi-security-login/login' 
        style="visibility:hidden">
    <!-- the values below are left empty on purpose; the 3rd party app fills them before submitting -->
    <input type='hidden' name='j_username' value=''/>
    <input type='hidden' name='j_password' value=''/>
    <input type='hidden' name='j_msi' value=''/>
    <input type='hidden' name='j_language' value=''/>
    <input type='hidden' name='j_redirect' value=''/>
    <input type='submit' value='Login'/>
</form>
</body>
</html>

That is it for today :) I hope this will help you.

Wednesday, September 23, 2020

Upcoming end date of OVM Premier Support. It is time to consider KVM + OLVM (especially for new projects)

This is for those who are considering a virtualization solution for new projects.

Especially for those considering Oracle Virtualization... 

Usually when we say Oracle virtualization, we mean Oracle VM Server, but actually that is in the past. Now we have an alternative to OVM.. It is KVM (Kernel-based Virtual Machine).

KVM is actually an open source virtualization technology that turns Linux into a hypervisor. 

I first used KVM in ODA X7-2 S and M environments. At that time, we had some limitations though.. For instance, there was no capacity-on-demand option for the databases and applications running on guest KVM machines.

However, now we have the CPU pinning / hard partitioning / capacity-on-demand option in KVM!

Note that using hard partitioning to limit Oracle product software licensing still adds some restrictions, such as losing the live migration and scheduling policies available on Oracle Linux Virtualization Manager.
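
Just to illustrate the CPU pinning concept, here is generic libvirt-style pinning. Note that this is not the Oracle-sanctioned OLVM hard partitioning procedure (for licensing purposes, follow Oracle's hard partitioning document for Oracle Linux KVM), and the guest name below is just an example ->

# Pin vCPUs 0 and 1 of a KVM guest to physical cores 0 and 1 (guest name is illustrative)
virsh vcpupin myguest 0 0
virsh vcpupin myguest 1 1
# Verify the pinning
virsh vcpuinfo myguest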


If you want to check out some of my adventures on ODA and KVM, you can read the blog posts pointed to by the following URLs.  :)

Also note that Oracle Linux KVM is the same hypervisor used in Oracle Cloud. You can read the related article at the following URL to get more info: https://www.oracle.com/a/ocom/docs/olvm-datasheet-nov2019.pdf

At the moment, KVM has a support contract advantage too.. KVM is offered under Oracle Linux Virtualization Manager (OLVM). Note that OLVM is the virtualization management platform that can be easily deployed to configure, monitor, and manage KVM environments.

Moreover, KVM is covered by Oracle Linux Premier Limited support. So there is no need to purchase support for the virtualization layer separately.. 

Another thing that motivates us to prefer KVM is the upcoming end date of OVM Premier Support.
As per the Lifetime Support Policy document, the Oracle VM Premier Support period will end in March 2021. After that, users of OVM will need to buy additional extended support. 

As I mentioned, the alternative to OVM is KVM, offered under Oracle Linux Virtualization Manager (OLVM). 

Here are the key benefits of using KVM and OLVM -> 
  • Complete server virtualization and management solution with zero license cost
  • Single software distribution for Oracle Linux OS or Oracle Linux KVM 
  • Speeds application deployment with Oracle Virtual Appliances
  • Ksplice integration to patch kernel, QEMU, and user space libraries with no service interruption
  • Hard Partitioning support enables efficient Oracle application software licensing
  • Full Stack Management with Oracle Enterprise Manager
  • Path to Oracle Cloud Infrastructure with a common hypervisor
So, we advise our customers to use KVM, which is offered under Oracle Linux Premier Limited support, especially for their new virtualization projects. This is actually the purpose of this blog post. The final decision is still yours :)

Wednesday, September 16, 2020

RDBMS / ASM / Exadata - "Smart Rebalance" / Seems like the 15% free space rule (or the 9% free space rule) is becoming history

Yesterday, I published a blog post about the 15% rule. I shared my thoughts on the 15% free space rule, which states that, in order to be on the safe side in case of a cell or disk failure in Exadata/ASM environments, we need to have some free space in the relevant diskgroups. This actually guarantees that the rebalance, which should be done after a disk or cell failure, will be successful. The rule states that at least 15% of a diskgroup should be free. 

I found this magic number, or let's say this magic percentage (15%), a little interesting and felt the need to write and publish a blog post about it.

You can access that blog post via the url below;
http://ermanarslan.blogspot.com/2020/09/asm-grid-my-thougths-on-15-free-disk.html

Yesterday night, I was still curious and checking the documents to learn something new about this subject, and I finally found the thing I was looking for. It was exactly what I expected: "Smart Rebalance"

Smart Rebalance is used with Oracle ASM, and it eliminates the need for reserved free space in Grid Infrastructure 19c and higher when using high redundancy diskgroups.

In other words; no need for free space! If there is not enough space to rebalance at the time of failure, the disk is just offlined! Upon replacement, it is efficiently repopulated from its partner disks automatically! 
This eliminates the need to reserve free space for rebalance when using high redundancy. It provides seamless repair without the risk of out-of-space errors..

Currently there is no internal info about it, but you may visit the following URL to see the Exadata MAA slides.. Slide 58 introduces Smart Rebalance and shows the gradual disappearance of the 15% rule :)


I will give you more information on this topic when I have a little more detail.

Tuesday, September 15, 2020

EBS - Attention! Workflow Mailer & OAuth 2.0 and Office365 - Microsoft / End of Support for Basic Auth - "Deadline has been pushed to the second half of 2021"

Thanks to the community in my forum (Erman Arslan's Oracle Forum), we realized something important, and fortunately, it is still not too late to report this!

Thanks Laurel for pointing it out in the following thread :) -> 


First, Microsoft announced that they would stop supporting Basic Authentication for Exchange Online on October 13, 2020. EAS, POP and IMAP..

But then, they changed the deadline.. That is, Basic Auth is "not" going to be disabled on October 13, 2020. Due to COVID... The deadline has been pushed to the second half of 2021.
So, it seems we still have time. Good news, right? :)


Microsoft says: In response to the COVID-19 crisis and knowing that priorities have changed for many of our customers we have decided to postpone disabling Basic Authentication in Exchange Online for those tenants still actively using it until the second half of 2021. We will provide a more precise date when we have a better understanding of the impact of the situation.

Anyways, when that time comes, Workflow Mailer IMAP with Office 365 basic authentication will not be supported, and probably it will just not work. (Basic authentication will be turned off.)

EBS customers will have to use OAuth 2.0 token based authentication for IMAP.

So, EBS customers who are using Workflow Mailer with Office365 may be in trouble, and I think Microsoft is ready for this -> 

https://developer.microsoft.com/en-us/outlook/blogs/announcing-oauth-2-0-support-for-imap-smtp-client-protocols-in-exchange-online

They say : We’re announcing the availability of OAuth 2.0 authentication for IMAP, SMTP AUTH protocols to Exchange Online mailboxes. If you have an existing application that reads or sends email using one or more of these two protocols, the new OAuth authentication method will enable you to implement secure, modern authentication experiences for your users. This functionality is built on top of Microsoft Identity platform (v2.0) and supports access to email of Microsoft 365 (formerly Office 365) users.

Oracle addresses this situation with the following document:

EBS Workflow Mailer Configuration with OAuth 2.0 Token-Based Authentication for Cloud-Based Email Services (Gmail, Yahoo, Office365, etc) (Doc ID 2650084.1)
Note that, this document is not up-to-date...

We also have a bug record, an Enhancement Request for it. 
Bug 30505419 : WORKFLOW MAILER SUPPORT OF OAUTH2 - GENERIC PLATFORMS

Unlike the document, the enhancement request seems up-to-date. Oracle seems to be working on this subject, as I see some recent updates on the bug record;

*** 09/11/20 08:45 am ***
*** 09/11/20 09:18 am RESPONSE ***

The Enhancement Request is in Internal Review status, meaning it is neither approved nor denied.
However, we currently have no ETA for this. 

In any case, I think the solution/patch will be for EBS 12.2.x.. So, I think upgrading to the latest version (12.2.10) should not be a must.

But still, you need to design and implement your backup solution, because the ATG fix may not be ready before the second half of 2021.. (Actually, I think there is enough time to deliver the fix, but we still need to be prepared.)  
Especially EBS 12.1.3 customers should be careful and ready. 12.1.3 is also subject to a restriction on new patches starting Dec 1, 2021, and a solution for 12.1.3 cannot be guaranteed before the final solution for EBS 12.2 connecting to Office365 is developed.

In order to be on the safe side, customers should just create a local mail server, test it, and be ready to activate it in the second half of 2021.. (just in case)

I will continue to follow this subject and keep you updated.

ASM / GRID -- My Thoughts on the 15% free space rule --- rebalance, imbalance, calculations, bugs and all that

The rule states that, in order to be on the safe side in case of a cell or disk failure in Exadata/ASM environments, we need to have some free space in the relevant diskgroups. This actually guarantees that the rebalance, which should be done after a disk or cell failure, will be successful. The rule states that at least 15% of a diskgroup should be free.

Well, I found this magic number, or let's say this magic percentage (15%), a little interesting, and that's why I want to share my thoughts on it with you.

Normally, we have a metric named USABLE_FILE_MB, as you may already know. It may depend on the version, but normally this metric gives us the safely allocatable size considering the case of a disk failure.. In the old versions, it was reporting the safely allocatable size as a value which can be taken as a reference for being safe even in a cell failure.

In simple logic, we can say that we have no risk, of course, if USABLE_FILE_MB has a positive value and if we think it will stay positive even when we consider potential future allocations.

Moreover, USABLE_FILE_MB is derived by considering REQUIRED_MIRROR_FREE_MB, which is the size required for a rebalance operation to complete in the worst-case scenario.

The formulas are as follows;

Normal Redundancy
USABLE_FILE_MB = (FREE_MB – REQUIRED_MIRROR_FREE_MB) / 2

High Redundancy
USABLE_FILE_MB = (FREE_MB – REQUIRED_MIRROR_FREE_MB) / 3
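
You can check these metrics on your own system by querying V$ASM_DISKGROUP. A minimal sketch, run against the ASM instance ->

# Query the diskgroup capacity metrics from the ASM instance
sqlplus -s / as sysasm <<'EOF'
SELECT name, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
FROM v$asm_diskgroup;
EOF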

If USABLE_FILE_MB is a negative value, then we can directly say that normal redundancy environments are in danger, but in any case we can still check FREE_MB. If the value we see in FREE_MB is bigger than the disk size (assuming the disk sizes are equal; if they are not, FREE_MB should be bigger than the largest disk size), we can still rebalance in case of a disk failure. 

So far so good. These are all related to disk failures. (As I mentioned earlier, we need to check the version to conclude what USABLE_FILE_MB reports to us: the usable file MB even in the case of a disk failure, or even in the case of a cell failure.)

Of course, if we lose a cell and USABLE_FILE_MB considers only disk failures, the situation is different. We need to multiply USABLE_FILE_MB by the count of disks in the cell.

It is independent of the redundancy being normal or high; for instance, if USABLE_FILE_MB is 10 and it is reporting the usable file MB in the case of disk failures, and if we have 12 disks in a cell, then we have to multiply that value 10 by 12. This makes 120, and that's the minimum usable file MB that we need to see in USABLE_FILE_MB in order to be safe even in the case of a cell failure.

At this point and in this context, the following article by Emre Baransel might be nice reading.

https://www.doag.org/formes/pubfiles/8587254/2016-INF-Emre_Baransel-A_Deep_Dive_into_ASM_Redundancy_in_Exadata-Manuskript.pdf 

Up to here, if you noticed, I never mentioned the 15% rule. I explained the subject ignoring this rule, but actually this rule must not be ignored.

Now it is time to explain that rule :)

Well, we first revisit the MOS note named, "Understanding ASM Capacity and Reservation of Free Space in Exadata (Doc ID 1551288.1)".

In that MOS note, we have a script that calculates the reserve space and capacity for disk failure coverage, and it has a reserve factor of 0.15. That's where the 15% rule comes in..

When we examine the script, we can see that it directly multiplies the raw total disk size by 15% and then subtracts that value from the raw total disk size.
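
So the math in that script boils down to something like this. A simplified sketch with an illustrative number (the real script works per diskgroup and also handles the disk/cell failure coverage) ->

TOTAL_RAW_MB=1048576                        # example: 1 TB of raw diskgroup space
RESERVE_MB=$(( TOTAL_RAW_MB * 15 / 100 ))   # the 0.15 reserve factor
echo "reserve: ${RESERVE_MB} MB, usable after reserve: $(( TOTAL_RAW_MB - RESERVE_MB )) MB"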

In my opinion, it shouldn't be that way.. I mean, there shouldn't be a 15% rule, and I think this subject is a little buggy.

Note that, at the moment, we need to consider the 15% rule and we must follow it!

Anyways, if we reserve 15% of the space, are we safe? Well, probably.. But the following bug says that, even if we have 15% reserve space, we may still have problems during rebalance..

Bug 21083850 - ORA-15041 during rebalance despite having free space (Doc ID 21083850.8)

The cause of this bug is probably the imbalance during rebalance -> 

When a disk is force dropped, its partners lose a partner.
As a result, the partners of its partners get more extents relocated to them, causing an imbalance.
This imbalance results in the ORA-15041, because some disks run out of space faster than others.

In the document above, we see that a patch is addressed. However, in another Oracle script, we see a comment like the following -> "Use the new 15% of DG size rule for single disk failure, regardless of redundancy type (Bug 21083850)" 

This makes me think that this subject is buggy :) The 15% rule is there not only to address that specific bug; in my opinion, these kinds of rules exist because of other problems as well.. In this specific case, probably because of imbalance, or let's say, probably due to ASM extents not being distributed properly.

Normally, when we lose a disk, ASM will distribute the mirror extents of that failing/lost disk to the other disks available in the relevant diskgroup (of course, according to the redundancy type).. That comes from the logic of disk mirroring. However, ASM probably distributes these extents unevenly and overloads some disks in some cases, and that's where we get ORA-15041.

This situation can also be explained by those disks being already overloaded even before the rebalance.. So, as you may guess, if ASM uses them aggressively during the rebalance, they get full and the rebalance code returns an error.

Of course, imbalance may be normal in some cases.. For instance, when we have fail groups.. 

That is, when we have a fail group configuration, ASM will have a more difficult job during the rebalance.. I mean, when we have fail groups, ASM will have fewer choices for distributing the mirror extents when a disk is dropped.. Still, I don't think these kinds of causes should be enough to justify such a rule (the 15% rule).

Well, these are my thoughts on this subject... Please feel free to comment and correct me if I'm wrong, and share your own thoughts by commenting on this blog post.

Sunday, September 6, 2020

ACE Virtual Happy Hour - The Great Gathering

A Virtual Happy Hour. An ACE get-together. 111 ACEs & Oracle members in the call. Thanks to Jennifer Nicholson and the Oracle ACE Program for organizing this. Being an Oracle ACE has always been an honor for me, and as I see these valuable people, who are experts in their fields, together, my curiosity for new technologies, my passion for Oracle and my motivation for research continue to increase even today. 
PS: The ACE video and song were great :) It's a good memory.


Friday, September 4, 2020

OBIEE - SSO -- Integrating with a third party login with AD authentication / passing user and pass in URL

Implementing SSO or Windows Native Authentication in OBIEE is something we do frequently.

Basically, we integrate OBIEE with Microsoft Active Directory and obtain centralized password management. 

This is also more secure and easier for the users.. They don't have to remember or manage their OBIEE usernames and passwords, as they already have more important ones, I mean their domain usernames and passwords. 

The gain we make by implementing SSO is actually in convenience, manageability and security. This benefit is provided by all types of SSO configurations. In single password implementations, users log in with their domain usernames and passwords. In Windows native authentication, they don't even log in, as we get the credentials (or let's say the auth info) from the client OS on the fly, in the backend, transparently to the user :) That's been a long sentence :)

Anyways, this is what we do in OBIEE and even in EBS environments. In EBS, we use OID and OAM as well. (Things get complicated there, but that's true :)

So I guess all of us are already familiar with these single sign-on (single password or no password) concepts. 

However, what I want to share in this blog post is something a little different from a standard configuration.. A scenario, a real-life story, a workaround; you name it :)

That is, suppose your customer wants your OBIEE to authenticate the users with AD usernames and passwords, but suppose the customer has a custom web page in front of OBIEE and wants to get the usernames and passwords through that custom web page. The custom web page must authenticate the users and then redirect to OBIEE..

So what do we do?

Well, we implement SSO on the OBIEE side. This is what we need to do in the first place. 

Then we make OBIEE get the user and password through the OBIEE URL (on-the-fly login using URL arguments).. Note that this is the simplest way of doing this work.. Of course, the customer's ability to post the usernames and passwords using any method other than this one will make us change/improve the design of this login flow.
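
For reference, the URL method looks like the following. The host, port and credentials are placeholders; NQUser and NQPassword are the login URL parameters named in the MOS document referenced below ->

# On-the-fly OBIEE login by passing the credentials as URL arguments (placeholders)
curl -k "https://obiee_host:obiee_port/analytics/saw.dll?Dashboard&NQUser=weblogic&NQPassword=erman"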

At this point, we pay attention to the following;

"12.2.1.3 LightWeightSSO is ON by default and NQPwd/User wont work." 

This means the new versions of OBIEE won't let you in with the usernames and passwords supplied through the URL.

So, what do we do? 

If that is a must.. I mean, if the page can't post the username and password information to OBIEE using any method other than the URL method, then we disable Lightweight SSO. 

In other words, if we must use the NQUser and NQPassword login URL parameters, we must disable Lightweight SSO using the WLST disableSingleSignOn command. 
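
A quick sketch of that; the domain home path below is just an assumption, and the exact command syntax may differ per version, so verify it before running ->

# Disable Lightweight SSO with WLST (verify the exact syntax in the MOS note below)
cd $ORACLE_HOME/oracle_common/common/bin
./wlst.sh
# then, inside WLST (the domain path below is an assumption):
# disableSingleSignOn('/u01/app/oracle/config/domains/bi')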

The following document will help us for that;

OBIEE 12c: Using NQUser and NQPassword in URL, Fails to Login When Single Sign-On (SSO) or Lightweight SSO (LWSSO) Is Enabled (Doc ID 2316810.1)

Once we configure the OBIEE side, we tell the customer to make the necessary modifications in the custom web page to make it pass the username and password information to OBIEE during the login process. That is it..

We have reached our goal.. Clients will use the custom web page to enter their AD/domain usernames and passwords, the custom web page will make OBIEE authenticate them in the backend, and the clients will see their BI dashboards without authenticating again.

Before finishing, 2 important reminders ->

Don't forget to implement full-path SSL for the HTTPS communication.
Consider implementing LDAPS for the LDAP traffic between OBIEE and AD.

This is the story of the day :) I hope you will find it useful.

Friday, August 21, 2020

Weblogic - Oracle BI Publisher -- AD authentication - Configuring LDAPs

Recently, we needed to change the authentication protocol used by a BI Publisher environment.. The environment was authenticating its users against Active Directory and it was using LDAP. Well, we needed to make it more secure.. That is, we needed to convert it to LDAPS. (Lightweight Directory Access Protocol over SSL)

It seems there are 2 ways to do that.. Actually, there are 2 ways to configure BI Publisher to use LDAP or LDAPS.

One way is to use BI Publisher's Administration page..

We just click Security Configuration under Security Center, which is accessible through the Administration page. Then we create a local superuser and we use the Authorization region to select our security model. (LDAP in this case)
We can do both the LDAP and LDAPS configuration using this page, and we restart BI Publisher when we are done. (It is needless to say that we must also add the relevant server certificate to the relevant Java keystore.)

Example of the Authorization region:


This method is already documented in "Oracle Fusion Middleware Administrator's Guide for Oracle Business Intelligence Publisher"- Section :  "Configuring the BI Publisher Server to Recognize the LDAP Server"

Anyways, there is another way, and it is through the WebLogic Admin Console.
Actually, this is the method we used for making this environment use LDAPS.

We used this method because, when we checked BI Publisher's admin console, we saw that the configuration under the Authorization region that I mentioned above was just empty.. On the other hand, the environment was using LDAP to authenticate its users.. So, the current LDAP configuration (which was done by someone else earlier) had been done directly through the WebLogic admin console, and that's why we decided to change LDAP to LDAPS directly using the WebLogic console.. 

Here is the action list;

Home >Summary of Security Realms >myrealm >Providers >DefaultAuthenticator


  • Change the host (if required)
  • Change the port to 636 -- the default LDAPS port
  • Select the "SSLEnabled" check box -- we are enabling LDAP over SSL, right..

Go to Summary of Servers > bi_server1 > Configuration > Keystores. (bi_server1 is the name of the BI Publisher managed server.. Yours might be different)

Check the "Java Standard Trust Keystore" and note the value of it.. (We will use that in your keytool import command later.)

Set the proper environment in the shell; 

Example:

JAVA_PATH=/obi/wls/Oracle_BI1/jdk/bin/
KEYTOOL_PATH=/obi/wls/Oracle_BI1/jdk/bin/keytool
KEYSTORE_PATH=/obi/wls/Oracle_BI1/jdk/jre/lib/security/cacerts

Import the required certificate for the LDAPS communication.. (the certificate of the LDAP server -- usually Active Directory.. Note that the customer or AD admin will give that certificate to you..)

/obi/wls/Oracle_BI1/jdk/bin/keytool -import -alias ermanad_2020 -file /tmp/ermanad.cer -trustcacerts -v -keystore /obi/wls/Oracle_BI1/jdk/jre/lib/security/cacerts

Display the imported certificate just in case..

/obi/wls/Oracle_BI1/jdk/bin/keytool -list -v -keystore /obi/wls/Oracle_BI1/jdk/jre/lib/security/cacerts -alias ermanad_2020

Restart the WebLogic services and that's it! :)

GTECH -- Summer School 2020 -- Oracle Database & Cloud & Big Data & EBS - Training For New Graduates!

Once a year, we as GTech provide training for newly graduated engineers.

In this training, we teach SQL, PL/SQL, Oracle Database & Cloud, EBS, OBIEE, Big Data, ETL and more.

This year was the third time that I was the lecturer for "Database and Cloud".

See the following blog posts for 2019 and 2018 Summer Schools ->

https://ermanarslan.blogspot.com/2019/07/gtech-summer-school-2019-oracle.html
https://ermanarslan.blogspot.com/2018/07/summer-school-introduction-to-oracle.html

This year, I explained the Cloud in more detail. I also extended the lessons a little bit by adding an intro to Big Data & NoSQL.. 

The students of the class were so curious about databases, and actually about Oracle in general..

I tried to shed light on important topics like the Oracle Database server architecture, the Oracle Database process architecture, background processes, high availability configurations, Cloud Computing, Big Data, NoSQL databases and so on..

The list of topics covered in the training was as follows;
  • Introduction to RDBMS
  • Introduction to Oracle
  • Architecture (Oracle)
  • Installation (Oracle) & workshop
  • DBA role & DBA tools
  • Cloud Computing
  • Big Data & NoSQL
  • APPS DBA role & EBS System Administration (EBS 12.2)
In order to make the new graduates understand Oracle consultancy better, I also explained how to complete a critical migration project successfully by going through a real-life case.

While explaining these topics, I tried to share real-life stories all the time.. I tried to teach them the basics of Oracle, but I also dived deep when required.

This year, the training was online (due to the pandemic), but the participants still asked lots of good technical questions, and these made our lessons more entertaining :)

This year's training lasted 3 days.

Like every year, we had an exam this year too. I changed the exam a little with the newly added topics. That is, at the end of the training, I gave a written examination to the participants. (this time, 45 questions)

It was a pleasure for me to teach Oracle in GTech Academy. (GTech -- Oracle Platinum)
This has also been a useful activity for the ACE program.

I hope it was useful for these guys..
I also hope I will see them (at least some of them) as successful DBAs, Apps DBAs or Cloud Architects one day :)

Following is a picture of this year's class.. A good memory :)