Tuesday, December 11, 2018

RDBMS -- TLS 1.2 support and issues ORA-29263: HTTP protocol error & ORA-29024: Certificate validation failure

Recently, I dealt with an SSL web service call-related problem.
Developers were trying to call a web service by executing a stored procedure residing in the Oracle Database.
The database version was 11.2.0.3, and the web service calls ended up with the following;

ERROR at line 1:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1369
ORA-29263: HTTP protocol error
ORA-06512: at line 9

We directly applied Patch 13517951 (UTL_HTTP FAILS ACCESSING HTTPS SITE IN 11.2), but the issue remained..

This was actually easy to diagnose. After doing a little research and analyzing the traffic (by getting a tcpdump and analyzing it with Wireshark), we could conclude that the traffic was TLS 1.2..
Oracle Database 11.2.0.3 cannot communicate over TLS 1.2, so we recommended a database upgrade.. -> upgrade to 11.2.0.4 and apply the Oct 2018 PSU.
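
For reference, this is roughly how we captured and inspected the traffic (a minimal sketch; the interface name and the web service host below are just placeholders):

# capture the web service traffic on the database server
tcpdump -i eth0 -s 0 -w /tmp/ws_call.pcap host ws.example.com and port 443
# open the capture in Wireshark and filter on the TLS/SSL handshake;
# the Client Hello / Server Hello packets show the negotiated protocol version (TLS 1.2 in our case)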

The 11.2.0.4 Oct 2018 DB PSU contains MESv415..

Note that MES is short for RSA BSAFE Micro Edition Suite, which is a software development toolkit for building
cryptographic, certificate, and Transport Layer Security (TLS) security technologies into C and C++ applications, devices, and systems. With the release of the Oct 2018 PSU, all supported DB versions use the RSA BSAFE toolkit MESv415 or greater.

After upgrading the database and applying the Oct 2018 PSU, the error changed ..
Now the web service calls were failing with the following;

ERROR at line 1:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1130
ORA-29024: Certificate validation failure
ORA-06512: at line 8

It was obvious that the certificates in the wallet could not be validated..

Still, we wanted to diagnose the issue;

We even got a 10937 trace, but the traffic was looking good.

Then we decided to analyze our wallet and the certificates inside of it..

The wallet should include only the signing (CA) certificates, because during the SSL handshake Oracle checks whether the signing authority is known to it (i.e. whether the certificates of the signing authority were imported into the wallet).

We saw that the last certificate in the certificate chain was a user certificate, not a trusted one. So, it shouldn't have been imported into the wallet as a trusted certificate.

So we removed that user / server / leaf certificate from the wallet and the error disappeared :)
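
If you want to check what is inside the wallet and drop a wrongly imported certificate, orapki can be used; a minimal sketch (the wallet path, password and DN below are just placeholders):

# list the user and trusted certificates in the wallet
orapki wallet display -wallet /u01/app/oracle/wallet
# remove the wrongly imported user/leaf certificate from the trusted certificates
orapki wallet remove -trusted_cert -dn 'CN=ws.example.com,O=Example,C=TR' -wallet /u01/app/oracle/wallet -pwd wallet_password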

Yes..I know that these SSL/TLS-related configurations can be tricky sometimes, so I wanted to share this with you.. 

Friday, December 7, 2018

Oracle Cloud Day 2018 - "Core Banking -- Exadata Cloud at Customer Migration", a real-life story -- our presentation w/ Orhan Eripek..

Yesterday, we (my friend Orhan Eripek and I) attended Oracle Cloud Day 2018 Istanbul as speakers..

Our presentation was about a Core Banking migration project.

It was a real-life story, and it was directly related to the cloud... Our target in this project was to migrate a Core Banking database to a Half Rack Exadata Cloud at Customer machine (ECC).

We had a limited amount of time (30 minutes) to give our presentation.. It was a short period of time to explain this process, but we believe we managed to give a general overview of the work we did, the methods we followed, the things we learned, the facts we discovered, and the gains we achieved by migrating a Core Banking database to Exadata Cloud at Customer (ECC).


Note that it was a cross-platform migration (IBM AIX on Power to Exadata Linux on Intel).

We gave our presentation during a TROUG session (just after lunch) and we were pleased to have a large, mostly technical audience..

Here's a look at some of the images from yesterday  -- just for the memories :)


Sunday, November 25, 2018

Exadata -- Exadata X3 reimaging problem -- biosbootorder

This will be a quick post, because I'm currently on-site and waiting to start a migration operation :)
Still, I couldn't wait to write about this :)

Recently, we needed to reimage an Exadata X3 system with a newer image version. (X3 can be considered quite old.)

We downloaded the Exadata images (18.1.7 and 12.2.1.1.8 versions) and configured our PXE server.

First we tried with 18.1.7...
We booted the compute node using PXE, but the imaging operation failed while validating the biosbootorder.. 

[ERROR][0-0][/opt/oracle.cellos/validations/init.d/biosbootorder- 247][main][247]  
Failed. See logs: /var/log/cellos/validations/biosbootorder

After the failed biosbootorder validation, we got a kernel panic (as a result of reboot).

That is, the boot problems ended up with a kernel panic ->

[ 70.944116] [<ffffffff816a9634>] dump_stack+0x63/0x81 
[ 70.949533] [<ffffffff816a757c>] panic+0xcb/0x21b 
[ 70.954618] [<ffffffff81086560>] do_exit+0xa70/0xa70 
[ 70.959946] [<ffffffff8106c53c>] ? __do_page_fault+0x1cc/0x480 
[ 70.966145] [<ffffffff816b5a6a>] ? page_fault+0xda/0x120 
[ 70.971820] [<ffffffff810865f5>] do_group_exit+0x45/0xb0 
[ 70.977494] [<ffffffff81086674>] SyS_exit_group+0x14/0x20 
[ 70.983284] [<ffffffff816b031a>] system_call_fastpath+0x18/0xd4 
[ 70.989605] Kernel Offset: disabled 
[ 70.993393] Rebooting in 60 seconds.. 

We initially thought that some drivers were missing during boot; they were probably not loaded, so we couldn't boot..

The following document was related to a virtualized environment, but it had the same error stack and kernel panic.

OCI-C - Instance Fails to Boot Post Patching For L1TF Vulnerability ( Doc ID 2448058.1 )

As I already mentioned, we suspected the drivers.. However, the real cause was the initramfs: it could not detect the boot disk using its boot label.
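
From the diagnostics shell, a quick way to see whether the expected boot label is visible at all is something like this (a small sketch; the device name is just an example):

blkid /dev/sda1              # shows the LABEL/UUID of the partition, if any
ls -l /dev/disk/by-label/    # lists the labels the system can currently resolve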

Anyways, we opened an SR and Oracle logged a bug for this.
Bug 28893408 - X3-2 : PXE IMAGING HANGS AND KERNEL PANIC

The solution was as follows;

boot the node with the diag.iso
chroot to /mnt/cell -> chroot /mnt/cell
change the boot device to DbSys1 (in the file named i_am_hd_boot)
install GRUB on /dev/sda -> image_functions_grub2_install /dev/sda /boot force
rebuild the initramfs -> dracut --force

I am not going into details of the actions above right now. But, I will :) in my next posts.

Tuesday, November 20, 2018

Exadata Cloud at Customer -- my experience & interesting stories -- traditional methods vs Oracle Database Cloud Service Console

Today, I want to share my experience on Exadata Cloud at Customer, aka ECC (or ECM :).
I want to share the things I have seen so far... ( I have 3 on-going ECC migrations at the moment)
Rather than going over the benefits of this machine (I already did that) and this Cloud@Customer cloud model, I will concentrate on explaining the database deployment and patching lifecycle.


First, I want you to know the following;
  • Oracle doesn't force us to use TDE for the 11.2.0.4 Databases, which are deployed into the ECC.
  • Oracle recommends us to use TDE for the databases deployed in ECC. 
  • Oracle has Cloud GUIs delivered with ECC. (both for creating instances + creating databases..)
  • Oracle Cloud GUIs in ECC can even patch the databases. (with PSUs and other stuff)
  • Of course, customers prefer to use these GUIs. However, using these ECC machines as traditional Exadata machines is also supported. That is, we can download our RDBMS software and install it into ECC machines manually as well.
  • We can create our databases using dbca (as an alternative to the tools in GUIs) -- We can deploy and patch our databases just like we do in a non-cloud Exadata machine. (at least currently..)
  • ECC software is patched by Oracle. Both ECC and its satellite (OCC) are patched by Oracle in a rolling fashion. (Having RAC instances gives us an advantage here.)
  • There seems to be a new edition of Oracle Database. It is called Extreme Edition, and I have only seen it in ECC machines.
  • We can't reinstall GRID Home in ECC.. What we can do is to patch it.. ( if a reinstall is needed, we create SRs)
  • When we have a problem, we create an SR using the ECC CSI and it is handled by the Cloud Team of Oracle Support.
  • Oracle Homes and patches delivered by the ECC GUIs are a little different than the ones deployed with traditional methods.
  • We see banners of Extreme edition in the headers of sqlplus and similar tools.
  • Keeping the ECC software up-to-date is important, because there are small bugs in the earlier releases. (Bugs like -> expdp cannot run in parallel -- it says: this is not an Enterprise Edition database -- probably because of Extreme Edition-specific info delivered in ECC Oracle Homes.)
So far so good. Let's keep these things in mind and continue reading.

The approach that I follow in ECC projects is simple.

That is; if you deploy a database using Cloud GUI, then continue using Cloud GUI.

I mean, if you create a database (and an Oracle Home) using Oracle Database Cloud Service Console, then patch that database using Cloud Service Console.

But if you install a database home and create a database using the standard approach (download Enterprise Edition software, use dbca etc..), then continue patching using the standard approach.

If you mix these 2 approaches, then you need to make a lot of effort to make things go right.

Yesterday, I was at a customer site and the customer reported that they couldn't run dbca successfully to create their database in ECC.

They actually deployed the Oracle Home using Cloud Console, and then they tried to use dbca to create a database using that home.  ( home is from GUI, database is from dbca)

The error they were getting during the dbca run was the following;


When I checked the dbca logs, I saw that dbca was trying to create the USERS tablespace with TDE (as a result of the encrypt_new_tablespaces parameter.. dbca was setting it to CLOUD_ONLY.. probably the templates were configured that way).

See.. Even the behaviour of dbca is different when it is executed from an Oracle Home deployed via the ECC GUIs.

I fixed the error by customizing the database that dbca would create.. I made dbca skip creating the USERS tablespace, and the error disappeared.

After the database was created, I set the encrypt_new_tablespaces parameter to DDL (as my customer wanted), and then they could create new tablespaces without using TDE.

-- optionally, I could create a master key and leave the parameter as is. (CLOUD_ONLY)
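
For reference, the parameter can be checked and changed like this (a minimal sketch; run it as SYSDBA and adjust the scope/sid to your needs):

sqlplus -s / as sysdba <<'EOF'
show parameter encrypt_new_tablespaces
-- with DDL, TDE is used only when the CREATE TABLESPACE statement itself asks for encryption
alter system set encrypt_new_tablespaces=DDL scope=both sid='*';
EOF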

Another interesting story was during a patch run.

Customer reported that they couldn't patch the database using GUI.

When I checked the GUI, I saw that the patch was listed there, but when I checked the logs of the ECC's patching tool, I saw the wget commands.. However, the links that the wget commands were trying to reach were broken..

The patching tools in ECC get the patches using wget automatically, and those patches are not coming from Oracle Support, they are coming from another inventory (a cloud inventory)

Anyways, as the links were broken, the customer created an SR to the cloud team to have the related patches put where they needed to be.

The customer also wanted to apply a patch the traditional way (opatch auto) to an Oracle Home which was created using the GUI.

Actually, we patched the Oracle Home successfully using opatch auto, but then we encountered the error "ORA-00439: Feature Not Enabled: Real Application Clusters" while starting the database instances.

Note that, we downloaded the patch using the traditional way (from Oracle Support) as well.

"ORA-00439: Feature Not Enabled: Real Application Clusters" is normally encountered when oracle binaries are relinked using rac_off.

On the other hand, it wasn't the cause in this case.

I relinked properly, but the issue remained.

RAC was on! The related library was linked properly, but ORA-00439 remained!
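
By the way, a quick way to check whether the oracle binary is linked with RAC is to look at the libknlopt.a archive (a small sketch; kcsm.o is the object that gets included when the binary is linked rac_on):

cd $ORACLE_HOME/rdbms/lib
ar -t libknlopt.a | grep -c kcsm.o    # 1 -> linked rac_on, 0 -> linked rac_off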

Then I started to analyze the makefile (ins_rdbms.mk), and found an interesting thing there.

In ins_rdbms.mk, there were enterprise_edition tags and extreme_edition tags.. (other tags as well)

When I checked a little bit further, I saw that according to these tags, the linked libraries differ.
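
You can see those tags (make targets) yourself with a simple grep (a small sketch; what you find depends on how the Oracle Home was delivered):

grep -n "_edition" $ORACLE_HOME/rdbms/lib/ins_rdbms.mk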

Then I realized that this patch that we applied was downloaded from Oracle Support.. (there is no extreme_edition there)

As for the solution, I relinked the binary using the enterprise_edition argument and the error disappeared ->

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk enterprise_edition rac_on ioracle

-- we fixed the error, but this home became untrustworthy.. tainted..

So, what is different there? The patch seems different, right? What about the libraries? Yes, there are small differences in the libraries too..

What about the extreme_edition thing? This seems completely new..

So, again -> "if you deploy a database using Cloud GUI, then continue using Cloud GUI. I mean, if you create a database (and an Oracle Home) using Oracle Database Cloud Service Console, then patch that database using Cloud Service Console.
But if you install a database home and create a database using the standard approach (download Enterprise Edition software, use dbca etc..), then continue patching using the standard approach."

That's it :)

Sunday, November 18, 2018

RDBMS -- EBS - Exadata Cloud at Customer (ECC) migration -- ignorable errors during 11.2.0.4 upgrade

This blog post will be in 2 parts.

In the first part, I will give some useful info about "EBS - ECC migrations", and then I will share some ignorable errors that we saw during our EBS database upgrade.

Let's start with the first part;

As a prereq for an Exadata Cloud at Customer (ECC) migration project, we were upgrading the database tier of an EBS R12 instance.

The upgrade was done to align the database of this EBS instance with ECC's minimum database software version requirements. (Currently, ECC requires 11.2.0.4 as the minimum RDBMS version.)

So our plan was to upgrade this EBS's database and then migrate it to ECC using dataguard..

With this migration, we also planned to convert this EBS's database from single instance to RAC.
Note that, the source database version was 11.2.0.3.

Anyways, although it sounds complicated, there are 3 documents to follow for this approach.

The MOS document for the database upgrade -> Interoperability Notes EBS 12.0 and 12.1 with Database 11gR2 (Doc ID 1058763.1)

The MOS document for dataguard switchover -- migration -> Business Continuity for Oracle E-Business Release 12.1 Using Oracle 11g Release 2 Physical Standby Database (Doc ID 1070033.1)

The MOS document for converting to RAC -> Using Oracle Real Application Clusters 11g Release 2 with Oracle E-Business Suite Release 12 (Doc ID 823587.1)

Now, let's check what we saw during the 11.2.0.3 to 11.2.0.4 upgrade (using dbua).

Well.. Although we did everything documented in "Interoperability Notes EBS 12.0 and 12.1 with Database 11gR2 (Doc ID 1058763.1)", during the upgrade we saw unexpected errors like the following;




When we checked the upgrade log (we must check the log to see the details of the failing command), we ended up with the following;

drop procedure sys.drop_aw_elist_all
*
ERROR at line 1:
ORA-04043: object DROP_AW_ELIST_ALL does not exist

create or replace type SYSTEM.LOGMNR$TAB_GG_REC wrapped
*
ERROR at line 1:
ORA-02303: cannot drop or replace a type with type or table dependents 
create or replace type SYSTEM.LOGMNR$COL_GG_REC wrapped
*
ERROR at line 1:
ORA-02303: cannot drop or replace a type with type or table

create or replace type SYSTEM.LOGMNR$SEQ_GG_REC wrapped
*
ERROR at line 1:
ORA-02303: cannot drop or replace a type with type or table
  
create or replace type SYSTEM.LOGMNR$KEY_GG_REC wrapped
*
ERROR at line 1:
ORA-02303: cannot drop or replace a type with type or table

CREATE TYPE SYSTEM.LOGMNR$TAB_GG_RECS AS TABLE OF  SYSTEM.LOGMNR$TAB_GG_REC;
*
ERROR at line 1:
ORA-00955: name is already used by an existing object

Good news -> After some research, we concluded that all of these errors were ignorable.

The ORA-04043 encountered while dropping DROP_AW_ELIST_ALL was ignorable. My comment on this was -> this object seems insecure (maybe there was a SQL/DML injection bug there) and maybe that's why the upgrade was trying to drop it. (ref: http://www.davidlitchfield.com/OLAPDMLInjection.pdf)

But, as far as I can see, this procedure normally comes with patch 9968263.. So if this patch was not applied, then it is normal not to have this procedure inside the database, and the "object does not exist" error is normal as well. So it was just ignorable :)

The ORA-02303 and ORA-00955 errors were encountered for LogMiner-specific objects. These errors were fully addressed/documented in Oracle Support, so they were directly ignorable. This was actually a bug. These objects actually affect GoldenGate; since this customer didn't have GoldenGate, we just ignored them.. However, if you have GoldenGate, then check the following document:

ORA-02303 & ORA-00955 Errors on SYSTEM.LOGMNR$ Types During PSU Updates (Doc ID 2008146.1)
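
If you want to see which of these LogMiner/GoldenGate-related types actually exist in your environment (and their status) before deciding to ignore the errors, a simple dictionary query helps (a small sketch):

sqlplus -s / as sysdba <<'EOF'
col object_name format a35
select owner, object_name, object_type, status
from dba_objects
where object_name like 'LOGMNR$%GG_REC%'
order by object_name;
EOF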

That's it :) Just wanted to share.

Monday, October 22, 2018

Exadata X7 -- Diagnostics iso, M.2 SSD Flash devices and the Secure Erase utility

After a while, I'm here to write about Exadata.

As we are doing lots of migration projects, we are dealing with Exadata most of the time, and actually maybe this is the reason why I'm writing less than before :)

Anyways, this post will be about 3 things actually.

1) The Secure Erase utility, which is used to securely erase all the information on the Exadata servers.

2) The diagnostics iso, which is used to boot the Exadata nodes to diagnose serious problems when no other way exists to analyze the system due to damage.

3) The M.2 SSD devices in Exadata X7, which may be used for system boot and rescue functions.

As I mostly do, I will explain these 3 things by going through a real-life story.

Recently, my team needed to erase an Exadata X7-2 1/4 machine after a POC.

We needed to use the Secure Erase utility and we needed to delete the data using the 3pass method. (Note that there are also crypto and 7pass methods..)

According to the documentation, we needed to download the secure erase ISO and boot the nodes using PXE or USB. (note that, booting an Exadata X7 server using PXE boot is not that easy -- because of UEFI..)

While trying to boot the Exadata X7 cells, we first encountered an error (Invalid signature detected, check secure boot policy in setup)..

This was actually expected behaviour, as the Exadata documentation was lacking the information for PXE booting a UEFI system.

At this point, we actually knew what we needed to do..

The solution was actually documented in another Oracle document.


However, we didn't have the time to implement that. So we just disabled Secure Boot in the BIOS and rebooted the nodes.

Well.. After this move, the cell nodes couldn't boot normally and we found ourselves in the diagnostics iso shell :)

This diagnostics shell was a result of the automatic boot that is done using the diagnostics iso residing on the M.2 flash SSD devices..


Note that, in Exadata X7, we don't have internal USB devices anymore. USB devices were replaced by M.2 flash SSD devices.. So we have 2 M.2 flash devices for recovery purposes in Exadata X7 cells.


Well.. We logged in to the diag shell using root/sos1exadata and found that there is a Secure Eraser utility inside /usr/sbin :)

So we got ourselves our erasing utility without actually doing anything :)

We booted the cells one by one and started deleting the data on them, using secureeraser with the 3pass method ->

/usr/sbin/secureeraser --erase --all --hdd_erasure_method 3pass --flash_erasure_method 3pass

Note that 3pass takes a long time.. (and it directly depends on the sizes of the disks)

So far so good.

We were deleting the data on cells, but what about the Compute nodes?

Compute nodes don't have such a diag shell present, so we needed to boot them with an external USB and execute Secure Eraser from there, as explained in the "Exadata Database Machine Security Guide".

At the end of the day, we saw/learned a few things ->

1) Secure eraser is present in the diag iso that comes with the M.2 devices in Cells.

2) Secure eraser's 3pass erasure method takes a really long time. (2-3 days maybe)

3) Oracle documentation in MOS is lacking the information on how to boot a UEFI system (Exadata X7) with PXE. That's why people keep saying that X7 cannot boot with PXE.. Actually, that's wrong.

4) Each Exadata X7 cell comes with 2 x M.2 SSD flash devices (each 150 GB) for rescue operations. (No USBs anymore.)

RDBMS -- XTTS (+incremental rman backups) -- how to do it when the source tablespaces are read only?

In my previous post, I mentioned that the method explained in the MOS document "11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)" requires the source tablespaces to be in READ-WRITE mode.

The xttdriver.pl script, which is at the core of the XTTS method powered by incremental backups, just checks whether there are any tablespaces in READ ONLY or OFFLINE mode..

What if our source tablespaces are in READ ONLY mode? -> According to that note, the rule is simple -> if a tablespace is read only, use traditional TTS.

Why? Because it is a requirement of the process described in Note 1389592.1.. This requirement is checked inside xttdriver.pl, and that's why we cannot execute this XTTS (+incremental backup) method for read-only tablespaces.

I still don't understand this requirement, because there seems to be no technical impossibility here.
Anyways, in my opinion, Oracle wants us to use this XTTS (+incremental backup) method only when it is really required..

But, there are scenarios where a read only environment is required to be migrated using XTTS (+incremental backup) method.

One of these scenarios is an Active Dataguard environment, and the other one is described in the following sentence ->

During a migration process, a source environment can be read only at t0 (time 0), then be taken into read-write mode at t1 (time 1), and then be taken back to read only at t3 (time 3).

So far so good.

What if we want to use XTTS (+incremental backup) method for read only tablespaces?
Then, my answer is use/try the manual method.

As I already wrote in my previous blog post, XTTS-based conversion is done using the sys.dbms_backup_restore package.

xttdriver.pl uses it, and we can execute it manually too!

Moreover, technically, if we use that manual method, we do *not* need to make the source tablespaces read write.

In the following url, you can see how this is done manually -> https://connor-mcdonald.com/2015/06/06/cross-platform-database-migration/

Ref: https://connor-mcdonald.com

Although I haven't tested it yet, I believe we can migrate our read-only source tablespaces using a manual XTTS (+incremental backup) as described in the blog post above. (**this should be tested)

There is one more question.. What about using an Active Dataguard environment as the source for an XTTS (+incremental backup) based migration?..

Well.. My answer to this question is the same as above. I believe it can be done.. However, it should be tested well, because Oracle clearly states -> "It is not supported to execute this procedure against a standby or snapshot standby databases".. (**this should be tested)

Thursday, September 27, 2018

RDBMS -- XTTS (+incremental backups) from Active Dataguard / not supported! / not working! & how do the XTTS scripts do the endian conversions of incremental backups?

Hi all,

I just want to highlight something important.

That is;
If we want to use the XTTS (Cross Platform Transportable Tablespace) method and reduce its downtime with RMAN incremental backups, our source database can't be a standby.. It also can't be an Active Dataguard environment.

This is because the script at the core of the XTTS method powered by incremental backups just checks whether there are any tablespaces in READ ONLY or OFFLINE mode. Actually, it wants all the tablespaces that are to be migrated to be in READ WRITE mode.

If the script finds a READ ONLY tablespace, it just raises an error ->

RAISE_APPLICATION_ERROR(-20001, 'TABLESPACE(S) IS READONLY OR,
OFFLINE JUST CONVERT, COPY');
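
So, before starting such a migration, it makes sense to check the status of the tablespaces you plan to transport; a small sketch (the tablespace names below are just examples):

sqlplus -s / as sysdba <<'EOF'
select tablespace_name, status
from dba_tablespaces
where tablespace_name in ('TS_DATA','TS_INDEX');
EOF
# anything reported as READ ONLY or OFFLINE here will make xttdriver.pl raise the error above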

I just want to highlight this, as you may be planning to use an Active Dataguard environment as the source database in an XTTS (+incremental backups) based migration project.. As you know, in Active Dataguard we have read-only tablespaces, so this might be an issue for you.

Anyways, I was actually also curious about this READ-WRITE requirement of XTTS (+incremental backups), and yesterday I jumped into the XTTS scripts.

Unfortunately, I couldn't find anything about it there.. I still couldn't answer the question: why? Why does the XTTS (+incremental) method require the source tablespaces to be in READ-WRITE mode?

However, the Perl script named xttdriver.pl just checks it. I couldn't find any clue (no comments in the scripts, no documentation, nothing on the web) about this requirement, but look what I found :)

->

In 12c, RMAN has the capability to convert backups.. In 12c, RMAN can convert a backup even across platforms, and XTTS actually uses this RMAN capability to convert the incremental backups from the source to the target platform..

So if your database is 12c, XTTS (those scripts, I mean) uses the "backup for transport" and "restore from platform" syntax of RMAN to convert your backups.

Of course, if your database is 11g, then those RMAN commands are not available..
So what XTTS does to convert your backups in 11g environments is use the "sys.dbms_backup_restore" package..

XTTS uses it in a form similar to the following to convert the incremental backups to the target platform:

sys.dbms_backup_restore.backupBackupPiece(
bpname => '&&1',
fname => '&&2/xtts_incr_backup',
handle => handle, media => media, comment => comment,
concur => concur, recid => recid, stamp => stamp, check_logical => FALSE,
copyno => 1, deffmt => 0, copy_recid => 0, copy_stamp => 0,
npieces => 1, dest => 0,
pltfrmfr => &&3); --attention here, it gets the platform id 
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line ('ERROR IN CONVERSION ' || SQLERRM);
END ;
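
By the way, the platform IDs and endian formats that the pltfrmfr parameter above refers to can be looked up in v$transportable_platform; for example (a small sketch):

sqlplus -s / as sysdba <<'EOF'
select platform_id, platform_name, endian_format
from v$transportable_platform
order by platform_id;
EOF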

Also, XTTS applies those converted backups to the datafiles using
"sys.dbms_backup_restore.applyDatafileTo", "sys.dbms_backup_restore.restoreSetPiece" and "sys.dbms_backup_restore.restoreBackupPiece".

So, it is still not answered why XTTS (+incremental backups) needs the source tablespaces to be in READ-WRITE mode, but it is certain that what the XTTS method does is not magic :)

I mean, the XTTS scripts just do a very good orchestration.. The Perl scripts used in XTTS don't do the conversion using Perl capabilities. (Note that endian conversion could also be done with Perl functions inside Perl..)
That is actually a good thing, though. I mean, if the XTTS scripts used Perl itself to convert the files, it would be more complicated, right?

Anyways, this made me think that this XTTS-related conversion could even be done manually, by executing the necessary RMAN commands, making the necessary dbms_backup_restore calls and using exp/imp.. However, it would be a little bit complex, and there would be a support issue with that :)

Well.. That's it.. I just wanted to share this little piece of information, as I found it interesting.

One more thing, before finishing this :) -> 

I must admit that RMAN's convert capability in 12c seems very handy.. So being on 12c is not only good because it is an up-to-date release (fixed bugs etc..), but also because it eases the migration approaches.

One last thing; the XTTS method doesn't support compressed backupsets. So the backups used in XTTS must not be compressed backups.. (else you get -> ORA-19994, "cross-platform backup of compressed backups to different endianess is not supported")

I will revisit this blog post if I find the answer to the question -> why does XTTS (+incremental) require the source tablespaces to be in READ-WRITE mode? Why is there such a restriction? You too.. if you have an idea, please comment.

Sunday, September 16, 2018

EBS R12 (12.1) -- interesting behaviour of adpatch -- HOTPATCH Error -> "You must be in Maintenance Mode to apply patches"

There is an interesting behaviour of adpatch, that I wanted to share with you.
This behaviour of adpatch was observed in an EBS 12.1 environment, during an attempt for hot-patching.
What I mean by this interesting behaviour is actually the exception that adpatch throws during an ordinary hotpatch session. I mean the error that adpatch returned -> "You must be in Maintenance Mode to apply patches"..

As you already know, in EBS 12.1 we can apply patches without enabling maintenance mode.
All we have to do is take the risk :) and execute the adpatch command with the options=hotpatch argument.
This is a very clear thing that you already know. But what if we try to apply a regular patch (non-hotpatch) and fail just before applying our hotpatch?

As you may guess, adpatch will ask us the question "Do you wish to continue with your previous AutoPatch session [Yes] ?".
So if we answer Yes, and if our previous patch attempt wasn't a hotpatch (I mean, if the previous patch was applied without the options=hotpatch argument), then the "options=hotpatch" will confuse adpatch.

At this point, adpatch will (in effect) say: "you are trying to apply a patch with options=hotpatch, but you didn't use options=hotpatch in your previous patching attempt. As you wanted to continue with your previous AutoPatch session, I will take the value of the options argument from your previous patching attempt."

Just after saying that, adpatch will check the previous patching attempt and see that the command used there was plain "adpatch" (the options argument wasn't specified)..
However, now you are supplying "options" as an argument..

At this point, adpatch will replace your options argument with "NoOptionsSpecified". This is because you didn't use the options argument in your previous patching attempt/session.
So, the adpatch command effectively becomes "adpatch NoOptionsSpecified".. Weird, right? :) but true.. And I think this is a bug.. adpatch should handle this situation properly, but unfortunately it is not able to do so.. Anyways, I won't go into the details..

Then, adpatch will try to apply the patch in question, and it will see the NoOptionsSpecified.

Then guess what? :)

adpatch will report a warning -> "Ignoring unrecognized option: "NoOptionsSpecified"."

So, it will ignore the NoOptionsSpecified argument (options=hotpatch was already replaced before), and it will stop and say -> "You must be in Maintenance Mode to apply patches. You can use the AD Administration Utility to set Maintenance Mode."

What is the lesson learned here? :)
-> after a failed regular (non-hotpatch) adpatch session, don't say YES to the question ("Do you wish to continue with your previous AutoPatch session") if you want to apply a hotpatch right after it.

Here is a demo for you ->

[applr12@ermanappsrv  17603319]$ adpatch options=hotpatch
Your previous AutoPatch session did not run to completion.
Do you wish to continue with your previous AutoPatch session [Yes] ?
AutoPatch warning:
The 'options' command-line argument was not specified originally,
but is now set to:
"hotpatch"
AutoPatch will use the original value for 'options'.
AutoPatch warning:
Ignoring unrecognized option: "NoOptionsSpecified".

AutoPatch error:
You must be in Maintenance Mode to apply patches.
You can use the AD Administration Utility to set Maintenance Mode.
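
For completeness, here is a sketch of the two ways out of this situation (the adsetmmd.sql path below is the standard AD script location; verify it and the exact flow in your own environment, and note that apps_password is just a placeholder):

# option 1: do NOT continue the previous session; answer "No" and start a clean hotpatch run
adpatch options=hotpatch

# option 2: enable maintenance mode first, apply the patch normally, then disable it again
sqlplus apps/apps_password @$AD_TOP/patch/115/sql/adsetmmd.sql ENABLE
adpatch
sqlplus apps/apps_password @$AD_TOP/patch/115/sql/adsetmmd.sql DISABLE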

Tuesday, August 14, 2018

EBS R12 -- REQAPPRV ORA-24033 error after 12C DB upgrade /rulesets & queues

Encountered ORA-24033 in an EBS 12.1.3 environment.
Actually, this error started to be produced in Workflow just after upgrading the database of this environment from 11gR2 to 12cR1.

The database upgrade (running dbua and other stuff) was done by a different company, so we were not able to check whether it was done properly..
However, we were the ones who needed to solve this issue when it appeared :)

Anyways, the functional team encountered this error while checking the workflows in Workflow Administrator Web Applications -> Status Monitor, and reported it as follows;


ORA-24033 was basically telling us that there was a queue-subscriber problem in the environment, so we started working on the queues, subscribers and rulesets.

The analysis showed that we had 1 ruleset and 1 rule missing in this environment..

select * from dba_objects
where object_name like 'WF_DEFERRED_QUEUE%';

The following output was produced in a reference environment, on which workflow REQAPPRV was running without any problems.


The following output, on the other hand, was produced in the problematic environment.


As seen above, we had 1 ruleset named WF_DEFERRED_QUEUE_M$1 and 1 rule named WF_DEFERRED_QUEUE_M$1 missing in this problematic environment..

In addition to that, WF_DEFERRED related rulesets were invalid in this problematic environment.

In order to create (and validate) these rulesets, we followed 2 MOS documents and executed our action plan accordingly.

Fixing Invalid Workflow Rule Sets such as WF_DEFERRED_R and Related Errors on Workflow Queues:ORA-24033 (Doc ID 337294.1)
Contracts Clause Pending Approval with Error in Workflow ORA-25455 ORA-25447 ORA-00911 invalid character (Doc ID 1538730.1)

So what we executed in this context was as follows;

declare
l_wf_schema varchar2(200);
lagent sys.aq$_agent;
l_new_queue varchar2(30);

begin
l_wf_schema := wf_core.translate('WF_SCHEMA');
l_new_queue := l_wf_schema||'.WF_DEFERRED';
lagent := sys.aq$_agent('WF_DEFERRED',null,0);
dbms_aqadm.remove_subscriber(queue_name=>l_new_queue, subscriber=>lagent);
end;
/
commit;

declare
l_wf_schema varchar2(200);
lagent sys.aq$_agent;
l_new_queue varchar2(30);

begin
l_wf_schema := wf_core.translate('WF_SCHEMA');
l_new_queue := l_wf_schema||'.WF_DEFERRED';
lagent := sys.aq$_agent('WF_DEFERRED',null,0);
dbms_aqadm.add_subscriber(queue_name=>l_new_queue, subscriber=>lagent,rule=>'1=1');
end;
/
commit;

declare

lagent sys.aq$_agent;
begin
lagent := sys.aq$_agent('APPS','',0);
dbms_aqadm.add_subscriber(queue_name=>'APPLSYS.WF_DEFERRED_QUEUE_M',
subscriber=>lagent,
rule=>'CORRID like '''||'APPS'||'%''');
end;
/

So what we did was to;

Remove and add back the subscriber/rules to the WF_DEFERRED queue 
+
Add the subscriber and rule back into the WF_DEFERRED_QUEUE_M queue. (If needed, we could remove the subscriber before adding it.)

By taking these actions, the ruleset named WF_DEFERRED_QUEUE_M$1 and the rule named WF_DEFERRED_QUEUE_M$1 were automatically created, and this actually fixed the ORA-24033 error in REQAPPRV :)
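
As a final check, you can verify that the missing ruleset and rule are now present and valid with a dictionary query similar to the one we used during the analysis (a small sketch):

sqlplus -s / as sysdba <<'EOF'
col object_name format a35
select owner, object_name, object_type, status
from dba_objects
where object_name like 'WF_DEFERRED%'
and object_type in ('RULE','RULE SET')
order by object_name;
EOF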