Saturday, May 31, 2025

Creating Database Domain on SuperCluster and Installing Cluster Database with OEDA

Today, we will take a look at the Supercluster side of things. That is, setting up a new Database Domain on Oracle Super Cluster M8 and performing a Cluster Database installation using the Oracle Exadata Deployment Assistant (OEDA) tool.  

SuperCluster has already reached end-of-support status, but we still see it hosting critical environments. This won't last forever, of course; SuperCluster customers will probably replace their machines with PCA(s) and Exadata(s). Still, out of respect for what SuperCluster has contributed so far, today's blog post is dedicated to it.



This isn't just another generic guide; I'm going to walk through the steps systematically, highlighting critical details, especially around configuring the infrastructure. I will also point out the steps you absolutely need to skip. Consider this your high-level go-to reference for similar installations.




1. Creating a New Oracle Database Domain via the I/O Domains Tab (we do this on both nodes)

First things first, let's get our new Database Domain up and running on the Super Cluster.
Open the Super Cluster Virtual Assistant screen.


Navigate to the I/O Domains tab on the navigation panel.
Click the Add button to create a new domain.
Input all the necessary parameters for each domain, including CPU, memory, and network settings.
 
 
2. Database Configuration with OEDA

Now that our domains are ready, let's get OEDA involved. We know OEDA from the Exadata environments, but we see it in Super Cluster as well. 

2.1. OEDA Preparations

OEDA helps you with the prerequisites too.
Launch the Oracle Exadata Deployment Assistant (OEDA) tool.
Select the two newly created database domains and perform the JOC File Export operation. This action will generate an XML configuration file containing all the domain-related information.
 
2.2. Obtaining DNS and Installation Files

Refer to the installation template generated by OEDA:
APPENDIX A: DNS requirements
APPENDIX B: Files to be used for installation
Prepare these files and place them in the appropriate directories.

2.3. Placing Installation Files

Keep your OEDA directory structure tidy: copy the installation files specified in APPENDIX B into the WorkDir folder within your OEDA directory structure.
 
2.4. SSH Requirement

This is a crucial step.
Since we're installing on SuperCluster, passwordless SSH connectivity must be configured over the ZFS rpool for both database domains.
Both Grid and Database software will be installed directly on ZFS.
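A minimal sketch of that prerequisite, with placeholder hostnames (zfs-head1, zfs-head2) standing in for your ZFS storage heads; adjust users and hostnames to your environment:

```shell
# Generate a key pair if one does not already exist (run on each DB domain)
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa

# Push the public key to each ZFS head and verify passwordless login.
# Hostnames below are placeholders for your environment.
for host in zfs-head1 zfs-head2; do
  ssh-copy-id root@"$host"
  ssh root@"$host" hostname   # should return without a password prompt
done
```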
 
3. OEDA Installation Commands

Once everything is set up, it's time to run the OEDA commands on the respective domains:

The following command lists all the installation steps:

install.sh -cf xml_file -l (lowercase letter l, not the number 1)

The following command validates the configuration:

install.sh -cf xml_file -s 1 (number 1)

If the validation is successful, the following steps are executed sequentially:

install.sh -cf xml_file -s 2
install.sh -cf xml_file -s 3 
install.sh -cf xml_file -s 4
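The sequential steps above can be wrapped in a small script so that a failed step is never silently skipped. A minimal sketch (install.sh and config.xml stand in for your actual OEDA WorkDir contents; this wrapper is my own, not part of OEDA):

```shell
#!/bin/sh
# Run OEDA steps 2..4 in order; stop at the first failure so no step is skipped.
# ./install.sh and config.xml are placeholders for your OEDA WorkDir contents.
run_steps() {
  for step in 2 3 4; do
    ./install.sh -cf config.xml -s "$step" || {
      echo "Step $step failed, stopping." >&2
      return 1
    }
  done
}

# run_steps   # invoke once step 1 (validation) has succeeded
```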

4. Steps That Must NOT Be Executed

IMPORTANT: Since there are already other database domains running on the system, the following steps MUST NOT be executed. Failing to skip them can lead to data loss or system instability for the existing domains:

Step 5: Calibrate Cells
Step 6: Create Cell Disks
Step 17: Resecure Machine

5. Installation Step List (Overview)

Here’s a quick overview of the OEDA installation steps:

Validate Configuration File

Setup Required Files

Create Users

Setup Cell Connectivity

Calibrate Cells (SKIP THIS!)
Create Cell Disks (SKIP THIS!)

Create Grid Disks

Install Cluster Software

Initialize Cluster Software

Install Database Software

Relink Database with RDS

Create ASM Diskgroups

Create Databases

Apply Security Fixes

Install Exachk

Create Installation Summary

Resecure Machine (SKIP THIS!)

6. Completing the Installation

Once you’ve followed all the steps above, the installation of the new database environment (Grid/RAC + RDBMS) in your Super Cluster environment should be complete. Always remember to perform system tests and verify access to finalize the installation.

7. Known Issues
 
Before starting the OEDA installation, since the installation will be on the Super Cluster IO Database Domain Global zone, passwordless SSH settings must be configured between the ZFS storage and the IO Domains. 

The /u01 directory, where the installation will take place, resides on ZFS.

During OEDA installation, if there are other IO database domains on the Super Cluster system, it's critically important not to run the OEDA Create Cell Disk step. Otherwise, other IO domains will be affected, potentially leading to data loss.
 
Before the Grid installation, passwordless SSH access must be configured between the two nodes for the users under which the Grid and Oracle software will be installed.

That's all for today. I hope this walkthrough helps you navigate your Super Cluster installations with more confidence. Happy super clustering! :)

Friday, May 16, 2025

ODA -- odacli command Issue after implementing SSL: A Real SR Process in the Shadow of Missing Steps -- Lessons Learned & Takeaways

Enhancing security in Oracle Database Appliance (ODA) environments through SSL (Secure Sockets Layer) configurations can ripple across various system components. Changing certificates and transforming the SSL configuration to a more secure one (with more secure, trusted certificates) can be a little tricky. However, the path to resolving issues encountered during these processes isn't always found in the documentation.

In this post, I will share a real Oracle Service Request (SR) journey around this subject. I will try to share both the technical side of things and those undocumented steps we had to follow.

The Symptom: Silence from odacli

After implementing the SSL configuration (renewing the default SSL certificates of the DCS agent and DCS controller with the customer's certificates) on ODA, we hit a wall: the odacli commands simply refused to work. For instance, when we tried to run odacli list-vms, we got the following cryptic message:

DCS-12015: Could not find the user credentials in the DCS agent wallet. Could not find credential for key:xxxx

This clearly pointed to a problem with the DCS Agent wallet lacking the necessary user credentials. Despite following the configuration guides, odacli failed, and the DCS Agent felt completely out of reach.

Initial Moves: Sticking to the Script (Official Oracle Docs)

Oracle's official documentation laid out a seemingly straightforward path:

Configure SSL settings within the dcs yml file(s).
Restart DCS.
Update CLI certificates and dcscli configuration files.

We did all of this. Every step was executed properly. Yet the problem persisted; odacli continued to encounter errors.

The Real Culprit: A Missing Step, An Undocumented Must-Do

Despite the seemingly correct configurations, our back-and-forth with the Oracle support engineer through the SR revealed a critical piece of the puzzle – a step absent from any official documentation:

We get the ODACLIMTLSPASSWORD by running the following command:

/u01/app/19.23.0.0/grid/bin/mkstore -wrl /opt/oracle/dcs/dcscli/dcscli_wallet -viewEntry DCSCLI_CREDENTIAL_MAP@#3#@ODACLIMTLSPASSWORD

We take the password from the output of the command above and use it to change the password of the /opt/oracle/dcs/dcscli/dcs-ca-certs custom keystore. (Note that we use the password associated with DCSCLI_CREDENTIAL_MAP.)

/opt/oracle/dcs/java/1.8.0_411/bin/keytool -storepasswd -keystore /opt/oracle/dcs/dcscli/dcs-ca-certs

We update the following two configuration files with the ODACLIMTLSPASSWORD entry:

/opt/oracle/dcs/dcscli/dcscli.conf and /opt/oracle/dcs/dcscli/dcscli-adm.conf

Both must contain the following line:

TrustStorePasswordKey=ODACLIMTLSPASSWORD

So, in effect, we map the wallet and keystore passwords to each other using the ODACLIMTLSPASSWORD.

Skip these, and even with a perfectly configured agent, odacli commands will fail because they can't access the necessary credentials.
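Putting the undocumented sequence together, here is a sketch of the whole fix (the paths are the ones from our SR; your grid home version and bundled Java path will differ per environment):

```shell
# 1) Read ODACLIMTLSPASSWORD from the dcscli wallet
#    (note the version-specific grid home path)
/u01/app/19.23.0.0/grid/bin/mkstore \
  -wrl /opt/oracle/dcs/dcscli/dcscli_wallet \
  -viewEntry DCSCLI_CREDENTIAL_MAP@#3#@ODACLIMTLSPASSWORD

# 2) Set the dcs-ca-certs keystore password to the value obtained above
/opt/oracle/dcs/java/1.8.0_411/bin/keytool -storepasswd \
  -keystore /opt/oracle/dcs/dcscli/dcs-ca-certs

# 3) Both CLI conf files must map the truststore password to the wallet key:
#      TrustStorePasswordKey=ODACLIMTLSPASSWORD
grep TrustStorePasswordKey /opt/oracle/dcs/dcscli/dcscli.conf \
  /opt/oracle/dcs/dcscli/dcscli-adm.conf
```

After this, restart the Agent and CLI services as described below.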

Live Intervention and Breakthrough

During a screen-sharing session with the Oracle engineers via Zoom, we went through the following:
Re-verified and, where needed, reconfigured the dcs yml file(s).
Ensured the wallet entry was correctly added.
Executed the crucial mkstore and dcscli commands (above).
Restarted both the Agent and CLI services.

After these, commands like odacli list-jobs and odacli list-vms started working flawlessly. 

This SR journey left us with some significant takeaways:

"Official documentation may not always be the full story." Some critical steps, like the mkstore credential mapping, might only surface through the SR process itself.

"Configuration details demand absolute precision." File names, paths, and alias definitions in Oracle configurations must be an exact match. Even a minor deviation during the adaptation of Oracle's example configurations to your environment can bring the system down.

"Configuration Files are as Crucial as Logs in Support Requests". Attaching the actual configuration files to your SR significantly accelerates the troubleshooting process for Oracle engineers.

Lessons Learned:
  • Documentation Gaps: Document the steps learned from SRs in the internal technical notes.
  • The processes behind enhancing security in Oracle environments may extend beyond the confines of official documentation. This experience wasn't just about resolving a technical problem; it was a valuable lesson in enterprise knowledge management. If you find yourself facing a similar situation, remember to explore beyond the documented steps – and make sure those learnings from SRs find their way into your internal knowledge base.

Wednesday, May 7, 2025

RAC -- Importance of pingtarget in virtualized environments & DCS-10001:Internal error in ODA DB System Creation

Recently, we struggled with an issue in a mission-critical environment. The issue was VIPs relocating; it started all of a sudden, and diagnostics indicated some kind of network problem.

The issue came down to failed pings. Oracle's pingtarget mechanism was on the stage and, for justified reasons, was causing the VIPs to fail over to the secondary node of the RAC.

Some background information about pingtarget: delivered with 12c (12.1.0.2), it is useful and relevant in virtualized environments. It exists to detect, and take action on, network failures that are not recognized inside the guest VMs. It concerns the public network only, since private networks already have their own carefully designed heartbeat mechanisms. So basically, if the target IP(s) cannot be pinged from a RAC node, or if there is a significant delay in those pings, the VIPs are failed over to the secondary node(s). The parameter is set via the srvctl modify nodeapps -pingtarget command.
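For illustration, setting two targets could look like the sketch below. The IP addresses are placeholders; use the addresses of stable, ping-able devices on your public network:

```shell
# Hypothetical IPs -- replace with the addresses of stable public-network
# devices (e.g. the core switch and the gateway).
srvctl modify nodeapps -pingtarget "192.168.10.1,192.168.10.2"

# Verify the setting (ping targets are shown in the nodeapps configuration)
srvctl config nodeapps
```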

Well.. This is a feature developed with the logic that "if the relevant node cannot reach the ping targets, then there is a network problem between this node and the public network, namely the clients, and this means, the clients cannot access the DBs on this node, and if so let's failover the VIPs and save the situation."

It seems innocent since it has nothing to do with the interconnect, but it is actually vital: VIP transfers happen according to this routine.

In our case, a switch problem caused everything. The default gateway was set to the firewall's IP address, and the firewall's responses to pings were sometimes erratic.

We were lucky that the pingtarget parameter can be set to more than one IP (fault tolerance), and that saved the day.

But here is an important thing to note: we should not set the ping target to IPs that go against the logic of this feature. The ping target should be set to the IP addresses of physical, stable devices that provide the connection to the outside world and that will reliably respond to ping.

If more than one IP is to be given, those IP addresses must belong to devices that are directly related to the public network connections.

Also, a final note on this subject: when you set this parameter to more than one IP, there may be Oracle routines that cannot handle it. Of course, I am not talking about the DB or GI themselves; for example, we faced this in an ODA DB System creation. The DB System creation could not continue while the ping target was set to more than one IP address. We had to temporarily set the parameter to a single IP address, and then set it back to multiple IP addresses when the DB System creation finished.

Well, the following is the error we got:

[Grid stack creation] - DCS-10001:Internal error encountered: Failed to set ping target on public network.\\\",\\\"taskName\\\":\\\"Grid stack

This error can be encountered due to an incorrect network gateway used in the DB System creation (we specify it in the DB System creation GUI and may change it in the JSON configuration file), but it can also be encountered if you specify multiple IP addresses as ping targets. We faced the latter, and temporarily set the ping target to a single address (the default gateway) to get past the issue during ODA DB System creation.
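The workaround we used can be sketched as follows. The IP addresses and the JSON file name are placeholders, and the create-dbsystem invocation follows standard odacli usage; adapt all of it to your environment:

```shell
# 1) Temporarily fall back to a single ping target (the default gateway here)
srvctl modify nodeapps -pingtarget "192.168.10.1"

# 2) Create the DB system (JSON payload as prepared via the creation GUI/json)
odacli create-dbsystem -p dbsystem.json

# 3) Once creation finishes, restore the multi-IP ping target setting
srvctl modify nodeapps -pingtarget "192.168.10.1,192.168.10.2"
```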

I hope this blog post will be helpful in informing you on the subject and will save you time when dealing with the related ODA error.

Monday, April 28, 2025

Oracle Upgrade Support Entitlement - "A Real Life Case" for EBS customers upgrading databases from 11g to 19C

Hello everyone, I hope you are doing well. Today, I want to share a summary of a recent discussion on my forum about upgrading from version 11.2.0.4 to 19c, specifically in the context of an EBS R12.1.3 system. Here we will see the upgrade support entitlement in action.

Well, let's dive into the case;

The upgrade path in mind was to migrate the 11.2.0.4 database from an Oracle Linux 5.4 server to an Oracle Linux 8 server, and then upgrade it to 19c there.

So there was a requirement for an EBS 11.2.0.4 database running on Oracle Linux 8, and that raised questions about certification.

The Certification tab on Oracle Support said: 11.2.0.4 is certified with Oracle Linux 8, and EBS 12.1.3 is certified with 11.2.0.4.

However, when we checked the Oracle EBS 12.1.3 and Oracle Database 11.2.0.4 certification together, as a bundle, the Certification tab on Oracle Support didn't directly say anything about Oracle Linux 8 certification.

With this incomplete (or confusing) information, one might reason as follows:

Linux 8 is certified with Oracle Database 11.2.0.4 .
Linux 8 is certified with Oracle EBS 12.1.3.

This would make: Linux 8 is certified with Oracle EBS 12.1.3 (with an 11.2.0.4 database).
So a configuration like EBS 12.1.3 with Oracle Database 11.2.0.4 should run properly on Oracle Linux 8, for sure. Right?

But! When we checked the certification matrix for the Oracle Applications (aka EBS) R12.1.3 and Oracle 11.2.0.4 database combination, we didn't see Oracle/RH Linux 8 listed. So maybe Oracle Linux 8 was not certified with Oracle Database 11.2.0.4 in the EBS 12.1.3 context.

MOS note named Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86-64 (Doc ID 761566.1) was a reference. 

But that also created confusion, because there was nothing there about such a restriction.

Additional ref: https://blogs.oracle.com/ebstech/post/ebs-1213-migration-to-oracle-linux-8-and-red-hat-enterprise-linux-8-now-certified

Another ref: Requirements for Installing Oracle Database/Client 11.2.0.4 on OL8 or RHEL8 64-bit (x86-64) (Doc ID 2988626.1)

The only thing we saw there was: "Customers installing E-Business Suite 12.1.1 on the above operating systems using Rapid Install must upgrade the Oracle Database to 11gR2 (11.2.0.3 or higher for OL/RHEL 6, 11.2.0.4 for OL/RHEL 7 and SLES 12), 12c (12.1.0.1 or higher for OL/RHEL 6, 12.1.0.2 for OL/RHEL 7 and SLES 12) or 19c (for Oracle Linux/RHEL 7 and 8 and SLES 12) ...."

But this might be misleading and may not be up-to-date information, because each of those products, on its own, was certified with Oracle Linux 8.

Anyways;

Here is important information;

EBS 12.1.3 and Oracle Database 11.2.0.4 is a certified configuration (at least for running it for some time before upgrading to 19c).

But! In order to make this work, you need to have the Upgrade Support Entitlement. That's the rule. And one of the justifications for this is being able to download the patches that make this configuration work.

An example is relinking: in order to fix the relink errors, you need a patch (see Requirements for Installing Oracle Database/Client 11.2.0.4 on OL8 or RHEL8 64-bit (x86-64) (Doc ID 2988626.1)), but you can download that patch only if you have the Upgrade Support Entitlement.

Interesting fact.

This is the important point and the main purpose of this blog post, so I hope I have cleared up the doubts here. This came out of discussions on my Oracle forum; that's the benefit of it. Of course, big thanks to my followers.

One last important note: Oracle Grid Infrastructure 11.2.0.4 is not supported on Oracle Linux 8. To install Oracle RAC 11g Release 2 (11.2.0.4) on Oracle Linux 8, first install Oracle Grid Infrastructure 19c and then install Oracle RAC 11g Release 2 (11.2.0.4).

Friday, April 25, 2025

Erman Arslan's Oracle Forum / Until 2025 May - "Oracle Q & A Series"

Empower yourself with knowledge! 

Erman Arslan's Oracle Blog offers a vibrant forum where you can tap into a wealth of experience. Get expert guidance and connect with a supportive community. Click the banner proclaiming "Erman Arslan's Oracle Forum is available now!" Dive into the conversation and ask your burning questions.

-- or just use the direct link: http://erman-arslan-s-oracle-forum.124.s1.nabble.com

A testament to its vibrancy: over 2,000 questions have been posed, sparking nearly 10,000 insightful comments. Explore the latest discussions and see what valuable knowledge awaits!

Oracle EBS, Cloud, Exadata, ODA, KVM, Oracle Database, OS and all that.

Supporting the Oracle users around the world. Let's check what we have in 2025.

adop cutover by big

Query on interoperability patches from 11.2.0.4 to 19C by prabhunoule

Can't access Oracle EBS r12.2.4 by latifa

Report failing with ORA-19011: Character string buffer too small by VinodN

Nfs mount using blockvolume oci by satish

Mount block volume to dbcs server by satish

GSS error R12 by satish

ADOP Prepare Phase Fails with FATAL ERROR by Rabia

dbms_stats.gather_table_stats by Laurel

Integrate Microsoft Active Directory with Oracle Forms 12c by kvmishra

ADOP Patching Prepare phase has failed on Disaster Recovery site by jayr

Migrating EBS 12.2 to different DC with VMware cluster. by Firo

Prepare phase failing - ICM Issue by VinodN

WF notifications are blocked by big

Finding dropped users by ZT

Request taking long time by Samia

Flashback with EBS by VinodN

MICR fonts location in Oracle 12.2 by madhun17_ebs

backup LAN by Roshan

how to create short url for Oracle EBS R12.2 by Cherish312

Loading event to database connection issue rac by satish

DBMS redefinition by Roshan

db connection error after APPS password change by VinodN

Online Patch Enabling Patch Is Failing on ADZDPREP.sql by Mansoor8810

Oracle EBS R12.2 VM Server migration by kishor_sinha@yahoo.c...

ZD_EDITION_NAME SET2 not getting generated after patch cycles by sandy_fond

ORA-46655: no valid keys in the file from which keys are to be imported by Rabia

EBS 12.2 Application Patching by Firo

Migration to IAAS Cloud by Firo

after httpd.conf edited by big

ebs12.2.12 application go down by raiq1

enteprise manager cells by Roshan

.CEF and .FIN / oracle flexcube by pamqq

On Line help URL by big

Monday, April 21, 2025

OHS -- Unable to initialize SSL environment, nzos call nzosSetCredential returned 28791 OHS:2171 NZ Library Error: Unknown error

Today, I'm going to share a rather annoying OHS error and its solution. Sometimes these kinds of errors can really make you spend lots of time diagnosing, right? Luckily, we've figured out the fix and wanted to share it with you. Maybe it'll save someone some time.

If you've encountered the following error in your OHS log file when trying to start OHS, you're not alone!

Error:
Unable to initialize SSL environment, nzos call nzosSetCredential returned 28791 OHS:2171 NZ Library Error: Unknown error

I will assume that you already created your wallet and imported your certificate(s).
I mean, you have already properly executed the sequence of commands exemplified below, but you are still getting OHS:2171 NZ Library Error: Unknown Error.

orapki wallet create -wallet . -auto_login_only
orapki wallet add -wallet . -dn 'CN=BLABLA,OU=FOR TESTING ONLY,O=FOR TESTING ONLY' -keysize 2048 -self_signed -validity 3650 -auto_login_only

Ref: 12c: How to Recreate the Default Wallet that has Expired from Oracle HTTP Server (Doc ID 2729766.1)

Actually, this issue is generic and can be encountered in many cases, e.g., where you haven't placed the certificates in the right wallet, or where you haven't correctly specified your wallet location in ssl.conf. But, as I said, I assume you did everything right in this context and are still encountering the issue.

So, in that case, you should check your admin.conf. There may be a misalignment between the server name written in admin.conf and the server name (the CN in the DN) you used while executing the orapki wallet add command.

For instance, if a real server name (rather than localhost) is written in admin.conf, you should use that same server name while importing your certificate into your wallet with orapki wallet add. If you use localhost instead of that server name, the import may still succeed, but OHS won't start; it will fail and report "nzosSetCredential returned 28791 OHS:2171 NZ Library Error: Unknown error."

The best practice (and the solution) is to have the real server name in admin.conf (as the ServerName value) and to use that same server name in the orapki wallet add -dn command (example command: orapki wallet add -wallet . -dn 'CN=exampleservername,OU=FOR TESTING ONLY,O=FOR TESTING ONLY' -keysize 2048 -self_signed -validity 3650 -auto_login_only).
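A quick way to check for this mismatch is to compare the two values directly. A sketch, assuming a typical 12c collocated OHS layout (the admin.conf path and instance name ohs1 are assumptions; adjust them to your DOMAIN_HOME and instance):

```shell
# 1) The ServerName that the OHS admin config expects
#    (path below is an assumption; adjust to your DOMAIN_HOME/instance name)
grep -i 'ServerName' \
  $DOMAIN_HOME/config/fmwconfig/components/OHS/ohs1/admin.conf

# 2) The DN (CN) of the certificate actually present in the wallet
#    (run from your wallet directory)
orapki wallet display -wallet .
```

If the CN shown by orapki does not match the ServerName from admin.conf, recreate/re-add the certificate with the matching server name.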

Friday, April 18, 2025

Why KVM? (Oracle KVM vs. Other Virtualization Solutions)

Why KVM? 

Ever wondered how Oracle KVM stacks up against the big players like VMware ESXi and Microsoft Hyper-V? Well, we explained it in our tech event last week, and here I'm sharing some of the things we have gone through in this context. 

We will go through the following table comparing Oracle KVM, VMware ESXi, and Microsoft Hyper-V.

The table looks at the issue across some of the most important / key dimensions.

Dimension        Oracle KVM             VMware ESXi        Microsoft Hyper-V
Open Source      Yes                    No                 No
Licensing Cost   Low                    High               Medium
Performance      High                   High               Medium
Management Tool  OLVM                   vCenter            SCVMM
Security         SELinux & sVirt        NSX                Shielded VMs
Flexibility      Integrated with Linux  Closed ecosystem   Windows-focused
Well, Let's dive in!

  • Open Source: KVM shines here. It's a big YES for open source, unlike VMware ESXi and Microsoft Hyper-V. For us open-source enthusiasts, this is a major plus!

  • Licensing Cost: Now, this is where things get interesting. KVM is generally "Low" on the cost scale, which is fantastic. VMware ESXi? Well, the table says "High," and from what I've seen, that rings true. Microsoft Hyper-V lands in the "Medium" zone. Cost can be a big factor, especially for smaller setups or those just starting out.

  • Performance: When it comes to power, both Oracle KVM and VMware ESXi are tagged as "High." That's good news for those demanding top-notch performance. Microsoft Hyper-V is listed as "Medium." Performance can vary depending on your specific workloads, but it's good to see KVM holding its own.

  • Management Tool: Here's where the ecosystems differ. KVM mostly relies on OLVM. VMware leans on vCenter, which is a robust but often pricey solution. Microsoft has SCVMM. The choice of management tool can really impact your day-to-day operations.

  • Security: Security is paramount, right? KVM brings SELinux & sVirt to the table, leveraging Linux's security features. VMware uses NSX, focusing on network security. Microsoft offers Shielded VMs. Each has its own approach to keeping your virtual machines safe and sound.

  • Flexibility: This is where KVM's open-source nature really shines again. It's "Integrated with Linux," which gives it a lot of flexibility. VMware is described as a "Closed ecosystem," which can sometimes limit your options. Microsoft Hyper-V is "Windows-focused," so its strengths lie heavily within the Windows environment.

In addition, if you are running Oracle Database, KVM is clearly ahead in my opinion, in many aspects (support, cost, compatibility, etc.).

Takeaway:

Looking at the table above, it's clear that Oracle KVM is a strong contender, especially if you're looking to keep costs low. The "High" performance rating is also a big plus. However, the best choice still really depends on your specific needs, existing infrastructure, and comfort level with different ecosystems.

What are your thoughts on this comparison? Have you had experience with any of these hypervisors? Let me know in the comments below!

Friday, April 11, 2025

Oracle Linux KVM Steps Up

These days, the virtualization world is being shaken up by the effects of Broadcom's acquisition of VMware. In particular, the recently announced 72-core minimum licensing requirement has become a nightmare for many small and mid-sized VMware customers. This new policy, which will deeply shake the budgets of small and medium-sized enterprises (SMEs), has inevitably increased interest in alternative virtualization solutions. At this point, Oracle's powerful and cost-effective virtualization platform, Oracle Linux KVM (Kernel-based Virtual Machine), stands out as a savior.

Yes, it must be admitted that the most well-known advantage of Oracle Linux KVM is its cost effectiveness. With VMware's new licensing model, having to pay the license fee for cores you don't use is not a sustainable situation for many companies. Oracle Linux KVM, on the other hand, offers zero license cost and completely solves these concerns. The support that comes with the Oracle Linux Premier Support subscription is a bonus.

However, what Oracle Linux KVM offers is not just about cost. Let's take a closer look at the other attractive features of this powerful virtualization solution:

High Performance: Thanks to its integration into the Linux kernel, KVM offers performance close to that of bare hardware. This advantage becomes even more evident on today's high-core-count servers.

Scalability: KVM allows both vertical and horizontal scaling. You can easily add resources for your increasing workloads or run your virtual machines on different hardware.

Advanced Features: Critical enterprise features such as live migration, snapshots, cloning, etc.

Hardware Support: Thanks to Linux's wide hardware compatibility, KVM also supports a wide range of server hardware.

Security: Linux's security-focused structure also creates a strong foundation for virtual machines running on KVM.

Oracle Integration: Especially for Oracle applications and databases, Oracle Linux KVM offers unique advantages. Thanks to the Hard Partitioning feature, you can fix virtual machines to specific physical cores and license them only for the cores used. This means a serious cost advantage, especially for Oracle.

Compatibility with Oracle Cloud Infrastructure (OCI): Oracle Linux KVM is the virtualization technology underlying OCI. This way, you can easily move virtual machines from your on-premises environments to OCI or create hybrid cloud scenarios.

Uninterrupted Patching with Ksplice: Thanks to Oracle's unique Ksplice technology, you can apply kernel and user space security updates without having to reboot your virtual machines. This is a critical advantage in terms of business continuity.

Oracle Linux Virtualization Manager (OLVM): It is a user-friendly web-based management interface that allows you to easily manage your KVM environment. You can perform many operations such as creating, monitoring, and managing virtual machines via OLVM.

This new licensing policy of VMware actually pushes its customers to look for different and more flexible solutions. Beyond the cost advantage it offers, Oracle Linux KVM is a serious alternative to VMware with its performance, scalability and enterprise features. Especially if you are in the Oracle ecosystem, there is no reason not to consider Oracle Linux KVM.

Remember, technology is constantly evolving and changing. Today's "standards" may be tomorrow's "old" ones. Now is the time to reconsider your virtualization strategy and find the solution that best suits your needs. Give Oracle Linux KVM a try, you will not regret it.

See you in my next article, stay with technology!

Wednesday, March 5, 2025

EBS 12.2 -- Password Change / Special Characters / FNDCPASS and all that.

If you change the APPS password to something with special characters in it in the wrong way, you may encounter "ORA-01017: invalid username/password; logon denied" errors in almost anything that touches the EBS DB.

Here is an example thread in my forum : http://erman-arslan-s-oracle-forum.124.s1.nabble.com/db-connection-error-after-APPS-password-change-td12919.html

Note that using the FNDCPASS utility to change the passwords of database users such as APPS, APPLSYS, GL, etc. to include special characters is NOT supported. I haven't tried it, but I think the same goes for AFPASSWD (the enhanced version of FNDCPASS).

However, you may use FNDCPASS to change an application user's password (such as SYSADMIN's) to a value with special characters. But if you want to do that, you may need to use quotation marks.

Here is an example:

FNDCPASS apps/apps 0 Y system/manager USER SYSADMIN '$welcome1'

Note that for some special characters you don't need to use quotation marks. This is by design.

Check the MOS note given below for supported special characters in application user passwords and the requirement of quotation marks in case of using them with FNDCPASS.

R12: How to change passwords to include special characters using FNDCPASS? (Doc ID 1336479.1)

Friday, February 28, 2025

GTech, Oracle Event - Oracle Database 23AI and OEM 24AI

I have spoken at many events (I have lost count), but this one was especially fun.

I explained the unification of AI (vectors, and DB-integrated generative AI with RAG), graph, and native JSON in the context of the converged Oracle Database. I presented key new features of Oracle Database 23ai and did a demo of a RAG solution that we developed in-house using Oracle Database 23ai, OCI's integrated generative AI models, and Cohere's generative AI models.

We also talked about the AI-powered features of new Oracle EM. It was fun and it was beneficial to the community. We eagerly await the 23AI upgrades in the near future.

A formal intro for my speech was as follows: 

GTech Senior System and Database Management Director, Oracle ACE Pro♠️ Erman Arslan explained how our infrastructure and system services enhance database performance, security, and resilience, while showcasing the next-generation speed and efficiency solutions offered by Oracle Database 23AI with AI.