Monday, April 29, 2024

Demystifying the Oracle EBS R12.2 Prepare Phase: Code Objects and Stub Objects

This blog post tackles common questions regarding code objects and the prepare phase in Oracle EBS R12.2. Note that this is derived from my forum and is based on a real question. It's especially helpful for those who are new to EBS 12.2 and want to understand how code objects are handled during patching.

The Questions:

  1. Are all code objects copied from the run edition to the patch edition during the prepare phase?
  2. If not copied, how do they exist after the cutover?
  3. Are the objects copied during the prepare phase called stub objects?

Understanding Editioning and EBR:

We clarify that code objects are editionable. This means they are managed through Oracle's Edition Based Redefinition (EBR) feature, rather than simply being copied. EBR allows for maintaining different versions of code objects within the same database.

Code Objects in the Patch Edition:

During the prepare phase, code objects aren't physically copied. Instead, they are editioned in the patch edition. This essentially creates new versions of the code objects specific to the patch.

Cutover and Using New Code Objects:

The cutover phase activates the patch edition. As a result, the newly editioned code objects become active, and you automatically start using them.

Stub Objects Explained:

These aren't actual code objects but rather placeholders. They act as pointers referencing the original code objects in the run edition.

Why Stub Objects?

Stub objects are lightweight, minimizing the initial size of the patch edition. This efficiency is crucial for online patching, where downtime is minimized.

A Clearer Picture:

Imagine the run edition as your main codebase. The prepare phase creates a new edition (patch edition) with pointers (stub objects) to the run edition's code objects. These pointers are lightweight, keeping the patch edition initially small. Finally, during cutover, the patch edition becomes active, and you start using the editioned code objects.
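To make the picture above concrete, the editions and the objects they actually contain can be inspected from the database itself. A hedged sketch (the package name is illustrative; run with DBA privileges):

```sql
-- List the editions known to the database (run edition, patch edition, older editions).
SELECT edition_name, parent_edition_name, usable
  FROM dba_editions;

-- See in which edition(s) a given code object actually lives.
-- DBA_OBJECTS_AE shows objects across All Editions; the object name is illustrative.
SELECT object_name, object_type, edition_name, status
  FROM dba_objects_ae
 WHERE owner = 'APPS'
   AND object_name = 'FND_GLOBAL';
```

An object patched during the cycle gets an actual copy in the patch edition; unpatched objects remain inherited from the run edition via stubs.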

Friday, April 26, 2024

Oracle Database / Datawarehouse Security, High Availability & Disaster Recovery

Following is a quick article I prepared to provide a data security, high availability, and disaster recovery solution all at once for a mission-critical Oracle Data Warehouse environment. Below you will find Oracle Advanced Security, Oracle RAC, Golden Gate, Data Guard, and other related Oracle features and solutions, integrated together, put into context, and proposed as a whole for securing the data, making it highly available, and protecting both the data and the services from disasters. The features and solutions given here can be used in other contexts as well (for instance, Golden Gate can also be used for data integration), but here we are in the scope of data security, high availability, and DR, so I will not get into that.

Oracle Database / Data Warehouse - Advanced Security Option for Data Security:

Securing Your Oracle Data Warehouse: Why Oracle Advanced Security is the Right Choice

Introduction

Data warehouses hold an organization's most valuable asset: its data. This data can include sensitive customer information, financial records, and intellectual property. Ensuring the security of this data is paramount. Oracle Advanced Security (OAS) is the ideal solution for robust data security within your Oracle data warehouse, and it can be tailored to your data warehouse environment, providing the peace of mind that comes with knowing your data is safe.

Robust Data Security Measures

OAS offers a comprehensive suite of functionalities to address all aspects of data security:

  • Encryption: Transparent Data Encryption (TDE) encrypts data at rest within the database, rendering it unreadable even if an attacker gains access to storage media.
  • Access Controls: OAS enforces granular access controls, allowing you to define precisely who can access specific data and what actions they can perform.
  • Authentication: OAS supports various strong authentication mechanisms like Kerberos and PKI, ensuring only authorized users can access the data warehouse.
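As a hedged sketch of the TDE piece (keystore password, tablespace name, and file path are illustrative; a keystore must already be configured and open in your environment):

```sql
-- Create/activate a TDE master encryption key (keystore must be open).
ADMINISTER KEY MANAGEMENT SET KEY
  IDENTIFIED BY "MyKeystorePwd#1" WITH BACKUP;

-- Create a tablespace whose data files are encrypted at rest with AES-256.
CREATE TABLESPACE dwh_secure
  DATAFILE '/u01/oradata/DWH/dwh_secure01.dbf' SIZE 10G
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```

Segments created in this tablespace are encrypted transparently; applications read and write them unchanged.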

Benefits of Oracle Advanced Security

  • Seamless Integration: OAS integrates seamlessly with your existing Oracle database environment. There's no need to modify applications, minimizing disruption and implementation costs.
  • Performance Efficiency: OAS is designed for minimal performance impact. Data encryption and decryption happen transparently, ensuring smooth operation of your data warehouse.
  • Regulatory Compliance: OAS helps meet compliance requirements for data privacy regulations like PCI DSS and HIPAA.
  • Data Security Lifecycle Coverage: OAS protects data throughout its lifecycle, from encryption at rest to securing backups and exports.

Addressing Specific Data Warehouse Security Concerns

Data warehouses often house highly sensitive data. OAS addresses these concerns:

  • Insider Threats: OAS enforces access controls, preventing unauthorized access attempts, even from privileged users.
  • Data Breaches: TDE renders data useless even if stolen, minimizing the impact of a breach.
  • Accidental Data Exposure: Data redaction, another feature of OAS, allows you to mask sensitive data retrieved by queries, further protecting sensitive information.
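The data redaction capability mentioned above is driven by the DBMS_REDACT package. A minimal sketch, assuming a hypothetical DWH.CUSTOMERS table with a CARD_NO column:

```sql
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'DWH',
    object_name   => 'CUSTOMERS',
    policy_name   => 'redact_card_no',
    column_name   => 'CARD_NO',
    function_type => DBMS_REDACT.FULL,  -- replace the whole value on display
    expression    => '1=1');            -- apply to every query
END;
/
```

The data stays intact on disk; only query results are masked, which is what protects against accidental exposure.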

In addition to the Oracle Advanced Security components, Audit Vault, Database Firewall, and Database Vault can be implemented to provide the Maximum Security Architecture, ensuring the security of the data end-to-end, in transit and at rest.

Here is the short take / Conclusion:

Oracle Advanced Security provides a robust and efficient solution for securing your Oracle data warehouse. It offers comprehensive data encryption, access controls, and authentication mechanisms, all seamlessly integrated with your existing environment. By choosing OAS, you ensure a high level of data security for your most valuable asset. Implementing the Maximum Security Architecture (MSA) of Oracle takes data security to the maximum level. By implementing the MSA, your gains in the context of data security will include: auditability, regulatory compliance, operational efficiency, risk minimization, correct authorized access, role-based authorization, the minimum-required-privilege principle, separation of duties, data security, strengthened authentication, risk-based authorization control, compliance with security standards, centralized reporting, centralized key management, centralized security management, and anomaly detection in data access.

Data Warehouse Continuity- Oracle Database – High Availability Features:

Ensuring Uninterrupted Data Access: Oracle Database High Availability for Your Data Warehouse

Introduction

In today's data-driven world, uninterrupted access to your data warehouse is critical for informed decision-making. Downtime can lead to missed opportunities, lost productivity, and eroded user confidence. This paper explores how Oracle Database's High Availability (HA) features, specifically Oracle Real Application Clusters (RAC), ensure continuous operation of your Oracle data warehouse.

Our Expertise in High Availability Solutions (just a little advertisement :)

My team and I possess extensive experience in implementing and managing high availability solutions for Oracle databases. We understand the critical role of data warehouse uptime and leverage the power of Oracle HA to create a robust and resilient environment. We can implement Oracle Database in a highly available manner by using the Oracle features in this context. These features, combined with our implementation experience, can ensure your data warehouse remains accessible 24/7, empowering informed decisions and driving business success.

What is Oracle Database High Availability?

Oracle offers a comprehensive set of features for achieving high availability in your data warehouse. RAC, the cornerstone of this approach, allows you to run your database on multiple servers (nodes) concurrently, in an active-active manner.

This provides several key benefits:

  • Failover: If a node fails, the remaining nodes automatically take over processing, minimizing downtime and ensuring data remains accessible.
  • Scalability: RAC facilitates horizontal scaling by adding additional nodes, enabling your data warehouse to handle increased data volumes and user demands.
  • Planned Maintenance: You can perform maintenance on individual nodes without impacting overall data warehouse availability.
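Operationally, these RAC behaviors are driven through srvctl. A hedged sketch (database and instance names are illustrative):

```shell
# Where is the data warehouse database running, and on which nodes?
srvctl status database -d DWH

# Rolling planned maintenance: stop one instance while the others keep serving users.
srvctl stop instance -d DWH -i DWH1
# ... patch / maintain node 1 here ...
srvctl start instance -d DWH -i DWH1
```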

Addressing Data Warehouse Continuity Concerns

Data warehouses often face unique challenges regarding continuity:

  • Large Data Volumes: RAC's parallel processing capabilities ensure efficient data loading and retrieval, even with massive datasets.
  • Batch Processing Windows: Planned maintenance can be scheduled outside of critical batch processing windows, minimizing disruption.
  • Disaster Recovery: RAC integrates with Oracle Data Guard for disaster recovery, providing a geographically separate replica for business continuity in case of a major outage.

Beyond RAC: Additional High Availability Features

Oracle's HA suite extends beyond RAC, offering additional functionalities:

  • Oracle Data Guard: Creates synchronized standby databases for disaster recovery and read-only workloads.
  • Automatic Storage Management (ASM): Simplifies storage management and facilitates automated failover in case of disk failures.
  • Oracle Restart: Monitors and automatically restarts critical Oracle components in case of failures.

Here is the short take / Summary:

By leveraging Oracle Database High Availability features, we can create a highly available data warehouse environment. Our expertise ensures a seamless implementation, maximizing uptime and minimizing the impact of potential outages. We are certified Oracle experts with extensive experience in implementing and customizing Oracle Database for various industries and business requirements.

Oracle Data Warehouse - Disaster Recovery Implementation

Disaster Recovery for Your Oracle Data Warehouse: Minimizing Downtime, Maximizing Peace of Mind

Introduction

Natural disasters, technical failures, or cyberattacks can cripple your business by disrupting access to your data warehouse. Here I will briefly summarize how Oracle Database's disaster recovery (DR) features equip you to recover your data warehouse swiftly in the face of such events. I will try to give a quick explanation of how Oracle's DR features, combined with our (my team and I) proven expertise, can minimize downtime and ensure your data warehouse remains operational even in the face of disaster.

Our Proven Track Record in Disaster Recovery

My team and I possess a proven track record of implementing and managing robust DR solutions for Oracle data warehouses. We understand the criticality of rapid recovery and leverage Oracle's DR features to create a comprehensive disaster preparedness plan.

Oracle Database Disaster Recovery: Built-in Resilience

Oracle offers a comprehensive suite of DR functionalities designed to get your data warehouse back online quickly:

  • Data Guard: This core technology creates a synchronized copy (standby database) of your production data warehouse at a geographically separate location. This ensures data availability even if the primary site becomes unavailable.
  • Active Data Guard: This advanced option allows read-only access to the standby database, offloading some workload from the primary and enabling faster failover in case of disaster.
  • Golden Gate: For complex data warehouse environments, Golden Gate facilitates heterogeneous replication, allowing replication between dissimilar database systems if needed for your DR strategy.
  • Recovery Manager (RMAN): This powerful tool simplifies backups, enabling efficient restoration of your data warehouse to a specific point in time.
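As a hedged sketch of how a broker-managed Data Guard role change is driven (configuration and database names are illustrative; dgmgrl will prompt for the password):

```shell
# Inspect the Data Guard broker configuration and its health.
dgmgrl sys@DWH_PRIM "show configuration verbose"

# Check that the standby is ready for a role change.
dgmgrl sys@DWH_PRIM "validate database 'DWH_STBY'"

# Planned, no-data-loss role change to the standby.
dgmgrl sys@DWH_PRIM "switchover to 'DWH_STBY'"
```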

Addressing Data Warehouse DR Challenges

Data warehouses present unique considerations for disaster recovery:

  • Large Datasets: Oracle Data Guard's efficient replication methods minimize bandwidth consumption and ensure timely synchronization of massive data volumes.
  • Data Consistency: Our expertise ensures configuration of Data Guard for optimal consistency based on your specific recovery time objective (RTO) and recovery point objective (RPO) requirements.
  • Testing and Failover: Regular DR testing done in this context ensures seamless failover to the standby site and rapid data warehouse recovery.

Beyond Recovery: Building a Comprehensive DR Plan

Disaster recovery goes beyond technology. Oracle consultancy services (such as those my team and I provide) encompass:

  • DR Strategy Development: We collaborate with you to define your RTO, RPO, and DR site selection based on your business needs.
  • DR Plan Documentation: We create a comprehensive DR plan outlining roles, responsibilities, and failover procedures for a smooth recovery process.
  • Ongoing Support: We offer ongoing support to ensure your DR plan remains current and your team is prepared for any eventuality.

Here is the Short Take / Conclusion

By leveraging Oracle Database's disaster recovery features and our extensive experience, we can create a robust DR plan for your data warehouse. This ensures minimal downtime, rapid recovery, and protects your business from the impact of unforeseen disasters.

Wednesday, April 24, 2024

EBS 12.2 -- oracle.apps.fnd.cp.gsm.GSMSvcComponentContainer starts from wrong filesystem / patch filesystem

Question: How much time do you spend/lose?
Answer: Well, how much time do I gain? :)

I mentioned in my previous blog post that I will share some of the interesting cases here. Erman Arslan's Oracle Blog / Forum has 350 subscribers asking questions regularly, and here I'm sharing the interesting ones.

You asked:

We have an Oracle 12.2.6 EBS system.

Run FS is fs1

As soon as the system is started, all the services are running on fs1, except oracle.apps.fnd.cp.gsm.GSMSvcComponentContainer.

GSM starts on fs2. Concurrent Managers are using binaries from fs2.

If we run an empty ADOP patching cycle, after cutover all is OK -> they all start from fs2.

But if we run it one more time, all the services start from fs1 as expected, but those mentioned ones start from fs2!

Running adconfig does not help, and we even checked that in the context file of fs1 none of the parameters reference fs2.

None of the env parameters point to fs2.

Where does it take that fs2 value from?

What is this? Any ideas?

Moreover,

if I do env | grep fs2 -> the patch_base is fs2.

So that means the concurrent managers are starting from the patch base..

Any hints would be appreciated :)

ps -ef |grep fs2

/testapp/oracle/TEST/fs2/EBSapps/comn/util/jdk64/bin/java -DCLIENT_PROCESSID=24445080 -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttp.nonProxyHosts= -Djava.awt.headless=true -Xmx384m -Doracle.apps.fnd.common.Pool.leak.mode=stderr:off -verbose:gc -Ddbcfile=/testapp/oracle/TEST/fs2/inst/apps/TEST_ebstestapps/appl/fnd/12.0.0/secure/TEST.dbc -Dcpid=9758 -Dconc_queue_id=1140 -Dqueue_appl_id=0 -Dlogfile=/testout/oracle/TEST/conc/log/FNDCPGSC9758.txt -DLONG_RUNNING_JVM=true -DEBS_HOSTNAME=ebstestapps -DOVERRIDE_DBC=true -DFND_JDBC_BUFFER_MIN=1 -DFND_JDBC_BUFFER_MAX=2 oracle.apps.fnd.cp.gsm.GSMSvcComponentContainer

So where in EBS is the reference to use this /testapp/oracle/TEST/fs2/EBSapps/comn/util/jdk64/bin/java, and not the java from fs1?


/testapp/oracle/test/fs2/EBSapps/comn/util/jdk64/jre/bin]> ./java -version
java version "1.7.0"

Java(TM) SE Runtime Environment (build pap6470sr9fp10-20150708_01(SR9 FP10))

IBM J9 VM (build 2.6, JRE 1.7.0 AIX ppc64-64 Compressed References 20150701_255667 (JIT enabled, AOT enabled)

J9VM - R26_Java726_SR9_20150701_0050_B255667

JIT - tr.r11_20150626_95120.01

GC - R26_Java726_SR9_20150701_0050_B255667_CMPRSS

J9CL - 20150701_255667)

JCL - 20150628_01 based on Oracle jdk7u85-b15

/testapp/oracle/test/fs2/EBSapps/comn/util/jdk64/jre/bin]> cd /testapp/oracle/test/fs1/EBSapps/comn/util/jdk64/jre/bin

/testapp/oracle/test/fs1/EBSapps/comn/util/jdk64/jre/bin]> ./java -version

java version "1.7.0"

Java(TM) SE Runtime Environment (build pap6470sr9fp10-20150708_01(SR9 FP10))

IBM J9 VM (build 2.6, JRE 1.7.0 AIX ppc64-64 Compressed References 20150701_255667 (JIT enabled, AOT enabled)

J9VM - R26_Java726_SR9_20150701_0050_B255667

JIT - tr.r11_20150626_95120.01

GC - R26_Java726_SR9_20150701_0050_B255667_CMPRSS

J9CL - 20150701_255667)

JCL - 20150628_01 based on Oracle jdk7u85-b15

And now I know why:

autoconfig does not populate the fnd_env_context table, so the majority of parameters there refer to fs2.

So that's why we have this problem.

I tried to update the context file version to a higher version. The same issue..

So, I guess to fix our issue we need to clean fnd_env_context somehow?

How can we recreate the autoconfig files and reload fnd_env_context?

Any hints ? :)

Thanks!

We said:

INFO: The FND_ENV_CONTEXT table stores information about the environment name and value for each of the concurrent processes.

Action plan:

1) Run the "Purge Concurrent Requests and/or Manager Data" concurrent program with the necessary parameters to clear those records from fnd_env_context.
2) Stop the concurrent processing.
3) Run the Concurrent Manager Recovery Troubleshooting Wizard with the internal manager down, as detailed in the following document:
Concurrent Processing - Concurrent Manager Recovery Troubleshooting Wizard (Note 134007.1)
4) Start the concurrent processing.

Reference:
MAIN REF: Workflow Mailer Won't Start In Cloned Environment (Doc ID 1953629.1) -- this is for 12.1, but it seems applicable to 12.2 as well.
Concurrent Processing - Purge Concurrent Request and/or Manager Data Program (FNDCPPUR) (Doc ID 104282.1)

Be careful while running this program. Don't break any business rules (e.g., if there is a need to keep the concurrent request log and out files for 30 days, then you will need to get the approval of the business side for this...)

You can backup the following tables before running the conc program:

FND_CONCURRENT_REQUESTS
This table contains a complete history of all concurrent requests. It should ideally be kept at around 4-5k records and should be regularly purged as part of normal 'housekeeping' by running the standard FNDCPPUR program. This is part of basic system administration, and it is recommended that, for the average instance, it be run every 30 days or so. Allowing this table (and others) to increase in size can affect performance.

FND_RUN_REQUESTS
When a user submits a report set, this table stores information about the reports in the report set and the parameter values for each report.

FND_CONC_REQUEST_ARGUMENTS
This table records arguments passed by the concurrent manager to each program it starts running.

FND_DUAL
This table records when requests do not update database tables.

FND_CONCURRENT_PROCESSES
This table records information about Oracle Applications and operating system processes.

FND_CONC_STAT_LIST
This table collects runtime performance statistics for concurrent requests.

FND_CONC_STAT_SUMMARY
This table contains the concurrent program performance statistics generated by the purge.

FND_ENV_CONTEXT
This table stores information about the environment name and value for each of the concurrent processes.
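Before purging, a quick diagnostic can show which cached environment values still point at the patch file system. A hedged sketch (columns per the seeded FND_ENV_CONTEXT definition; verify in your instance, and adjust the fs2 pattern to your paths):

```sql
-- Which stored environment variables of concurrent processes reference fs2?
SELECT concurrent_process_id, variable_name, value
  FROM applsys.fnd_env_context
 WHERE value LIKE '%fs2%';
```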

You said:

Last time I managed to fix it - so the concurrent managers started on the correct file system - by updating s_contextserial to a value higher than the one in the database and re-running autoconfig.
select serial_number from FND_OAM_CONTEXT_FILES where status in ('S','F') and name not in
('METADATA','TEMPLATE') and ctx_type='A';

This time I am again having this issue after cutover.
This time, updating the context serial to a value higher than the one in the db is not helping.
autoconfig completes without any errors..
But the serial_number is 65, although in the context file I set it to 70.
It's not changing after autoconfig.
Any hints? I am thinking of cleaning up fnd_oam_context_files?

We said:

I think there are some things that affect some environment variables left there in the FND tables.. So, a wrong value in the FND tables.
Yes.. FND_ENV_CONTEXT may be the cause.

After taking your backup, you can try truncating FND_OAM_CONTEXT_FILES and running autoconfig afterwards (first on the db tier, then on the apps tier).. But what I recommend is the following;

INFO: The FND_ENV_CONTEXT table stores information about the environment name and value for each of the concurrent processes.

Action plan:
1) Run the "Purge Concurrent Requests and/or Manager Data" concurrent program with the necessary parameters to clear those records from fnd_env_context.
2) Stop the concurrent processing.
3) Run the Concurrent Manager Recovery Troubleshooting Wizard with the internal manager down, as detailed in the following document:
Concurrent Processing - Concurrent Manager Recovery Troubleshooting Wizard (Note 134007.1)
4) Start the concurrent processing.

Reference:
MAIN REF: Workflow Mailer Won't Start In Cloned Environment (Doc ID 1953629.1) -- this is for 12.1, but it seems applicable to 12.2 as well.
Concurrent Processing - Purge Concurrent Request and/or Manager Data Program (FNDCPPUR) (Doc ID 104282.1)

Solution / You Said:

Update and FYI:
We fixed the issue using the following action plan;

Shut down the apps tier
Truncate applsys.fnd_env_context
Run autoconfig on the db tier and then the apps tier
Restart
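In command form, the fix above looks roughly like this (a hedged sketch; passwords are elided, and paths follow the standard 12.2 layout):

```shell
# 1) Stop the application tier services (run edition environment sourced).
adstpall.sh apps/<appspwd>

# 2) As the APPS/APPLSYS database user, clear the stale environment snapshot:
#      TRUNCATE TABLE applsys.fnd_env_context;

# 3) Re-run autoconfig, first on the DB tier, then on the apps tier.
#    DB tier:   $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh
#    Apps tier: $ADMIN_SCRIPTS_HOME/adautocfg.sh

# 4) Restart the application tier.
adstrtal.sh apps/<appspwd>
```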

Actually, it comes up now and then; the same issue sometimes appears after a lot of patching activity - note that it is not every time..
So, it seems like some bug in EBS confuses this GSM thing, and it just starts on the wrong filesystem.
Take care!

EBS 12.2 -- due to VIP values in the apps_jdbc_connect_descriptor - failing fs_clone

Educability.. We all need it, and here in this blog and in my forum, I think we are taking steps in this regard. Educability is defined by 3 basic things:

1) Learning from experience (PAC - Probably Approximately Correct learning)
2) Chaining together what we've learned in different contexts.
3) Taking from someone else their experience.

It is actually the combination of those 3.. We humans have this unique capability. So why not use it in every way that we can? :)

So you encounter some problems, you ask them, I try to shed some light on them and offer some solutions for them, you try the suggested actions, or visit the guided documentation, you send the feedback and update me.. In this way, we all learn, we all educate ourselves.

Anyways, enough great words of wisdom for the intro here :) I almost hear you say give us the short take. So I come to the point.

Erman Arslan's Oracle Blog / Forum has 350 subscribers asking questions regularly, and here I'm sharing the interesting ones.

You asked:

After cloning an EBS 12.2 instance, we have seen that we had SCAN-related values in the apps_jdbc_connect_descriptor in the source, but in the cloned environment we had VIP values in the apps_jdbc_connect_descriptor. Apparently, fs_clone is using the VIP values and failing in the EBS_SYSTEM validation. We can connect from the apps tier using sqlplus EBS_SYSTEM/<pwd> without any issue.

Any thoughts on what is going on?

We have tried to set the <DB>.dbc and adgendbc.sh, as well as the context file, to the desired values, but as soon as we run autoconfig, the related values in all the related places in the context file get reset to the values with the VIP.

So where is autoconfig picking these up from?

We said:

Interesting; actually, it should be the other way around. (See the following blog post, where we had a different behavior -> https://ermanarslan.blogspot.com/2021/01/ebs-122-configuring-application.html) -- reference

You may check the relevant template file and see if there is some causal connection there (your AD & autoconfig version may also be the cause).. Check the template and try to predict where autoconfig takes that value from.. (db parameters maybe.. Local and remote listener parameters, I mean..) -- informative

*s_jdbc_connect_descriptor_generation is also important.

If we set it, or leave it as is (I mean, if it is set to TRUE), then the apps tier autoconfig will overwrite the jdbc url with the SCAN-based & load-balanced one. (But you are saying the exact opposite.. still, check that value.) -- solution offer 1

Autoconfig may also take that value from the old context files stored in the FND_OAM_CONTEXT_FILES table.. Check that as well..
In the worst case, you may clean up the network topology and re-populate it with autoconfig runs (db + apps tier) -- solution offer 2
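The old context files mentioned above can be inspected with a query along these lines (a hedged sketch; columns per the standard FND_OAM_CONTEXT_FILES definition):

```sql
-- Stored context file copies that autoconfig/OAM keep in the database:
SELECT name, path, serial_number, last_synchronized
  FROM fnd_oam_context_files
 WHERE status IN ('S','F')
   AND name NOT IN ('METADATA','TEMPLATE')
   AND ctx_type = 'A';
```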

Solution / You said:

After setting s_jdbc_connect_descriptor_generation to TRUE, it is working as desired.
Thanks for the reply.

Tuesday, April 23, 2024

RDBMS -- a Case Study: Suggestions for Performance Improvements, quick actions, utilizing the features for performance and the approach

Following is based on a real life performance problem, which was encountered on a very critical Oracle Database 19C Production System.

While the headings given below summarize how these performance problems should be approached from scratch, the items under the headings provide case-specific details: determining the current status of the environment, the motivations in determining the direction of the analysis, quick actions, and suggestions for the future (Oracle-specific).. Of course, the full details are included in the analysis report. With this approach, we solve problems while ensuring customer satisfaction..

General Findings:

High parse

Lots of I/O -- due to several reporting activities

concurrency waits

Slow SQLs due to visiting very big tables.

High (almost at limits) OS load and CPU usage

Many active sessions and a huge number of queries (especially OLTP-type)

Remote database queries and related network waits

Findings:

High number of sessions

High CPU usage

High Hard Parse

High Parse (Soft)

High number of SQL Executions

Low number but long CPU Bound SQL Execution

SQLs with high buffer gets.. (there are also those with high execution counts and those with low execution counts)

Low Parse CPU to Parse Elapsed ratio

Medium-High DBlink wait

Medium-High Indexed Reading Wait

Fast I/O (ALL Flash I/O Subsystem)

High I/O count

Comments:

SQL version counts should be reduced. -> to relieve mutex contention

Hard parses should be reduced. (Use bind variables, avoid dynamic SQL, and prefer PL/SQL, which provides automatic cursor sharing.)

Soft parses should be reduced. (For soft parses, techniques such as increasing the session cursor cache and keeping frequently used cursors open can significantly improve performance.)

SGA should be increased. (780GB was specified by the Advisor. We have already recommended this.)

SQLs that perform high GET should be tuned so that they work with less data.
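To illustrate the bind-variable point above, a minimal PL/SQL sketch (table, column, and procedure names are illustrative):

```sql
CREATE OR REPLACE PROCEDURE ship_order (p_order_id IN NUMBER) IS
BEGIN
  -- Hard-parse-heavy pattern: every distinct literal creates a new cursor.
  -- EXECUTE IMMEDIATE
  --   'UPDATE orders SET status = ''SHIPPED'' WHERE order_id = ' || p_order_id;

  -- Shareable pattern: one cursor, bound at execution time ("parse once, execute many").
  EXECUTE IMMEDIATE
    'UPDATE orders SET status = ''SHIPPED'' WHERE order_id = :1'
    USING p_order_id;
END;
/
```

A static UPDATE statement inside PL/SQL achieves the same cursor sharing without dynamic SQL at all.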

Motivation for further analysis:

Random 'cursor: pin S wait on X' waits

High CPU usage and load

Random ORA-00600 errors

Low RU level

Possible Bugs

Lack of best practice development

Actions Taken:

RU upgrade

LOAD analysis

Platform Migration

Performance Analysis & Workshops

Error Diagnostics & Action Plans

Bug Fix- manual actions

Hang analysis & Support

Resource Management implementation tests

Recommendations for Improvements:

• Active Dataguard – Transferring the load from the source system to the secondary system with SQL/Report offloading (general benefit, I/O + Memory, less work to be done..)

• Continuous read/write data transfer to a secondary environment and SQL offloading with a tool such as Golden Gate (alternative to the above item)

• Abandoning the use of DB links. (Not very related to resources, but still noticeable and sometimes keeps queries waiting.)

• Partitioning – Sessions process with less data by partitioning operations based on queries. (This will reduce concurrency waits and reduce CPU usage.)

• In-Memory – Faster querying (will reduce concurrency waits and reduce CPU usage.)

• ILM (Information Lifecycle Management) implementation -- Data Archiving – Reducing and archiving data in the source system. (Less data, faster queries, less I/O and CPU usage due to concurrency)

• Oracle RAC – (Horizontal growth and node-based load distribution + many advantages.. -- Application side to be RAC Aware.)

• Reducing the number of parses – Mainly the work of the software developers. Reducing the number of parses in the current system, minimizing hard parses, and reaching the point of "Parse Once, Execute Many". (More gain from CPU cycles.)

• Testing of 23C / multi-tenant is mandatory – in terms of performance.. (It has many innovations and bug fixes; it may be beneficial indirectly.)

• What can be done to reduce the concurrency in memory.. (e.g., if there are cache buffers chains waits while reading the same part of an index, changing the index type.. a reverse key index maybe.. scenario-based actions) - low gain and a challenging job.

• Hardware upgrade (last resort) – Compatible with licenses, but capacity on-demand. It will grow in place when necessary. Supported by virtualized structures that accept Oracle's capacity on-demand. Access to disks will still be physical.

• Cloud use can be considered (in all relevant contexts). It can be much more flexible and less costly.

• A fast cache in front of Oracle Database. (Fast reading and less work on DB. Maybe Redis. It needs to be supported with research.)

• Oracle Database Result Cache.. (It may seem contradictory depending on the situation, but it can be checked.)

EBS -- About Oracle Linux 8 upgrades

This is a hot topic for EBS customers, and I wanted to clarify something about this upgrade process..

That is, an in-place OS upgrade from Linux 7 to Linux 8 is not certified for Oracle EBS.

So, if we want to switch to Oracle Linux 8, Oracle Linux 8 must be installed on a new machine, and then all the requirements in the MOS Doc ID 1330701.1 document must be met. Afterwards, the EBS system needs to be moved to the new machine/OS using the cloning method.

This was already documented for Oracle Linux 7 upgrades (as shared below), but I also wanted to clarify this for the Oracle Linux 8 upgrade(s).. Basically, the same things apply.

Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.2) for Linux x86-64 (Doc ID 1330701.1)

RDBMS -- 19C's Premier Support & 23C consideration

For 19C customers, the latest RU is currently 19.22. However, 19C's Premier Support will end on April 30, 2024. Then this will happen -> "Premier Support (PS) ends April 30, 2024; two years of waived Extended Support (ES) fees will be in effect from May 1, 2024 until April 30, 2026."

Error correction continues until 2026. In any case, we should be on 19C and be ready for future 23C upgrades. (Considering 23C is the latest long-term support release and will be available for on-premises deployments starting 1H CY2024.)

Additional information for 19C:

Premier Support (PS) ends: April 30, 2024

Waived Extended Support (ES): May 1, 2024 - April 30, 2026 (No fees required during this period)

Paid Extended Support (ES): May 1, 2026 - April 30, 2027 (Fees apply)

Patching:

You will have access to error correction and patching until April 30, 2027, but there's a distinction:

With paid Extended Support: You will have access to all available patches.

Without paid Extended Support (after April 30, 2026): You will only have access to patches released before May 1, 2026. This includes Monthly Recommended Patches (MRPs) if you are using a compatible platform.

19c Release Update Revisions (RURs):

These are no longer relevant for Oracle Database 19c versions 19.17.0 and above. You should focus on Monthly Recommended Patches (MRPs) instead, but be aware of their platform limitations.


Release Schedule of Current Database Releases (Doc ID 742060.1)

Monday, April 22, 2024

Erman Arslan's Oracle Forum / FEB 10 - APR 22, 2024 - "Q & A Series"

Empower yourself with knowledge! Erman Arslan's Oracle Blog offers a vibrant forum where you can tap into a wealth of experience. Get expert guidance and connect with a supportive community. Click the banner proclaiming "Erman Arslan's Oracle Forum is available now!" and dive into the conversation to ask your burning questions -- or just use the direct link: http://erman-arslan-s-oracle-forum.124.s1.nabble.com

A testament to its vibrancy: over 2,000 questions have been posed, sparking nearly 10,000 insightful comments. Explore the latest discussions and see what valuable knowledge awaits! Supporting Oracle users around the world. Let's check what we have in the last few weeks..




Requests stuck in Pending Standby by VinodN

Databases to use Oracle forms and reports by kvmishra

E-business APEX HTTPS by big

Auto Generating -bash process by kvmishra

Auto Invoice Master / Auto Invoice import performance issues by raiq1

Package installation by kvmishra

Can you clone from the patch filesystem in EBS 12.2? by Srini

Oracle Bi by Mohammed Hamed

Moodle on Oracle Database by kvmishra

stty: standard input: Inappropriate ioctl for device by kvmishra

DBCS - Compute Instance Creation by satish

ERP cloud - Paas - oaf by satish

Question on VPN -OCI by satish

Oracle Base Database Service - OCI by satish

Provisiong db systems - paas oci by satish

concurrent program cannot create output by big

Confusion between application server 10.1.2 and managed servers by VinodN

Regarding audit in EBS 12.2 by VinodN

Oracle Base Database Service - Bastion - OCI by satish

xtts filename by Roshan

migration of oracle 12c from windows to linux by ayobi.shafiq

What should an EBS DBA Learn to be future ready by VinodN

EBS Upgrade 12.2.12 character set issue by raiq1

Encrypt Table Data by kvmishra

Dev instance slow by VinodN

xtts by Roshan

"Privileged" Users in EBS by VinodN

date:time of execution by big

convert log file parameter for clonning by big

Migration oracle database to OCI by satish

perl adcfgclone.pl appsTier fails by big

Oracle fusion - SAAS by satish

Migration to OCI - Oracle Database Backup Cloud Service - Object Storage by satish

Need advice on Configuration - R12.2 by satish

Bursting Emails R12.2 by satish