Tuesday, November 29, 2022

EBS -- Playing with Upload Limits Top-Down Through the Stack - Upload File Size Limit, LimitRequestBody & SecRequestBodyLimit

In one of my Advanced Customer Support site visits, I came across an interesting issue related to EBS file uploads.

I quickly checked the Upload File Size Limit profile, as it is the only EBS-side limit on the file size of an attachment. I increased it up to a logical maximum (a quick lesson: using lots of digits, like 9999999, seems to break the code:), and retried. It just didn't help. The file upload screen recognized the new value, but the HTTP Server failed again while we were trying to upload the file. (The file that I used in my test was a PDF, 350 MB in size.)
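As a side note, the current profile value can also be checked from the database. A minimal sketch -- UPLOAD_FILE_SIZE_LIMIT is the internal name I'd expect for this profile (the value is commonly interpreted in KB); verify both in your release before relying on them:

```sql
-- Check the current site-level value of the Upload File Size Limit profile.
-- UPLOAD_FILE_SIZE_LIMIT is an assumed internal name; confirm it in your environment.
SELECT fnd_profile.value('UPLOAD_FILE_SIZE_LIMIT') AS upload_limit
FROM   dual;
```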

It was clear that the issue was caused by some other limit configured in one (or two) of the technological components placed higher (or a little further toward the front:) in the stack. So, I decided to check the Apache/Oracle HTTP Server configuration.

Two configuration parameters were visited.

The first one was LimitRequestBody, and it was actually managed by Autoconfig (s_limitrequestbody).. I checked the context file and saw that it was already set to unlimited (LimitRequestBody ranges from 0 (meaning unlimited) to 2147483647 (2 GB)). It was a good try, but no luck.. (as you may guess)
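For reference, the directive ends up in the OHS httpd.conf, and in an EBS environment it should be managed through the context variable rather than edited by hand (Autoconfig would overwrite a manual change). A sketch of the relevant fragment:

```apache
# httpd.conf -- managed by Autoconfig via the s_limitrequestbody context variable.
# 0 means unlimited; the maximum explicit value is 2147483647 bytes (2 GB).
LimitRequestBody 0
```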

The second one was SecRequestBodyLimit (I checked to see if it was set), configured in the file named security2.conf placed in the following directory: $IAS_ORACLE_HOME/instances/EBS*/config/OHS/EBS*

security2.conf is the configuration file of ModSecurity, which inspects the requests sent to the web server -- Oracle HTTP Server in this case.

I added the following parameter to security2.conf:

SecRequestBodyLimit 1047527424

-- Note that this parameter sets the maximum request body size that ModSecurity will accept. It is set in bytes, so it is closely related to file uploads. (We also have the SecRequestBodyNoFilesLimit parameter to play with for tightening up security, but that is another story -- not actually related to this case.)
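As a sanity check on the value: 1047527424 bytes is exactly 999 MiB, comfortably above our 350 MB test file. A quick way to derive such a limit in bytes:

```shell
# Compute a request-body limit in bytes from a target size in MiB.
mib=999
echo $((mib * 1024 * 1024))
# prints 1047527424
```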

Anyways, after setting the parameter, I restarted Oracle HTTP Server and tried to upload that big file again. This time it succeeded. The issue was resolved!

Okay.. This was today's tip, and it was for the EBS community, because we don't find anything other than the Upload File Size Limit profile in the knowledge base. So I think this blog post may help someone suffering from the same kind of issue.

Just a little extra info (something nice to have) -> Low heap sizes configured for WebLogic Managed Servers (especially for oacore) may also prevent us from doing this kind of heavy work.. So increasing the heap size of the oacore Managed Server may be a good move -- not necessary, but just in case.

Saturday, November 26, 2022

RDBMS / EBS 12.2 -- too many records in OBJ$ & other dictionary objects

In one of my ACS (Advanced Customer Support) site visits, I dealt with a performance issue. The system was an EBS 12.2 running on multiple SPARC (& Solaris) nodes, and the performance problem was seen in database management and developer-related tasks done using GUI tools like TOAD.

Developers were complaining about the slow response of the Toad Code Road Map, and they named the problem the "Toad Code Road Map performance issue".. However, when I analyzed it, I saw that it was a general issue related to the internal dictionary objects, especially OBJ$.

The following SQL (which was executed by TOAD) was, by itself, a good representation of the problem;

SELECT DISTINCT OWNER SCHEMA1, REFERENCED_OWNER SCHEMA2 FROM SYS.DBA_DEPENDENCIES
UNION
SELECT DISTINCT OWNER SCHEMA1, R_OWNER SCHEMA2 FROM SYS.DBA_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'R'

So, I started my analysis by focusing on this SQL, and first tried to improve its performance without touching any data.

I tried the following;

  • Checked the execution plan and saw the full table scans there; checked all the execution plans (including the historical ones) and tried to find a better one. No luck.
  • Ran the SQL Tuning Advisor for that SQL. It recommended a SQL Profile, but again no luck, no gains.
  • Tried the parallel hint (and several other hints aligned with the context).. Degree 4, degree 8, degree 16.. It just didn't improve the performance.
  • Considered creating a Materialized View and making use of the Query Rewrite capability of Oracle, but again no luck, because the related tables are internal objects.. (we can't use query rewrite with the internal ones)
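The hinted attempts from the list above looked roughly like this (a sketch; only the parallel degree varied between runs):

```sql
-- One of the attempts: force parallelism on the dictionary-backed query.
-- Degrees of 4, 8 and 16 were tried; none of them improved the response time.
SELECT /*+ PARALLEL(8) */ DISTINCT owner schema1, referenced_owner schema2
FROM   sys.dba_dependencies
UNION
SELECT DISTINCT owner schema1, r_owner schema2
FROM   sys.dba_constraints
WHERE  constraint_type = 'R';
```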
After all of that, I continued my analysis with the data and concluded that the solution was there, in the data. I checked the data in OBJ$ and found lots of similar objects belonging to different database editions.. (more than 150,000 objects, multiplexed 13 times)
Remember, this was an EBS 12.2 environment; we had online patching, and that's why we had Edition-Based Redefinition (EBR) there.
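A quick way to see this multiplexing is to query the edition-aware object view; a sketch (the exact counts will of course differ per environment):

```sql
-- Objects that exist in more than one edition, via the edition-aware view.
SELECT object_name, object_type, COUNT(*) AS copies
FROM   dba_objects_ae
GROUP  BY object_name, object_type
HAVING COUNT(*) > 1
ORDER  BY copies DESC;
```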

It seemed that excessive patching had been done in this environment, and no full cleanup (& no actualize_all) had been executed since the beginning of those patching activities. That was the big deal, because the multiplexed objects were the result of it.

Well.. I'll finish my post with the action plan that I provided to the customer;

Run the following;

adop phase=prepare
adop phase=actualize_all
adop phase=finalize finalize_mode=full
adop phase=cutover
adop phase=cleanup cleanup_mode=full

And then gather the dictionary and fixed object stats. (because the number of rows in the related objects will change)

exec dbms_stats.create_stat_table('APPS','FIXED_STATS_BCK','USERS_TBS');
exec dbms_stats.export_fixed_objects_stats(stattab=>'FIXED_STATS_BCK',statown=>'APPS');
exec dbms_stats.create_stat_table('APPS','DICT_STATS_BCK','USERS_TBS');
exec dbms_stats.export_dictionary_stats(stattab=>'DICT_STATS_BCK',statown=>'APPS');
exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
exec DBMS_STATS.GATHER_DICTIONARY_STATS;

That's all for today. I hope you will find it useful.

Friday, November 18, 2022

Natural Selection & Converged Databases (Oracle finds good solutions :) -- JSON & In-Memory

This is an Oracle blog in the first place, but! I do intellectual gymnastics as much as I can, and I develop and change my content in parallel with the output of those gymnastics.. There will still be pure technical posts from time to time, but most of the time I will try to write technological articles by making analogies with science. I think it will be more fun that way.

Today's post will be about Natural Selection and Oracle Converged Database (especially about JSON and in-memory capabilities). These two topics seem so unrelated, don't they? 

Remember -> http://ermanarslan.blogspot.com/2022/08/same-thing-from-different-angles.html

The link that I shared above points to one of my previous blog posts, titled "Looking at the same thing from different angles under different interpretations concluding different results -> Entropy & Information | + Cloud & On-Prem | + A little Farming". I think I managed to make the connection there, and now I will try to do the same in this article.

I will start with communication: the emergence of the optimal communication method and how it found itself a solid place in our world. (Reference: Arik Kershenbaum & Sean Carroll - What Aliens Will Be Like.) I will talk about the fact that it is the requirements (and the environment) that make a communication method be selected, and then I will quickly change the context, bring the same notions and the same logic to the database world, and give my comments about why we should consider the Oracle Converged Database and its related features for meeting our modern needs. So part 1 will be about natural selection, part 2 will be the converged database, and part 3 will be making the analogy. It is a hard one, but we will see in the execution.. We will see if I succeed in conveying my thoughts.


Okay.. With that; let's go!

Communication is everywhere. All animals and plants communicate. Sight, hearing.. These are the products of sensing, and all communication evolves from sensing. We speak in words because we can hear them. So, again, sensing.. Sensing is what drives this process, and it is the cause of the final product, in a sense. In other words, we don't communicate or share our thoughts and ideas by means other than the known senses. (We don't use telepathy; it just didn't evolve.) These things are directly related to needs, actually.. What evolves and what doesn't evolve are all matters of need. For instance, we developed vision because we used light to find food.

Anyways, these sense-based communication channels also depend on the physical environment. Electric field sensing is rare, but it can be effective.. (we can transfer lots of information over long distances, but it is expensive!) -- see -> https://en.wikipedia.org/wiki/Electric_fish -- note that using the electric field for sensing is rare, but using it for communication is much rarer, and this fish does both.

Light can also be used for communication, and it is a very, very old ability, but! it is only good for environments where there is a transparent medium. If something is in the way, it is useless.. Imagine a wall between you and me: you use a flashlight and send me a Morse code. Of course I wouldn't be able to receive it. Light cannot really bend around things (because of the short wavelength of light), so why would you even generate that Morse code if I was not able to see anything? (This also applies to underwater life -- the depths of the oceans where there is no light; why would you bother generating it?) Note that the short wavelength also makes light very difficult to detect.

Now, sound.. Sound has a long enough wavelength that it can go around objects. It can also travel long distances, and lots of information can be put into it. So, we used it to speak, for good reasons. It is a very good way of communicating in this environment (the environment we live in -- the world). And it should also be a common way of communicating in similar worlds with a dense gas atmosphere.

We are going fast, but we should be reaching the Oracle side of things really fast, right?

Well.. So far so good... We have kind of made an entrance by implicitly touching on evolution.. So far, the bottom line is: "natural selection finds good solutions". (Remember the story of sound -- it being the optimal communication method.)

Okay, let's take a look at the database world. In the last few years, we have seen a dramatic change in database technologies as well.. NoSQL databases and special-purpose databases have started to be used in the industry.. Environments and databases suitable for Machine Learning, Blockchain, Graph, Spatial, Editions, Microservices, and IoT Streaming..

We are talking about software and data stores that enable us to query and write high volumes of non-relational data faster, powered by fast caches that decrease latency while operating with new and fast data interchange methods.

These good, beneficial, and leading innovations brought a lot of new technologies, processes, and software into our lives. For instance, we deploy software to operate faster on JSON data, and we cache data separately, but we still have big Oracle databases, Exadata platforms, Oracle RAC clusters, and so forth in the backend.

This means we usually end up with a big package of separate software components and data stores to address different data storage needs and different data manipulation and access requirements.. We also usually have integrations and data flows between these components.

At the end of the day, we do our work in a modernized way, but we also find ourselves managing all those components, paying for support and consultancy for each of them, and looking for skilled resources/administrators/developers who understand the related technologies and solutions and can add value to our enterprises. Not to mention that it is a complicated architecture that comes with its own non-technical challenges.

It is still worth mentioning that all of this increases management complexity. (That is already deducible from the previous paragraph.)

But! Remember, we usually have Oracle at the database tier, right? And some of us believe that Oracle finds good solutions. (like natural selection does)

So why don't we just check what we have in Oracle for addressing all those modernized needs?

At this point, I will call on the notion of the "Converged Database" for help.

Oracle is a converged database, and we have started to feel it more with release 19c.

Of course, it is a big topic, consisting of the following:

  • Pluggable Databases, which let microservices store their data in logically separate data containers,
  • API-driven architecture, which automatically generates REST APIs on top of SQL or stored procedures,
  • APEX, for quickly building low-code applications,
  • Sharding, for having a distributed data architecture,
  • Edition-Based Redefinition (EBR), Online Table Redefinition, and native JSON and XML, for simplifying the continuous delivery that enables application enhancements,
  • Automatic Machine Learning (AutoML), for enabling Machine Learning for non-experts,
  • In-Memory and In-Memory Columnar formats, for increased OLTP and data analytics performance,
  • Simplified Graph Analytics, simplified use of Document Data (JSON, XML), Spatial Data, Blockchain Data, and so forth.

So, all of these are provided with the Oracle Database under the umbrella of the Converged Database notion, and they simplify the developer's and administrator's work, thanks to having a single converged database rather than implementing multiple fragmented databases and using distributed execution and data movement across them..

I think you have already guessed how these things and natural selection are related. But before that, I want to focus on 2 specific capabilities in this context: storing & modifying "JSON" data, and the "advanced but still in-database caching capabilities" that we have in the Oracle Database -- or should I say the Converged Oracle Database?

Let's start with JSON.. (Note that we even have the Autonomous JSON Database (AJD) -- in addition to ADW and ATP -- in the family of Autonomous Database services in the Oracle Cloud.)

We can simply insert JSON data, load JSON documents, and query JSON data using SQL in Oracle databases. We have built-in functions to update specific parts of JSON documents, and we also have built-in functions to query specific records in the JSON data stored in the Oracle Database.

We have standard functions to map JSON content into a columnar view, and we have all the power of SQL to operate on that JSON once it is exposed as columns. We can index our JSON data in the usual ways, and we can even move a table with a JSON column into the In-Memory Column Store to improve the performance of queries that are interested in that JSON data.

We can partition a table with JSON columns based on a JSON value. There are lots of performance enhancements for SQL/JSON processing, and they are all integrated with Oracle's in-database optimizer; that's why we can expect significant performance speedups for queries..
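To make that concrete, here is a minimal sketch of the native JSON workflow (the table and column names are made up for illustration; the syntax is the standard SQL/JSON style available in 19c):

```sql
-- Store JSON in a relational table; the IS JSON check makes the column a JSON document store.
CREATE TABLE orders
( id   NUMBER PRIMARY KEY,
  doc  CLOB CONSTRAINT orders_doc_is_json CHECK (doc IS JSON) );

INSERT INTO orders VALUES
(1, '{"customer":"ACME","total":350,"items":[{"sku":"A1","qty":2}]}');

-- Scalar extraction with JSON_VALUE.
SELECT json_value(doc, '$.customer') AS customer FROM orders;

-- Relational projection with JSON_TABLE: the document exposed as columns,
-- ready for joins, aggregates, and all the rest of SQL.
SELECT o.id, jt.sku, jt.qty
FROM   orders o,
       json_table(o.doc, '$.items[*]'
         COLUMNS (sku VARCHAR2(10) PATH '$.sku',
                  qty NUMBER       PATH '$.qty')) jt;
```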

The things I just mentioned are mostly based on on-prem deployments, but they also apply to the cloud.

Since we said JSON, some people may have already started to think about MongoDB, right?

The following graph quickly represents the benchmark results gathered in a study made in-house at Oracle in August 2020. It is for the Autonomous JSON Database, but it unleashes the JSON performance of the Oracle Database, and it does this by comparing Oracle with MongoDB Atlas.

Here is the link for the related whitepaper. -->

https://www.oracle.com/a/ocom/docs/autonomous-json-database-whitepaper.pdf

The graph above (+ the whitepaper) reveals that Oracle provides 2-3 times better throughput than MongoDB can.

Of course, people are skeptical about such benchmarks. I respect that, and sometimes I can understand it. But what we see in the document and the graphs shared above is that it works.. The Oracle database is quite good at JSON.

Okay.. Let's take a look at the In-Memory side..

Oracle Database In-Memory (Database In-Memory) is a suite of features that greatly improves performance for real-time analytics and mixed workloads. The In-Memory Column Store (IM column store) is the key feature of Database In-Memory.

(See -> https://docs.oracle.com/en/database/oracle/oracle-database/21/inmem/intro-to-in-memory-column-store.html)
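Enabling it is deliberately simple; a sketch, assuming the Database In-Memory option is licensed and the table name is illustrative:

```sql
-- Instance-level: reserve memory for the IM column store (takes effect after a restart).
ALTER SYSTEM SET inmemory_size = 4G SCOPE = SPFILE;

-- Object-level: mark a table (including a JSON-bearing one) for population
-- into the column store, with a high population priority.
ALTER TABLE sales INMEMORY PRIORITY HIGH;
```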

The document linked below presents the benchmark results, which demonstrate the benefits of Oracle Database In-Memory for SAP applications.

Here is the link -> https://www.oracle.com/technetwork/database/in-memory/overview/benefits-of-dbim-for-sap-apps-2672504.html

The following is the conclusion, for those who don't like to read relatively large documents.

Using the SAP BW-EML benchmark we have proved Oracle Database 12c with the Oracle Database In-Memory option outperforms SAP HANA by a factor of 2X on the same Intel processor for SAP workloads. Oracle Database 12c also shows near linear scalability as the number of database servers is increased and achieved more navigation steps per hour than SAP HANA on half the CPUs.

Well... Now we are talking about scalability in addition to performance. In this context, we see that Oracle Database In-Memory expands the scalability horizon and efficiency of Oracle databases significantly.

A quick chart from the document above -- just to show you the difference.

What I want to underline here is not how Exadata with In-Memory can beat HANA, because these comparisons are relative and may be challenged from some points of view.. But! My intention is to show that the In-Memory option works and does what it promises.

JSON & In-Memory. These were the subjects of today's post, but they are just two important abilities/features -- among many -- that are presented to us with the Oracle Converged Database..

As I finish my article, I would like to make a reference to the analogy used in the title of this post. Let's call it a "reminder of the analogy". That is: natural selection is the engine that drives evolution, and in this physical world that we all live in, everything evolves through natural selection.

As the title of this article implicitly states, natural selection looks for good solutions, and it finds them. So it is not hard to make a connection to our enterprise data environments: in our enterprise environments, where Oracle databases sit at the heart of all the things and processes related to data, many problems are already solved with the capabilities of these databases. And if that's the case, it is similarly tempting to evaluate the good solutions that Oracle has found through its own evolution.