The buffer cache is fast.. It is fast, but it must support concurrent access, right? And because it must support concurrent access, it relies on serialization devices to guarantee consistent modifications inside of it, as well as consistent reads from its contents.
That's why, when you concurrently access the same portions of the buffer cache, your performance slows down.. And sometimes that is not the only effect.. When these serialization devices (latches, basically) are held, the sessions/processes that are concurrently trying to get them spin on the CPU and increase the load average and the CPU utilization..
One more thing we need to know in this context is the shape of the data stored in the buffer cache.
It is a cache, but it is actually for caching buffers. This means that when you query or do DML in the traditional way, the operations are done on the basis of buffers.
There is another caching mechanism in Oracle, which is a little different than the buffer cache. This caching mechanism, or caching store, is called the Result Cache, and it does not cache the data on the basis of buffers.. The Result Cache, as its name suggests, is a cache where the results of queries reside.
It is basically designed to store static data and results.
Its statistics can be seen by querying the v$result_cache_statistics dictionary view, by the way.
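A quick way to look at those statistics (just a minimal sketch; V$RESULT_CACHE_STATISTICS simply exposes name/value pairs such as block counts, create counts, find counts and invalidations):

-- Result cache statistics as name/value pairs
SELECT name, value FROM v$result_cache_statistics;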
Anyways; this cache is designed for static objects, and it is recommended to be used when there is a static object and a static result which need to be directly accessed (in most cases concurrently) by the sessions.
If we have such a static object and if we need to query it, then we can consider using the result cache rather than using the conventional access / buffer cache access.
In these kinds of scenarios, the result cache brings the data faster than the buffer cache, as it already has the data in result format and it just copies it into our session/process's buffer.
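For example (just a sketch; the PRODUCTS table here is the same hypothetical table used later in this post), a query like the following stores its result set in the Result Cache on the first execution, and subsequent identical executions are served straight from it:

-- The query result (not the data blocks) is what gets cached
SELECT /*+ RESULT_CACHE */ product_type, COUNT(*)
FROM products
GROUP BY product_type;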
However; if we try to use the result cache in a dynamic environment, things change.
Things change, because invalidation (which happens when the data and objects that build the contents cached in the result cache change) means "writing to the Result Cache".
Is writing such a bad thing? For 1 write or 10 writes maybe it is not, but consider 1000 concurrent writes..
Why is writing to the result cache concurrently a bad thing, then?
It is because the result cache is protected by a single latch.
In order to write to the cache, a session is required to get an exclusive latch.
This exclusive latch needs to be gotten when there is a cache miss or a cache invalidation..
So a cache hit doesn't require the session to get that exclusive latch.
This exclusive latch, on the other hand, is a single one, and it has no children.
Basically: when you want to write to the cache, you need to get it -> concurrent write access -> Problem
However, if you read from the cache (cache hit), you only need to get a shared latch -> no contention -> NO Problemo :)
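You can actually watch that single latch from the latch statistics (a minimal sketch, assuming the latch name 'Result Cache: RC Latch' as it appears in 11.2 and later):

-- Gets, misses and sleeps for the single result cache latch
SELECT name, gets, misses, sleeps
FROM v$latch
WHERE name = 'Result Cache: RC Latch';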
Here is what Oracle Support says about it ->
The result cache was designed to cache small results sets for a short period of time and once it gets full there is an overhead of handling the data in it. When no empty blocks are found in the RC we attempt to make room first by removing any invalids, then requesting a new chunk to be allocated and finally by removing using LRU. Currently, there is a cost associated with making room in a full result cache, which increases with size of the cache, especially for LRU processing and this is unlikely to change.
If you directly or indirectly put an object, or let's say a result, into the result cache, and the data/objects that produce that result are dynamically changing; then guess what?
You will need to write to the cache..
-- By the word directly, I mean using hints -> SELECT /*+ result_cache */
You will have Result Cache related activity.
You will see a peak for the latch named Result Cache: RC latch.
Note that sometimes the result cache size may be insufficient.. In this case you will see a peak in the latch free waits.. In this specific kind of case, you may increase your result cache size -> init.ora parameter -> RESULT_CACHE_MAX_SIZE
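For instance (the 512M below is just an illustrative value, not a recommendation):

-- RESULT_CACHE_MAX_SIZE is a dynamic parameter; SCOPE=BOTH assumes an spfile
ALTER SYSTEM SET result_cache_max_size = 512M SCOPE = BOTH;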
As for some additional info; the Result Cache increases its size until it reaches its maximum.. When it reaches the maximum size, it starts aging out its contents using the LRU algorithm. (At a single point in time, you may see both the invalid version and the valid version of the same object in the Result Cache, by the way.)
The LRU that is implemented in this context actually behaves as follows;
-> If you need to write, but you can't write because there is no free space in the result cache, then first age out the invalid contents.. After aging out the invalid contents, retry writing. If you still can't write, then age out contents using the LRU.
This also means; if you reach the maximum size in the result cache, then the overhead of writing to it is bigger.
When the overhead is bigger, latches are held longer.
When the latches are held longer, we end up with more processes spinning on CPU.
In short, there is already a significant cost for writing to the Result Cache.. But that cost gets even bigger when the cache is full.
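If you want to see whether you are in that state right now, a rough check like the following can help (a sketch only; on 11.2+ the result cache latch contention surfaces under the generic 'latch free' wait event, where p2 is the latch number):

-- Sessions currently waiting on 'latch free', with the latch name resolved via p2
SELECT s.sid, s.event, ln.name AS latch_name, s.seconds_in_wait
FROM v$session s, v$latchname ln
WHERE s.event = 'latch free'
AND ln.latch# = s.p2;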
By the way; you may report your result cache usage using the Result Cache Memory Report.
execute DBMS_RESULT_CACHE.MEMORY_REPORT(TRUE);
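Note that the report is printed via DBMS_OUTPUT, so in SQL*Plus you need SET SERVEROUTPUT ON before running it to actually see the output. While you are at it, DBMS_RESULT_CACHE.STATUS gives a quick summary of the cache state (a small sketch):

SET SERVEROUTPUT ON
-- Quick status check: ENABLED, BYPASS, etc.
SELECT DBMS_RESULT_CACHE.STATUS FROM dual;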
Anyways; we said writing is bad :) What do we mean by "writing"?
Yes.. Any DML that changes a dependent object makes the related result cached in the Result Cache become invalid, and that triggers a write on the next result cache access..
Consider we have a query which queries a table named PRODUCTS and has a where condition on the product type.
Now consider that the result of this query is in the result cache, and then you update the PRODUCTS table, but you update it for a different product type.
Yes.. Even such a DML, even a no-change DML, will invalidate the cached result in the Result Cache and create a need to write into the Result Cache during the subsequent cache accesses of the relevant queries.
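To make it concrete (a sketch with the same hypothetical PRODUCTS table; the point is that dependency tracking is at the object level, not at the row or predicate level):

-- 1) This result goes into the Result Cache
SELECT /*+ RESULT_CACHE */ product_name, list_price
FROM products
WHERE product_type = 'BOOK';

-- 2) A DML on a completely different product type (even a no-change update)...
UPDATE products SET list_price = list_price WHERE product_type = 'TOY';
COMMIT;

-- 3) ...still invalidates the cached result above, so the next execution of the
--    query in step 1 has to re-execute and write to the Result Cache again.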
So it must be managed. I mean, the Result Cache, the related hints, and the result cache related configuration should be managed well.
--Double check where you use it. Double check the queries with that hint!
Basically->
The same queries, the same bind variables and no DML on the relevant objects -> GOOD for the Result Cache.
DML/DDL activities on the relevant objects, changing bind variables -> NOT GOOD for the Result Cache.
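A quick way to see how your cache is actually doing in this respect (just a sketch; V$RESULT_CACHE_OBJECTS lists every cached result together with its status):

-- How many cached results are still valid ('Published') vs. 'Invalid'
SELECT status, COUNT(*)
FROM v$result_cache_objects
WHERE type = 'Result'
GROUP BY status;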
Anyways, I have one more thing to mention.. This cache is not so stable in release 11gR2, 11.2.0.4..
So, if you can't manage it well, or if you think that you are managing it well but are still getting latching issues from time to time, then I may recommend you disable the Result Cache. (for 11gR2)
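If you decide to go that way, one way to do it (a sketch; test the behaviour in your own release first) is to bypass the cache on the fly and/or disable it via the size parameter:

-- Put the result cache into bypass mode immediately, and flush its contents
EXECUTE DBMS_RESULT_CACHE.BYPASS(TRUE);
EXECUTE DBMS_RESULT_CACHE.FLUSH;

-- Or disable it completely by setting its maximum size to 0
-- (SCOPE=SPFILE plus a restart makes it permanent)
ALTER SYSTEM SET result_cache_max_size = 0 SCOPE = SPFILE;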
There are bugs and patches in this context -> see some of them ->
26436717 RESULT CACHE: "BYPASS OBJECTS" INCORRECTLY USED BY WAITING PROCESSES (Patch)
17305006 SEVERE PERFORMANCE SLOWDOWN WHEN RESULTS CACHE IS THRASHING (Patch)
High "Latch Free" Waits with Contention on 'Result Cache: RC Latch' when RESULT_CACHE_MODE = MANUAL on Oracle 12c (Doc ID 2002089.1) -- applicable for 11G as well.
In some cases Adaptive Dynamic Statistics causes issues with result cache -> alter system set "_optimizer_ads_use_result_cache" = FALSE; (workaround)
I know this has become a long blog post, and I feel that I could write more and more on this, but I need to stop myself somewhere :)
Anyways; I won't go into the details of the Result Cache related db configuration and db parameters, but one last thing;
Even if your result cache mode is manual, your tables may still be configured to be cached ->
Example: ALTER TABLE sales RESULT_CACHE (MODE FORCE); or: CREATE TABLE sales (...) RESULT_CACHE (MODE FORCE);
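You can check whether any of your tables are already in FORCE mode with something like this (a sketch; DBA_TABLES has a RESULT_CACHE column in 11.2 and later):

-- Tables whose results are force-cached regardless of RESULT_CACHE_MODE
SELECT owner, table_name, result_cache
FROM dba_tables
WHERE result_cache = 'FORCE';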
My last words on this topic are; the Result Cache is a nice feature. But we need to be careful using it. We need to be careful making use of it. :)