In this post, I will share some facts about configuring concurrent managers.
My reference is Maris Elsins's blog; thanks to him for pointing out this valuable information.
This is tested for EBS R12. (Recently, tested/revisited for EBS 12.2 as well.)
https://www.pythian.com/blog/performance-settings-of-concurrent-managers/
https://me-dba.com/2016/04/12/internals-of-querying-the-concurrent-requests-queue-revisited-for-r122/
Here are the facts:
- Cache size is per process.
- Sleep time is per process.
- A concurrent manager process waits for <sleep time> seconds and then checks the fnd_concurrent_requests table, but only when it is idle.
- A concurrent manager process that is busy running a concurrent request checks the fnd_concurrent_requests table directly after it completes that request, without waiting for the <sleep time> to pass.
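A minimal Python sketch of that behavior (an illustration of the facts above only, not how the actual manager code is implemented; fetch_pending_requests and run_request are hypothetical placeholders):

import time

def manager_process_loop(fetch_pending_requests, run_request,
                         sleep_time=30, cache_size=30):
    # Models a single concurrent manager process.
    while True:
        # Each process keeps its own cache of up to <cache size> pending requests.
        cache = fetch_pending_requests(cache_size)
        if not cache:
            # Idle: wait <sleep time> seconds before checking
            # fnd_concurrent_requests again.
            time.sleep(sleep_time)
            continue
        for request in cache:
            run_request(request)
        # Busy path: after finishing the cached requests, go straight back
        # to the queue check without waiting for the sleep time.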
So, suppose we have no concurrent requests pending to run, and suppose we configure the Standard Manager with 30 processes, a sleep time of 30 seconds, and a cache size of 30. See what happens:
30 separate Standard Manager processes start. Each of them checks the fnd_concurrent_requests table every 30 seconds.
Let's suppose they run these checks sequentially, and look at what the sleep times mean in practice:
in second 1, process 1 checks the fnd_concurrent_requests
in second 2, process 2 checks the fnd_concurrent_requests
in second 3, process 3 checks the fnd_concurrent_requests
in second 4, process 4 checks the fnd_concurrent_requests
...
...
in second 30, process 30 checks the fnd_concurrent_requests
in second 31, process 1 checks the fnd_concurrent_requests (again)
in second 32, process 2 checks the fnd_concurrent_requests (again)
in second 33, process 3 checks the fnd_concurrent_requests (again)
...
...
in second 59, process 29 checks the fnd_concurrent_requests (again)
So, in 60 seconds almost 60 checks are done against fnd_concurrent_requests; that is roughly 1 check per second.
This is a little high for a system that has nothing to run.
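A quick back-of-the-envelope check of that rate (the values are simply the ones from the example above):

# 30 Standard Manager processes, each sleeping 30 seconds while idle.
processes = 30
sleep_time = 30  # seconds

checks_per_second = processes / sleep_time
checks_per_minute = checks_per_second * 60
print(checks_per_second)   # 1.0  -> one query of fnd_concurrent_requests per second
print(checks_per_minute)   # 60.0 -> about 60 checks per minute, with nothing to run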
What is the correct setting?
It depends on the situation and can be derived using the formula that Maris Elsins shared in his blog:
Sleep time = "# of processes" * (1 - avg utilization percentage) * avg time (in seconds) a request is allowed to stay pending
# of processes = 30
Suppose we have only 1 hour of peak time per day, one hour in which we have important concurrent requests (so the average utilization is 1/24).
Suppose these concurrent requests are critical and should wait at most 10 seconds.
30 * (1 - 1/24) * 10 = 287.5 -> the sleep time should be approximately 280 seconds.
So, as we have 30 processes and they are idle most of the time, they will check the queue every 280/30 ≈ 9 seconds on average.
If this formula calculates a sleep time lower than the "avg time (seconds) a request is allowed to stay pending", it means the concurrent manager process count is not enough; the solution is to add more concurrent manager processes.
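The same calculation as a small Python sketch (the function and variable names are mine, just for illustration; the formula and the numbers are the ones above):

def suggested_sleep_time(num_processes, avg_utilization, max_pending_seconds):
    # Sleep time = # of processes * (1 - avg utilization) * allowed pending time
    return num_processes * (1 - avg_utilization) * max_pending_seconds

# Example values from above: 30 processes, busy roughly 1 hour out of 24
# (average utilization = 1/24), critical requests allowed to pend at most 10 s.
num_processes = 30
max_pending_seconds = 10
sleep_time = suggested_sleep_time(num_processes, 1 / 24, max_pending_seconds)

print(round(sleep_time, 1))                  # 287.5 -> set roughly 280 seconds
print(round(sleep_time / num_processes, 1))  # 9.6   -> average seconds between queue checks

# Sanity check described above: a result below the allowed pending time
# means the process count itself is too low.
if sleep_time < max_pending_seconds:
    print("Add more concurrent manager processes.")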
What about the caching?
process 1 caches 30 requests (1-30) and starts executing request 1
process 2 caches 30 requests (2-31) and starts executing request 2, as request 1 is already being run by process 1
process 3 caches 30 requests (3-32) and starts executing request 3, as request 2 is already being run by process 2
....
.....
process 30 caches 30 requests (30-59) and starts executing request 30, as request 29 is already being run by process 29
...
suppose now process 1 finishes executing request 1;
process 1 checks request 2 and sees it is locked, as it is being executed by another process.
process 1 checks request 3 and sees it is locked, as it is being executed by another process.
process 1 checks request 4 and sees it is locked, as it is being executed by another process.
process 1 checks request 5 and sees it is locked, as it is being executed by another process.
....
process 1 checks request 30 (which is the last request in its cache) and sees it is locked, as it is being executed by another process.
What does process 1 do then?
It caches again. It caches another 30 requests. So where is the benefit of caching?
Well, we can conclude that caching gives us a benefit when there are only a few concurrent manager processes available. For environments where we have several concurrent manager processes, such as 10 Standard Manager processes or so, a generally recommended cache size is 1.
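The walkthrough above as a toy Python sketch (an assumed simplification, not how the manager actually fetches its cache; it just reproduces the numbers in the walkthrough):

# Requests are numbered 1, 2, 3, ... in the pending queue, and process p
# caches requests p .. p+29, as in the walkthrough above.
def simulate_caching(cache_size=30, num_processes=30):
    taken = set()   # requests already locked by some process
    caches = {}
    for p in range(1, num_processes + 1):
        caches[p] = list(range(p, p + cache_size))
        # Each process locks and runs the first cached request not taken yet.
        for req in caches[p]:
            if req not in taken:
                taken.add(req)
                break
    # Process 1 finishes request 1 and walks the rest of its cache:
    usable = [r for r in caches[1][1:] if r not in taken]
    return usable

print(simulate_caching())   # [] -> all 29 remaining cached entries are locked,
                            #       so process 1 has to query and cache again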