As you may already know (actually, I think you definitely know, since it is 2021 and we are now talking about the 9th generation of Exadata :), we have an I/O Resource Manager in Exadata.. Actually, this is a pretty old subject, but I just found the time to write it down. :)
Also check my previous post (written in 2014) named "Exadata -- For Exadata Database Machine Admins -- Filtered Information" -> https://ermanarslan.blogspot.com/2014/03/exadata-for-exadata-database-machine.html .
In that post, I gave filtered information (including IORM, migration and other topics) for Exadata Admins.
The I/O resource manager in Exadata is called IORM, and it is used for managing the Storage/Cell I/O resources. In addition to the Database Resource Manager and instance caging for CPU resource management, we can also manage our I/O resources with IORM in Exadata.
Here is a diagram for the description of the architecture of IORM. (reference: Centroid)
So, we can manage our I/O resources based on Categories, Databases and Consumer Groups. There is a hierarchy, as you see in the picture.. This hierarchy is used to distribute the I/O.
IORM should be used on Exadata especially if you have a lot of databases running on Exadata Machine.. IORM is a friend of consolidation projects, in my opinion..
In this post, I will give some info about the implementation, along with some example commands (and their purposes) used in real life.
This post is about implementing inter-database I/O resource management, and what we do in these types of implementations is basically manage the I/O resources of the Oracle databases running on Exadata.. So, we do our work at the Cell/Storage level.. The configuration is done on a per-cell basis, and we use cellcli and/or dcli to configure the cells accordingly.
We use the unique names of the databases (db_unique_name) while configuring the IORM inter-database plans, and basically we tell IORM to manage our I/O by following a set of rules.
Here is an example;
"Let 80% of the I/O resources be used by EBS (while the Storage layer is not under a heavy load).. Let 70% of the I/O resources be used by DWH (while the Storage layer is not under a heavy load).. Don't let EBS occupy more than 65% of the I/O resources while DWH is doing heavy I/O. Don't let DWH occupy more than 35% of the I/O resources while EBS is doing heavy I/O. Let other databases (not EBS, not DWH) use the remaining I/O resources.. Don't let other databases prevent EBS or DWH from using I/O resources when they need them."
Here is an example for configuring an IORM inter database plan;
We first check the current plan..
-----------------------
dcli -g ~/cell_group -l root cellcli -e list iormplan detail (I hope you have a proper cell_group file.. If you don't, you can create one, or you can use cellcli to issue the commands on each cell)
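If you don't already have a cell_group file, it is just a plain-text list of your storage cell hostnames, one per line. A minimal sketch (the exacel0* hostnames below are hypothetical; use your own cell names):

```shell
# Create a cell_group file listing the storage cells, one hostname per line.
# The hostnames below are hypothetical examples -- replace them with your own.
cat > ~/cell_group <<'EOF'
exacel01
exacel02
exacel03
EOF

# Quick sanity check: print the list back.
cat ~/cell_group
```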
We set iorm plan objective to auto ->
-----------------------
alter iormplan objective=auto (in each cell) -- auto is a must.. This objective lets IORM decide the appropriate objective depending on the active workload on the cell.
or
dcli -g ~/cell_group -l root cellcli -e alter iormplan objective = auto (in one go using dcli)
We alter the cell to set our IORM plan.. Note that we use "-" at the end of each line (except the last line) as the cellcli line-continuation character, and we can't use LIMIT for the "other" databases.. Using the LIMIT attribute in the "other" directive is not allowed -> CELL-00048: The limit attribute is not permitted when specifying dbplan "other" directives.
Also note that, using the LIMIT attribute, we can cap the maximum I/O for a database. So, we ensure that the database cannot utilize more than that percentage of the I/O resources.
Well, we connect to each cell and issue the following alter iormplan command to tell IORM what to do while managing the I/O resources of our databases;
-----------------------
alter iormplan -
dbplan=((name=EBSPRD, level=1, allocation=65, limit=80, flashcache=on), -
(name=DWPRD, level=1, allocation=35, limit=70, flashcache=on), -
(name=other, level=2, allocation=10, flashcache=on))
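If you prefer to push the same plan to all cells from a compute node instead of connecting to each cell one by one, the command can be wrapped in a dcli call. A sketch (quoting is the tricky part with dcli, so verify it in your environment; note that the "-" continuation characters are not needed when the command fits on a single line):

```shell
# Push the same inter-database plan to all cells in one go.
# Sketch only -- adjust the quoting for your shell before running.
dcli -g ~/cell_group -l root "cellcli -e \"alter iormplan dbplan=((name=EBSPRD, level=1, allocation=65, limit=80, flashcache=on), (name=DWPRD, level=1, allocation=35, limit=70, flashcache=on), (name=other, level=2, allocation=10, flashcache=on))\""
```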
With this action, we have actually configured IORM and we are done.. Still, we check that the IORM plan is active ->
-----------------------
dcli -g ~/cell_group -l root cellcli -e list iormplan detail
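Rather than scanning the full detail output, you can also list just the attributes you care about with the attributes clause of LIST IORMPLAN. A sketch (attribute names as shown in the detail listing; verify them on your cell software version):

```shell
# Show only the plan name, objective and status on each cell.
dcli -g ~/cell_group -l root cellcli -e "list iormplan attributes name, objective, status"
```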
As for monitoring; we use metric_iorm.pl script. We get that script from MOS Note "Tool for Gathering I/O Resource Manager Metrics: metric_iorm.pl (Doc ID 1337265.1)" and follow the instructions documented there.
Okay.. This is pretty much it! I hope you find this useful.