Thursday, October 31, 2019

RAT -- Export/Import AWR problem // DBMS_WORKLOAD_CAPTURE.EXPORT_AWR // ORA-39213: Metadata processing is not available + ORA-39001: invalid argument value


Recently implemented RAT (Capture and Replay) in an Exadata Cloud at Customer migration project.
The capture phase completed properly, but we weren't able to export the AWR data from the source to compare it with the target (with the AWR data generated during the replay).

DBMS_WORKLOAD_CAPTURE.EXPORT_AWR was failing like below;

ORA-20115: datapump export encountered error:
ORA-39006: internal error
ORA-39213: Metadata processing is not available
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 1378
ORA-39006: internal error
ORA-06512: at "SYS.DBMS_WORKLOAD_CAPTURE", line 1911

Anyways, DBMS_WORKLOAD_CAPTURE.EXPORT_AWR doesn't actually do any magic.
awrextr.sql, which resides in the $ORACLE_HOME/rdbms/admin directory, does the same thing..
However; in this case awrextr.sql was getting the same ORA-39213 error..
The problem was probably related to the XSL stylesheets that are used by these kinds of utilities..
However; the stylesheets were supposedly loaded already.
So I decided to check them with the following;

SET SERVEROUTPUT ON
BEGIN
  IF dbms_metadata_util.are_stylesheets_loaded THEN
    dbms_output.put_line('TRUE');
  ELSE
    dbms_output.put_line('FALSE');
  END IF;
END;
/

The output was:

META:15:50:54.881: bad stylesheet: kuamvl
FALSE

So one stylesheet was corrupt, and I decided to load all the stylesheets once again in one go.

exec dbms_metadata_util.load_stylesheets;

After this move, I got a TRUE output from the same check.

BEGIN
  IF dbms_metadata_util.are_stylesheets_loaded THEN
    dbms_output.put_line('TRUE');
  ELSE
    dbms_output.put_line('FALSE');
  END IF;
END;
/
TRUE

This issue was solved, but there was another issue waiting for me :)

This time,  DBMS_WORKLOAD_CAPTURE.EXPORT_AWR was failing with the following;

ORA-39001: invalid argument value
Exception encountered in AWR_EXTRACT
begin
*
ERROR at line 1:
ORA-20115: datapump export encountered error:
ORA-39001: invalid argument value
ORA-39041: Filter "EXCLUDE_PATH_EXPR" either identifies all object types or no object types.
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 1378
ORA-06512: at line 3

These were all related to Data Pump, as these types of commands use Data Pump in the background to extract the data they need.

This time, I checked the metanametrans$ table -> select count(*) from metanametrans$;
It was empty... (normally it should have records.. the count is usually in the thousands, higher than 2000 or 3000 depending on the version)

So I decided to run catmet2.sql

catmet2.sql

Rem DESCRIPTION
Rem Creates heterogeneous type definitions for
Rem TABLE_EXPORT
Rem SCHEMA_EXPORT
Rem DATABASE_EXPORT
Rem TRANSPORTABLE_EXPORT
Rem Also loads xsl stylesheets
Rem All this must be delayed until the packages have been built.

After that, I executed utlrp.sql.

Basically the solution was:

sqlplus / as sysdba

@$ORACLE_HOME/rdbms/admin/catmet2.sql
@$ORACLE_HOME/rdbms/admin/utlrp.sql
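Before retrying the export, a quick sanity check like the one below confirms that the dictionary is back in shape (the counts are just what we expect to see, not exact numbers);

-- this was 0 before running catmet2.sql; now it should return a few thousand rows
select count(*) from metanametrans$;

-- and a simple smoke test for the metadata API itself
SELECT dbms_metadata.get_ddl('TABLE', 'DUAL', 'SYS') FROM dual;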

The issue disappeared :)

After this point, we could export the AWR data, import it into the target, and create our comparison report successfully :)

Hope this helps..

Wednesday, October 30, 2019

RDBMS -- Migrating from Microsoft Sql Server to Oracle // Sql Developer 19.2 -- Migration Wizard & Copy to Oracle

Migrations are popular these days, just like they always were.
Cloud migrations, Oracle migrations, Sql Server migrations, Data Center migrations, Cloud at Customer migrations etc.. you name it :)

Today, I want to share a migration approach that can be taken in order to migrate data from Sql Server to Oracle.
I prefer using the term data rather than the term database, because the term database is used a little differently in Sql Server than it is in the Oracle world.

Anyways;

Sql Server to Oracle migrations can be done in several ways. There are tools which can be purchased, like this good one:

striim -> www.striim.com

Migrating Sql Server to Oracle is mostly logical migration work, right? You can even write your own tool to migrate your data from Sql Server to Oracle.

In this post, however; I will give you an approach which is based on using Oracle Sql Developer 19.2.

This tool has great features, and it is fun to play with them..

In this post; Sql Developer's Migration Wizard & Copy to Oracle features will be our focus while going through the migration process.

Let's see our source and target environments:


Quite realistic, right? :) The 11.2.0.4 customer should consider upgrading their databases to a supported version, a long-term supported one, maybe 19c.. Anyways, we usually don't migrate the database and upgrade it at the same time, so let's not lose our focus.

As I mentioned, the migration operations are performed using SQL Developer as the migration tool. We run Sql Developer on Linux, but it can be run on Windows too...

Sql Developer does this migration task in 3 steps...

Normally, all of these 3 steps can be performed ONLINE.
However; in our case, GENERATE and DATA MOVE steps are performed OFFLINE..
This is done on purpose, by the way..
We just needed to intervene in the process, as some of the table formats, and most of the time the naming used in the source, couldn't be migrated to Oracle directly. So we needed to do some manual work by stopping the Sql Developer migration and letting it resume once we completed our manual actions.


PREREQUISITES
  • Install SQL Developer (19.2.0) for Linux x86-64. Download SQL Developer from the following link:
https://download.oracle.com/otn/java/sqldeveloper/sqldeveloper-19.2.1.247.2212.noarch.rpm
  • Install SQL Developer as the root user;
          rpm -ivh sqldeveloper-19.2.1.247.2212.noarch.rpm
  • Download the jTDS jar file from the link below and follow the steps.
https://sourceforge.net/projects/jtds/

  • Add jar to SQL Developer as follows;

SQL Developer > Tools > Preferences > Database > Third Party JDBC Drivers > Add Entry


  • Connect to SQL Server Database as a privileged user;
Add connection information as follows to “SQL Developer > File > New Connection “

Name : SQL
Username : user
Password : password
Hostname : ermanserver1
Port : 1433


  • Create Oracle Database Migration Repository

A database migration repository user/schema is created on the target Oracle Database so that we can perform all the data migration steps;

CREATE USER MIGREP IDENTIFIED BY MIGREP DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP;
ALTER USER MIGREP QUOTA UNLIMITED ON USERS;
GRANT ALTER ANY ROLE, ALTER ANY SEQUENCE, ALTER ANY TABLE, ALTER TABLESPACE, ALTER ANY TRIGGER, COMMENT ANY TABLE,CREATE ANY SEQUENCE,CREATE ANY TABLE,CREATE ANY TRIGGER,CREATE ROLE,CREATE TABLESPACE,CREATE USER, DROP ANY SEQUENCE, DROP ANY TABLE, DROP ANY TRIGGER, DROP TABLESPACE, DROP USER, DROP ANY ROLE, GRANT ANY ROLE, INSERT ANY TABLE, SELECT ANY TABLE, UPDATE ANY TABLE, DBA TO MIGREP;
GRANT ALL PRIVILEGES,CREATE ROLE , ALTER ANY TRIGGER,CREATE USER, CONNECT, RESOURCE, CREATE VIEW, CREATE MATERIALIZED VIEW, CREATE PUBLIC SYNONYM TO MIGREP WITH ADMIN OPTION;

  • Associate Migration Repository
After the migration repository user is created, we establish our Oracle Database connection via MIGREP schema using the SQL Developer. After that, we associate our migration repository by clicking on the Associate Migration Repository button, as shown below;


MIGRATION

Migration consists of 3 steps: Capture, Generate and Data Move.

CAPTURE

In the Capture step, we connect both to the SQL Server Database and to the Oracle Database via Sql Developer. After establishing our connections, we start our capture by clicking the Migrate button as shown below. -- As a result, we get a master sql file created. This master sql is used to create the database objects in the target Oracle Database in the next step.

"SQL Developer Tools > Migration > Migrate"


The migration wizard requests some information for the capture phase.
As shown below, we populate the related screens and start our migration;

Specify a name and a directory for the migration project... The migration files are located in this directory.


Source Database - Select Online Mode for Source Database (SQL Server)


Choose the schema to be migrated. e.g. INTERFACE schema



Conversion Matrix; in the Convert menu, we can modify the data type conversion matrix. Although the Oracle Data Type column is modifiable, it is okay to continue with the default options (at least, it was okay in our case).
However; we need to modify some of the settings under the Advanced Options tab. So we click the Advanced Options button..


In File Creation Options, we select "One Single File option."
In General Options, we uncheck "Generate Comments, Latest Privilege Schema Migration, Generate Failed Objects".
We check "Generate Stored Procedure for Migrate Blobs Offline, Create Users, Generate Separate Emulation Package".
We also uncheck  “Target Oracle Database 12c” as our Target database version is 11g.
Object Types: In our case we move only the data, the tables, so we check “Tables“ only.


Identifier Options : In Identifier Options window, we check only the checkbox named “Microsoft SQL Server: Is Quoted Identifier On“.


Translators: We uncheck all Oracle Database 12c Features.


Translate: SQL Objects : We move all the Sql Objects to the left side (In our case we migrate only the tables to the target system.)


Target database: Online generation doesn't really work for us; therefore we select the offline mode for the GENERATE step. This is mainly because we need to modify the generated script (when necessary) and run it manually.


Move Data: As we selected the Offline mode for the GENERATE step, we naturally select the offline mode here as well.


Summary: We  review the migration summary and continue..


Editing and Running the "Master.sql":

After completing the Migration Wizard, a master.sql is created in the project directory. (Note that we defined the project directory earlier, in the relevant migration wizard screen.)

The Project directory contains folders named “generated” and “datamove”. The "master.sql" is created in the "generated” folder.


The master.sql is the one that is used for creating the users/schemas/objects in the target Oracle Database.
We execute this "master.sql" manually, after we change the passwords in it to the new passwords we prefer.
We just run the master.sql using sqlplus.. If any errors are received while running the master.sql script, they should be carefully examined.

What do we modify in master.sql? Well, it depends.. We replace the Turkish characters used in table and column names; basically this kind of stuff.. We edit the master.sql to make the DDL statements inside it compatible with Oracle.. (Note that our target system version (Oracle Database version) was 11.2.0.4.)
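Just to illustrate how we run it; in this sketch we execute the edited master.sql with sqlplus (as sysdba here, since it creates users) and spool the output so that any errors can be reviewed afterwards. The paths below are just placeholders based on our project directory layout;

sqlplus / as sysdba
spool /home/oracle/LOG/master_run.log
@/home/oracle/LOG/01102019/INT_ITE1/generated/master.sql
spool off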

GENERATE:

After the capture step, we continue with the Generate step.
In the Generate step, we need to install the SQL Server sqlcmd and bcp command-line tools on Oracle Linux 6 (in our case, the host that we ran Sql Developer on was OEL 6).
After these steps are completed, we run the "MicrosoftSQLServer_data.sh" script on our host.
The script exports data from the source database to several files.
Note that, before running this script, the Linux environment variables "LANG" and "NLS_LANG" must be defined properly in our shell.

Installing SQL Server sqlcmd and bcp Utility:

sudo su
curl https://packages.microsoft.com/config/rhel/6/prod.repo > /etc/yum.repos.d/mssql-release.repo
exit
sudo yum remove unixODBC-utf16 unixODBC-utf16-devel #to avoid conflicts
sudo ACCEPT_EULA=Y yum install msodbcsql
# optional: for bcp and sqlcmd
sudo ACCEPT_EULA=Y yum install mssql-tools
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc
# optional: for unixODBC development headers
sudo yum install unixODBC-devel

Running “MicrosoftSQLServer_data.sh”

export LANG=tr_TR.ISO-8859-9   (in our case)
export NLS_LANG=Turkish_TURKEY.WEISO8859P9 (in our case)
cd /home/oracle/LOG/01102019/INT_ITE1/datamove/2019-10-01_14-28-24
./MicrosoftSQLServer_data.sh <ip_address_of_sqlserver> <user> <password>

DATA MOVE:

Once the Generate step is completed, we have the data exported from Sql Server in several files located on the host that we run our Sql Developer on.
In the Data Move phase, we import this data into Oracle.
The Generate step uses the sqlcmd and bcp utilities to export the data from Sql Server.
The Data Move step uses sqlldr (SQL*Loader) to import this data into Oracle.

In the Data Move step, we copy all the files to the target database host, and execute the oracle_loader.sh from there.
Before running this script, Linux environment variables "LANG" and "NLS_LANG" must be defined properly in our terminal.

 Running oracle_loader.sh

export LANG=tr_TR.ISO-8859-9 (in our case)
export NLS_LANG=Turkish_TURKEY.WEISO8859P9 (in our case)
cd /home/oracle/LOG/01102019/INT_ITE1/datamove/2019-10-01_14-28-24
./oracle_loader.sh <db_name/TNS_Entry_of_oracle>  <user>  <password> 

-- Note that we run this oracle_loader.sh script from a host where an ORACLE_HOME is installed (recommended: the target database host). That's because when an Oracle Home is installed, we get SQL*Loader installed as well. Note that we also populate the tnsnames.ora file in that Oracle Home with the TNS entry of our target Oracle database. (As you may have already seen in the example command above, we use the TNS_Entry_of_oracle as the first command line argument while running oracle_loader.sh.)

COMPARISON:

After the data move, we can say that our migration is completed. But we need to compare the data. We need to ensure that the data stored in the source (Sql Server) is properly migrated to the target (Oracle)..
So, after completing the migration steps, we must compare the metadata and data between the source and target databases.

Compare Table Count

We compare the table count with the following sql queries.

SQL Server Table Count SQL

SELECT Count(*)
FROM erman_db.information_schema.TABLES
WHERE table_type = 'BASE TABLE'

Oracle Table Count SQL

SELECT Count(owner) AS "TABLE COUNT"
FROM dba_objects
WHERE owner LIKE ( '%ERMAN_DB' )
AND object_type = 'TABLE';

Compare Row Count

We compare the row count of the migrated tables with the following sql queries.

SQL Server Table Row Count SQL 

SELECT Sum([partitions].[rows]) AS [TotalRowCount],
       [tables].NAME AS [TableName],
       Schema_name(schema_id) AS [SchemaName]
FROM sys.tables AS [Tables]
JOIN sys.partitions AS [Partitions]
ON [tables].[object_id] = [partitions].[object_id]
AND [partitions].index_id IN ( 0, 1 )
GROUP BY Schema_name(schema_id), [tables].NAME

Oracle Table Row Count SQL

SELECT num_rows,
table_name,
owner
FROM dba_tables
WHERE owner LIKE ( '%ERMAN_DB' )
ORDER BY num_rows DESC;
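Note that num_rows in dba_tables is populated by the optimizer statistics; so before running this comparison, we gather statistics on the migrated schema. A minimal sketch (the schema name here is just our example);

-- gather optimizer statistics so that num_rows reflects the current row counts
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'ERMAN_DB');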

KNOWN ISSUES:

In our tests, we saw cases where some of the tables could not be migrated directly using Sql Developer's Migration Wizard.
They could not be migrated due to errors that Sql Developer or SQL Loader (in the background) got during the migration process.

The errors that we recorded during our tests are listed below;

ERROR : SQL*Loader-466: Column PROMPT does not exist in table XXXX.
ERROR : 2 Rows not loaded because all fields were null. (XXX)
ERROR : 74 Rows not loaded because all fields were null.
ERROR : SQL*Loader-350: Syntax error at line 4. Illegal combination of non-alphanumeric characters into table XXXX
ERROR : Record 71: Discarded - all columns null.
ERROR : ORA-01400: cannot insert NULL into (XXX.XXX)
ERROR : Error on table XXXX, column YYYY
ERROR : SQL*Loader-500: Unable to open file (XXX ) 
ERROR : SQL*Loader-553: file not found S
ERROR : SQL*Loader-509: System error: No such file or directory
ERROR : SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
ERROR : Field in data file exceeds maximum length
ERROR : Rejected - Error on table XXX, column YYY. Field in data file exceeds maximum length

Actually, by looking at the error messages, we can directly tell the causes. These causes may sometimes be fixed by editing the SQL Loader files or by taking some actions on the source (Sql Server) side.. However; we didn't take any manual actions to fix these errors.

What we did was simply use the "Copy to Oracle" method for the failing tables, and it worked.
We dropped the failed tables which contained rows in the target, and used Copy to Oracle to re-migrate them.
We didn't drop the failed tables which didn't contain any rows in the target; we just used Copy to Oracle to re-migrate them directly.
Somehow, Copy to Oracle, which uses sqlcl in the backend, could migrate these kinds of table data properly. This may also mean that the "Copy to Oracle" feature of Sql Developer is a little more stable than the Migration Wizard :)
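To spot the candidates for this kind of re-migration, a simple check like the one below can be run on the target (again, the schema name is just our example); the tables that ended up empty after the data move are the first suspects;

-- list the migrated tables that ended up empty in the target (run after gathering statistics)
SELECT table_name
FROM dba_tables
WHERE owner = 'ERMAN_DB'
AND (num_rows = 0 OR num_rows IS NULL)
ORDER BY table_name;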

Monday, October 28, 2019

ODA-- Health Check Reports // the control points & the subtitles

You probably already guessed that I'm doing ODA health checks nowadays.
Mostly, these health checks are done on virtualized platforms, but they are applicable to bare metal platforms too.


In this blog post, I will briefly introduce the things that I check in an ODA Health check.
I usually check these control points, try to detect any errors or misconfiguration, prepare my action plans, my suggestions and lastly create my health check report accordingly.
Almost forgot :) I also write a conclusion for managers... One or two paragraphs which can be understood without deep technical knowledge about ODA and its components. A plain and simple conclusion summarizing the controls and action plans.

So here is the list of the subtitles that I use in my ODA health check reports.. (note that they are based on my control points)
  • Current, minimum Supported and Recommended versions
  • Hardware checks
    • ILOMS
    • BIOS information
    •  NETWORK Cabling
    • Power Cabling
    • LED panels (the lights on the front panel of the ODA nodes)
  • RDBMS
    • Parameters
    • Performance (mostly based on the AWR reports)
    • Backup policy
    • Errors and traces
    • Version, critical one-off patches and PSU level
  • GRID
    • ASM configuration
    • ASM errors and traces
    • Version, critical one-off patches and PSU level
    •  Cluster resources and cluster interconnect
  • Operating System (OVM)
    • Resource consumption
    • utilization
    • load
    • errors and traces
    • configuration (network configuration included)
    • OAKD & XEND analysis
    • boot logs, last command output
    •  SAR command outputs
  • SOS report outputs
  • OAK Validate outputs
  • ORACHK outputs

ODA X6-2 HA -- Virtualized // Network Architecture / Internal NFS and fault tolerant cabling

Recently dealt with a strange ODA issue.. The environment was a virtualized ODA X6-2 HA, and the customer was complaining about the strange networking behaviour of the machine..
The issue appeared during a network H/A switch test...
That is, this ODA machine was connected to 2 network switches (good ones) to provide full path network fault tolerance.

As you may already know; these types, I mean ODA X6-2 HA machines, have 2 x 4 port ethernet; that is, 4 ethernet ports attached to each of their compute nodes. This makes a total of 8 ports, and when cabled correctly to the customer switches, it must provide at least some level of fault tolerance..

Okay, what do we expect from this kind of a configuration?

Actually, we expect fault tolerance against port failures, against cable failures, and even against a switch failure..


Well.. as you may guess, this environment wasn't behaving as expected, and that's why this issue was escalated to me...

Before giving you the solution (which is actually a basic one), I will try to give you the big picture...
What I mean by the big picture is the network architecture of a virtualized ODA.
I will give you that info because I see there is a misconception there.

This kind of a misconception may be expected actually.. Because, these machines have an architecture that contains lots of things.. Not only the hardware-related things, but also the software-related things (installations and configurations).

Although ODA environments can be installed and configured easily, there are Infiniband cards, private networks, NFS shares between domains, ASM, ACFS, OVM, shared repositories, bonding, eth devices, virtual networks and virtual bridges installed and configured on them.

Let's start with the interconnect and with the internal NFS which is exported from the ODA_BASE to DOM 0.

The interconnect is based on Infiniband.. (it depends on your order of course, but most of the time it is based on Infiniband.)

So the RAC interconnect is based on the infiniband and the cluster_interconnects parameter should be set accordingly for the RAC databases running on the privileged ODA_BASE domain.
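A quick, generic way to see what the instances are actually using for the interconnect (and whether it comes from the parameter) is to check it from the database itself, running in ODA_BASE; nothing ODA-specific here;

-- which interface/IP each instance uses for the interconnect, and where that setting comes from
SELECT inst_id, name, ip_address, is_public, source
FROM gv$cluster_interconnects;

-- and the parameter itself
show parameter cluster_interconnects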

The NFS shares that store the shared repositories and the virtual machine files are a little interesting.
ODA_BASE reaches the shelf, actually the disks, and creates ACFS/ASM filesystems on top of them; then it shares these filesystems to DOM0.

So, yes these NFS shares are exported from a child domain (ODA_BASE) to the main domain/hypervisor level (DOM 0).

The role of these shares is very important.. They basically store the virtual machine files, and this means when they are off, the virtual machines are off.

By the way, these NFS shares are not based on the infiniband!
These NFS shares are not based on the ethernet either..
These NFS shares are actually based on the virtual bridges, a logical network between ODA_BASE and DOM0.. So this means the network traffic on these NFS shares is based on memory copy. No physical devices are in the path..

Note that the virtual bridges are in memory; they are code, not physical network hardware.
There is no physical media (i.e. an Ethernet cable) when we talk about these types of bridges. So their capabilities are based on the limits of the CPUs moving the packets between the virtual NICs. This configuration should give the best performance.. However; we need to take NFS and TCP traffic-related bottlenecks into account as well..
In other words; even if this traffic is based on TCP and NFS in the upper layers, this traffic is actually based on memory operations between the VM domains in the background.

There is another thing to be aware of..
These NFS shares are running on the virtual ip addresses (VIPs). These VIP addresses are dynamic cluster resources, as you know.

So, if ODA_BASE fails on node1, the VIP of ODA_BASE node1 is transferred to ODA_BASE node2. However; at this time, the routes change.. In this case, the NFS traffic between node2 and DOM0 node1 goes through the Infiniband. (The route outputs of the DOM0 and ODA_BASE machines support this info as well.)



Yes, knowing this, we can say that even when there is a problem in the Infiniband or ethernet-based network of the ODA, the NFS shares (which are very crucial) will continue to work! The VMs created on the ODA will keep running..
  
Okay, let's take a look at the ethernet side of this.

As mentioned in the beginning of this blog post, there are 4 ethernet ports in each one of the ODA nodes.

When we look from the DOM0 perspective, we see that in each ODA node we have eth0, eth1, eth2 and eth3.

These devices are paired and bonded (in active-backup mode in DOM0).

eth0 and eth1 are the slaves of bond0.
eth2 and eth3 are the slaves of bond1.
bond0 is mapped to the virtual bridge named net1, and net1 is presented to ODA_BASE.
bond1 is mapped to the virtual bridge named net2, and net2 is presented to ODA_BASE.

ODA_BASE machines by default use net1, so in the background they use -> bond0 in DOM0, which means eth0 and eth1 in DOM0.

net2 is not configured by default, so we must do additional configuration to make use of it.

In order to do this configuration, we can use "oakcli configure additionalnet", or we can use a manual method by editing the ifcfg files.

Anyways, suppose we have all 4 ports cabled and eth1 is enabled on ODA_BASE. Suppose we have 2 switches in front of the ODA and we connect our cables to these switches randomly (2 cables to each of the switches per ODA node)... So now, are we on the safe side? Can we survive a port failure, can we survive a switch failure?

Well, that depends on the cabling.. This is for sure, as this is the issue that made me write this blog post..

That is, we need to have some sort of cross cabling...

We need to ensure that eth0 and eth1 of a node (in DOM0) are connected to different switches.

If we cable them into the same switch, then we can't survive a switch failure, because our bond will be down.. And if it is bond0, we will lose all the database traffic in the ODA_BASE of the relevant node..

There is RAC, yes! RAC lets us survive even this kind of a disaster, but why create extra work for RAC? Why lose our active resource because of a network problem? :)
By the way, even with RAC, the application design (TAF etc..) should be done properly in order to have zero downtime in the application layer.

Anyways, the cause that made me write this blog post was incorrect cabling, and here I'm giving you the correct cabling for the eth devices of the ODA nodes.


Saturday, October 26, 2019

RDBMS -- things to know about the Result Cache / Result Cache : RC latch

You know we have a buffer cache in the SGA to cache the database blocks in order to decrease the I/O latency; in other words, to decrease the disk I/O..

The buffer cache is fast.. It is fast, but it must support concurrent access, right? As it must support concurrent access and concurrency itself, it must be aligned with serialization devices to guarantee consistent modifications inside of it, as well as consistent reads from its contents.

That's why, when you concurrently access the related portions of the buffer cache, your performance slows down.. Sometimes that is not the only effect.. I mean, when these serialization devices (latches, basically) are held, your sessions/processes which are concurrently trying to get them spin on the CPU and increase the load average + CPU utilization..

One more thing that we need to know in this context is the shape of the data that is stored in the buffer cache.

It is a cache, but it is actually for caching buffers. This means that when you query or do DML in the traditional way, the operations are done on the basis of buffers.

There is another caching mechanism in Oracle, which is a little different than the buffer cache. This caching mechanism, or cache store, is called the Result Cache, and it does not cache the data on the basis of buffers.. The Result Cache, as its name suggests, is a cache where the results of queries reside.

It is basically designed to store the static data and results.

Its statistics can be seen by querying the v$result_cache_statistics dictionary view, by the way.
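For example;

-- a quick look at the result cache statistics
SELECT name, value FROM v$result_cache_statistics;
-- statistics such as "Create Count Success", "Find Count" and "Invalidation Count" are the interesting ones here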

Anyways; this cache is designed for static objects, and it is recommended when there is a static object and result which needs to be accessed directly (in most cases concurrently) by the sessions.

If we have such a static object and if we need to query it, then we can consider using the result cache rather than using the conventional access / buffer cache access.

The result cache in these kinds of scenarios brings the data faster than the buffer cache, as it has the data in result format and just copies it to our session's/process's buffer.

However; if we try to use the result cache in a dynamic environment, things change.
Things change, because invalidation (when the data and objects that build the contents cached in the result cache change) means "writing to the Result Cache".

Writing is not a very bad thing, right? Well, for 1 write or 10 writes maybe that is so, but consider 1000 concurrent writes..

Why is writing to the result cache concurrently a bad thing then?
It is because the result cache is protected by a single latch.

In order to write to the cache, a session needs to get an exclusive latch.
This exclusive latch needs to be acquired when there is a cache miss or a cache invalidation..

So a cache hit doesn't require the session to get that exclusive latch.
This exclusive latch, on the other hand, is a single one, and it has no children.
Basically, when you want to write to the cache, you need to get it -> concurrent write access -> Problem
However, if you read the cache (cache hit), you only need a shared latch -> no contention -> NO Problemo :)

Here is what Oracle Support says about it ->

The result cache was designed to cache small results sets for a short period of time and once it gets full there is an overhead of handling the data in it. When no empty blocks are found in the RC we attempt to make room first by removing any invalids, then requesting a new chunk to be allocated and finally by removing using LRU. Currently, there is a cost associated with making room in a full result cache, which increases with size of the cache, especially for LRU processing and this is unlikely to change.

If you indirectly or directly put an object or let's say a result into the result cache and if the data/object that produces that result is dynamically changing; then guess what?
You will need to write to the cache..

-- By the word directly, I mean using hints -> SELECT /*+ result_cache */
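Just to make that hint concrete; a minimal example would look like the following (the PRODUCTS table here is only an illustration);

-- the result of this query is stored in the result cache on the first execution,
-- and subsequent executions read it from there as long as it stays valid
SELECT /*+ RESULT_CACHE */ COUNT(*)
FROM products
WHERE product_type = 'HARDWARE';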

You will have Result Cache related activity.
You will see a peak for the latch named Result Cache: RC latch.

Note that sometimes the result cache size may be insufficient.. In this case, you will see a peak in latch free waits.. In this specific case, you may increase your result cache size -> init.ora parameter -> RESULT_CACHE_MAX_SIZE
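Something like the following can be used for that (the size below is just an example value, not a recommendation);

-- check the current setting
show parameter result_cache_max_size
-- the parameter is dynamic, so it can be increased online (example value only)
ALTER SYSTEM SET result_cache_max_size = 256M;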

As additional info; the Result Cache increases its size until it reaches its maximum.. When it reaches the maximum size, it starts aging out its contents using the LRU algorithm. (At a single point in time, you may see both the invalid version and the valid version of the same object in the Result Cache, by the way.)

The LRU that is implemented in this context actually behaves like the following;

-> If you need to write; but if you can't write because there is no free space in the result cache; then first age out the invalid contents.. After aging out the invalid contents, retry writing. If you can't write even now; then age out the contents using the LRU.

This also means; if you reach the maximum size in result cache, then the overhead of writing to it is bigger.

When the overhead is bigger, latches are held longer.
When the latches are held longer, we end up with more processes spinning on the CPU.

In short, there is already a significant cost for writing to the Result Cache.. But that cost gets even bigger when the cache is full.

By the way; you may report your result cache usage using the Result Cache Memory Report.
execute DBMS_RESULT_CACHE.MEMORY_REPORT(TRUE);

Anyways; we said writing is bad :) What do we mean by "writing?"

Yes.. Any DML that changes a dependent object makes the related result cached in the Result Cache become invalid, and that triggers a write on the next result cache access..

Consider we have a query which queries a table named PRODUCTS and has a where condition on the product type.

Now consider that the result of this query is in the result cache, and then you update the PRODUCTS table, but for a different product type.

Yes.. Even such a DML, even a no-change DML, will invalidate the cached result in the Result Cache and create a need to write into the Result Cache during the subsequent cache accesses of the relevant queries.
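You can actually watch this invalidation happening by checking v$result_cache_objects before and after such an update (the PRODUCTS table, its columns and the product types are, again, just an illustration);

-- 'Published' means the cached result is usable; 'Invalid' means it must be rebuilt (written) on the next access
SELECT status, name FROM v$result_cache_objects WHERE type = 'Result';

-- an update that touches a completely different product type, and changes nothing...
UPDATE products SET price = price WHERE product_type = 'SOFTWARE';
COMMIT;

-- ...still flips the cached result of the query above to 'Invalid'
SELECT status, name FROM v$result_cache_objects WHERE type = 'Result';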

So it must be managed. I mean the Result Cache, the related hints, and the result cache related configuration should be managed well.
-- Double check where you use it. Double check the queries with that hint!

Basically->

The same queries, the same bind variables and no DML on the relevant objects -> GOOD for the Result Cache.
DML/DDL activity on the relevant objects, changing bind variables -> NOT GOOD for the Result Cache.

Anyways, I have one more thing to mention.. This cache is not so stable in release 11gR2 (11.2.0.4)..
So, if you can't manage it well, or if you think that you are managing it well but still get latching issues from time to time, then I may recommend disabling the Result Cache (for 11gR2).

There are bugs and patches in this context -> see some of them below ->

26436717  RESULT CACHE: "BYPASS OBJECTS" INCORRECTLY USED BY WAITING PROCESSES (Patch)
17305006  SEVERE PERFORMANCE SLOWDOWN WHEN RESULTS CACHE IS THRASHING (Patch)
High "Latch Free" Waits with Contention on 'Result Cache: RC Latch' when RESULT_CACHE_MODE = MANUAL on Oracle 12c (Doc ID 2002089.1) -- applicable for 11G as well.
In some cases Adaptive Dynamic Statistics causes issues with result cache -> alter system set "_optimizer_ads_use_result_cache" = FALSE; (workaround)


I know this became a long blog post and I feel that I can write more and more on this, but I need to stop myself somewhere :)

Anyways; I won't go into the details of the Result Cache related db configuration and db parameters, but one last thing;

Even if your result cache mode is manual, your tables may still be configured to be cached ->

Example: ALTER TABLE sales RESULT_CACHE (MODE FORCE); e.g.: CREATE TABLE sales (...) RESULT_CACHE (MODE FORCE);
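So it is worth checking whether any table in the database is annotated like that; dba_tables exposes the table annotation in its RESULT_CACHE column, so a check like the following can be used;

-- tables annotated with RESULT_CACHE (MODE FORCE)
SELECT owner, table_name
FROM dba_tables
WHERE result_cache = 'FORCE';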

My last words on this topic are; the Result Cache is a nice feature, but we need to be careful when using it, and careful about where we make use of it. :)

Tuesday, October 1, 2019

LINUX -- Installing SQL Server sqlcmd and bcp command-line tools on Oracle Linux 6

Currently working on a "SQL Server to Oracle Migration" project and dropping a useful installation method here.

Note that, I will prepare a detailed blog post for this migration later in the upcoming days. Stay tuned for this as well!

This blog post is about BCP (Bulk import-export utility)...

As quick info, I want you to know that we are using SQL Developer (and some manual/scripted methods) to do this migration, and BCP is the tool used in the capture phase (for capturing the data from SQL Server)..

It is important to know that we are using Oracle Linux 6 to execute the migration methods, I mean to execute the tools used in this migration, including the SQL Developer 19.2 itself.

As you may guess, we (actually SQL Developer) need a SQL Server-aware utility to bulk export the data in the first place.

This tool is BCP, and it can be easily installed on Oracle Linux using Microsoft's yum repo prepared for RHEL 6.

Note that we also need to install the unixODBC development libraries along with the mssql-tools package in order to make things work...

Here is a demo;

[root@ermanserver ~]# curl https://packages.microsoft.com/config/rhel/6/prod.repo > /etc/yum.repos.d/msprod.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
193 193 193 193 0 0 16 0 0:00:12 0:00:11 0:00:01 4825
[root@ermanserver ~]# yum install mssql-tools unixODBC-devel
Loaded plugins: refresh-packagekit, security, ulninfo
Setting up Install Process
packages-microsoft-com-prod | 2.9 kB 00:00
packages-microsoft-com-prod/primary_db | 68 kB 00:00
Resolving Dependencies
--> Running transaction check
---> Package mssql-tools.x86_64 0:17.4.1.1-1 will be installed
--> Processing Dependency: msodbcsql17 >= 17.3.0.0 for package: mssql-tools-17.4.1.1-1.x86_64
---> Package unixODBC-devel.x86_64 0:2.3.7-1.rh will be installed
--> Processing Dependency: unixODBC = 2.3.7 for package: unixODBC-devel-2.3.7-1.rh.x86_64
--> Running transaction check
---> Package msodbcsql17.x86_64 0:17.4.1.1-1 will be installed
---> Package unixODBC.x86_64 0:2.3.7-1.rh will be installed
--> Finished Dependency Resolution

Dependencies Resolved
==================================================================================================================================================================================
Package Arch Version Repository Size
==================================================================================================================================================================================
Installing:
mssql-tools x86_64 17.4.1.1-1 packages-microsoft-com-prod 286 k
unixODBC-devel x86_64 2.3.7-1.rh packages-microsoft-com-prod 42 k
Installing for dependencies:
msodbcsql17 x86_64 17.4.1.1-1 packages-microsoft-com-prod 795 k
unixODBC x86_64 2.3.7-1.rh packages-microsoft-com-prod 213 k

Transaction Summary
==================================================================================================================================================================================
Install 4 Package(s)

Total download size: 1.3 M
Installed size: 1.3 M
Is this ok [y/N]: y
Downloading Packages:
(1/4): msodbcsql17-17.4.1.1-1.x86_64.rpm | 795 kB 00:00
(2/4): mssql-tools-17.4.1.1-1.x86_64.rpm | 286 kB 00:00
(3/4): unixODBC-2.3.7-1.rh.x86_64.rpm | 213 kB 00:00
(4/4): unixODBC-devel-2.3.7-1.rh.x86_64.rpm | 42 kB 00:00
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 1.9 MB/s | 1.3 MB 00:00
warning: rpmts_HdrFromFdno: Header V4 RSA/SHA256 Signature, key ID be1229cf: NOKEY
Retrieving key from https://packages.microsoft.com/keys/microsoft.asc
Importing GPG key 0xBE1229CF:
Userid: "Microsoft (Release signing) <gpgsecurity@microsoft.com>"
From : https://packages.microsoft.com/keys/microsoft.asc
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : unixODBC-2.3.7-1.rh.x86_64 1/4
The license terms for this product can be downloaded from
https://aka.ms/odbc17eula and found in
/usr/share/doc/msodbcsql17/LICENSE.txt . By entering 'YES',
you indicate that you accept the license terms.

Do you accept the license terms? (Enter YES or NO)
YES

Installing : msodbcsql17-17.4.1.1-1.x86_64 2/4
The license terms for this product can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=746949 and found in
/usr/share/doc/mssql-tools/LICENSE.txt . By entering 'YES',
you indicate that you accept the license terms.

Do you accept the license terms? (Enter YES or NO)
YES
Installing : mssql-tools-17.4.1.1-1.x86_64 3/4
Installing : unixODBC-devel-2.3.7-1.rh.x86_64 4/4
Verifying : msodbcsql17-17.4.1.1-1.x86_64 1/4
Verifying : unixODBC-devel-2.3.7-1.rh.x86_64 2/4
Verifying : mssql-tools-17.4.1.1-1.x86_64 3/4
Verifying : unixODBC-2.3.7-1.rh.x86_64 4/4

Installed:
mssql-tools.x86_64 0:17.4.1.1-1 unixODBC-devel.x86_64 0:2.3.7-1.rh

Dependency Installed:
msodbcsql17.x86_64 0:17.4.1.1-1 unixODBC.x86_64 0:2.3.7-1.rh
                                                         

At this point our demo installation is complete! Do not forget to add the directory (/opt/mssql-tools/bin/), where the bcp utility exists, to your PATH!