Friday, August 19, 2016

RAC/Exadata -- Scan listener registration problem, EBS 11i , autoconfig TNS-12514

You may encounter a scan listener registration problem after enabling autoconfig on your EBS database tier. Although everything, including the remote_listener and local_listener parameters as well as the network-related Oracle configuration files such as tnsnames.ora, sqlnet.ora and listener.ora, may seem perfect, the dynamic registration from the EBS database to the scan listener (pointed to by the remote_listener parameter) can be problematic.
I encountered this problem in an Exadata X6 environment, on which we hosted an EBS 11i environment with a 12c Oracle database.

The problem is directly related to autoconfig, as autoconfig in some cases cannot handle RAC-specific configuration perfectly well.
In this case, autoconfig creates a tns entry named scan_name:port and actually creates it wrongly by putting the VIPs in as the host entries, like below:

(ADDRESS = (PROTOCOL = TCP)(HOST = exa1vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = exa2vip)(PORT = 1521))

The result is ORA-12514 / TNS-12514 when trying to connect using the scan name and port.

The solution is removing the scan_name:scan_port tns entry from tnsnames.ora; there should not be (and there is no need for) any tns entry for the scan listener.
After removing the tns entry, it is better to reset and refill the remote_listener parameter and make PMON register the database with all available listeners using alter system register; as shown in the following example:

sqlplus / as sysdba
alter system set remote_listener='' scope=both sid='*';
alter system set remote_listener='exa-rac:1521' scope=both sid='*';
alter system register;
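
Afterwards, a hedged way to verify the registration is to check the scan listeners from the grid infrastructure side (the listener name below follows the usual LISTENER_SCAN1 naming; adjust it to your environment):

srvctl status scan_listener
lsnrctl status LISTENER_SCAN1     # run with the grid environment set; look for the EBS database services in the "Services Summary" section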

Also, it is important to keep in mind that this solution must be reapplied every time autoconfig is run.
An alternative may be modifying the autoconfig templates or code to fix this issue, but that would not be supported and, frankly, I don't have time for it these days :)

Tuesday, August 16, 2016

Exadata - Cloning Oracle Home and installing bundles patch & psu in to the database afterwards, datapatch

If you clone a database Oracle Home (to create a new home -- just the home, no database) on Exadata, you need to know that your cloned home will carry the latest bundle patches & PSUs applied to the source Oracle Home, but any database that you create from this newly cloned home will not have the bundle patches' database-side changes inside.

So, the bundle patches that are applied to the source home cannot be carried over to the new home by the clone alone, as a bundle patch also writes to the database.

Well, what we should do after such an Oracle Home cloning operation is install the bundle into our newly created database using the datapatch utility that resides in the new home's OPatch directory.

This utility does the database-side work of the bundle and brings the databases created from our cloned Oracle Home in line with the RDBMS binaries in terms of bundle level.

Here, I am giving a demo of running datapatch so that my newly created database on the new cloned Oracle Home gets the bundles inside.
Note that datapatch requires the database to be in upgrade mode.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 2.5770E+10 bytes
Fixed Size                  6870952 bytes
Variable Size            5894412376 bytes
Database Buffers         1.9730E+10 bytes
Redo Buffers              138514432 bytes
SQL> alter system set cluster_database=false scope=spfile;

System altered.

SQL> shu immediaTE;
ORA-01507: database not mounted
ORACLE instance shut down.

[oracle@exadb01 OPatch]$ srvctl stop database -d TEST70
[oracle@exadb01 OPatch]$ sqlplus "/as sysdba"

SQL*Plus: Release Production on Mon Aug 15 16:02:31 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 2.5770E+10 bytes
Fixed Size                  6870952 bytes
Variable Size            5894412376 bytes
Database Buffers         1.9730E+10 bytes
Redo Buffers              138514432 bytes
Database mounted.
Database opened.
SQL> exit

[oracle@exadb01 OPatch]$ ./datapatch -verbose
SQL Patching tool version on Mon Aug 15 16:02:53 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_166016_2016_08_15_16_02_53/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 22674709 (Database PSU, Oracle JavaVM Component (Apr2016)):
  Installed in the binary registry only
Bundle series DBBP:
  ID 160419 in the binary registry and not installed in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    22674709 (Database PSU, Oracle JavaVM Component (Apr2016))
    22806133 (DATABASE BUNDLE PATCH: (22806133))

Installing patches...
Patch installation complete.  Total patches installed: 2

Validating logfiles...
Patch 22674709 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22674709/20057886/22674709_apply_TEST70_2016Aug15_16_04_01.log (no errors)
Patch 22806133 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22806133/19983161/22806133_apply_TEST70_2016Aug15_16_04_02.log (no errors)
SQL Patching tool complete on Mon Aug 15 16:04:30 2016

That's it; now I can query my database and see that the bundles are applied.

Here is a query for you:

For 12c (the query below runs against dba_registry_sqlpatch):

select
TO_CHAR(to_timestamp(substr(action_time,1,30),'DD-MON-YY HH.MI.SS.ff AM'),'DD.MM.YYYY HH24:MI:SS') AS ACTION_TIME,
--substr(patch_id,1,10) as patch_id,
--substr(patch_uid,1,10) as patch_uid,
substr(action,1,10) as action,
substr(status,1,8) as status,
substr(version,1,8) as version,
substr(BUNDLE_SERIES,1,6)||substr(BUNDLE_ID,1,6) as bundle,
substr(logfile, instr(logfile,'/',-1)+1) as logfile
from dba_registry_sqlpatch
order by action_time;

Decreasing Exadata core count, X6-2 1/8

Decreasing the core count of Exadata compute nodes can become your issue one day, as Exadata does not let you do it easily.
Decreasing the core count is not an operation that is done frequently, but it may still be necessary some day.
That is, normally, when you purchase Exadata and decide on a minimal capacity-on-demand configuration, these core count decreases are done in the deployment phase; however, I have seen customer environments where we needed to decrease the core counts to align with the license terms even after deployment.

So the utility we use is dbmcli, but we use it with an undocumented force parameter.
Note that we execute dbmcli and the commands on both nodes and reboot the nodes afterwards.
Without the force argument, dbmcli (the resource control) does not let us decrease the CPU cores to the minimum (8 cores).
Note that we also disable iaasMode, just in case;


[root@ermanexa01~]# dbmcli
DBMCLI: Release  - Production on Mon Aug 15 15:56:16 EEST 2016
Copyright (c) 2007, 2014, Oracle.  All rights reserved.
DBMCLI> ALTER DBSERVER pendingCoreCount = 8 force
DBM-10022: At least 26 physical cores need to be active in order to support IaaS.
DBServer exadb01 successfully altered
DBMCLI> ALTER DBSERVER pendingCoreCount = 8 force
DBServer exadb01 successfully altered. Please reboot the system to make the new pendingCoreCount effective.
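
To verify the change before and after the reboot, the relevant attributes can be listed (a hedged example; these are standard DBMCLI attributes):

DBMCLI> LIST DBSERVER attributes coreCount, pendingCoreCount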

Exadata X6, fixing the martian source errors, static routing, asymmetric network, route-eth3, reverse path filtering and more

This blog post will be about martian source errors, which may be recorded in the syslog (/var/log/messages) of Exadata compute nodes. I want to cover this subject, as I have faced this problem in one of my customer environments.

The platform was a newly deployed Exadata X6. I was there to migrate an EBS database to this new platform and saw the following errors in /var/log/messages while doing my very first health check.

martian source someip from someip, on dev some_interface
ll header: ff:ff:ff:ff:ff:ff:00:18:f8:0e:..........

When I interpreted this and spoke with the clients, I realized that some clients could not ping or connect via ssh to the Exadata compute nodes.

The martian errors were reported because of a network configuration problem. That is, for some reason, the Linux operating system running on the Exadata compute nodes did not like the network interface that some packets were arriving on.
To be more specific, the Linux OS did not like the packets coming in on bondeth0 (the client connection interface), as it thought those packets should have come in through the eth3 interface (the backup interface).

At this point; I want to remind you something important for the Exadata network:

"If the backup network is on a tagged VLAN network, the client network must also be on a separate tagged VLAN network."

So this phrase clearly states that if your backup network is on a VLAN, your client network must be on a different VLAN; the client network should not be on the same VLAN as the backup network. Likewise, there should be no clients on the backup network's VLAN (the term client here covers anything that connects to Exadata via the client interface or through the management interface). So the clients that are there for managing the Exadata machine via the management network should be on yet another VLAN, too.

Let me explain this with an example scenario:

the backup interface (eth3 on Exadata X6) is configured on one subnet (call it the backup subnet),
the management interface is configured on a second subnet,
the bonded client network (bondeth0) is configured on a third subnet,
and suppose there is a client whose IP address is on the same subnet as the backup interface.

So now imagine this client wants to ping the Exadata management interface.
When it pings, the OS sees that the packet is coming from the same subnet as eth3 (the backup interface); however, it also sees that the packet arrived on the management interface, which is on a different subnet.
So the OS decides that this packet should have come in through the backup interface (eth3), since that interface's subnet matches the sender's. At this point, the reverse path filter of the OS comes into play, filters that packet, and prevents any network connection between that client and the management interface. So ping doesn't work (in fact, no connection works).

This is a problem, right? Yes, it is. It is actually a deployment problem, but since the Exadata machine may already be deployed, let's see what we can do to fix or work around it.

Well, this behaviour can be changed by a workaround: disabling reverse path filtering in the compute nodes' operating system by modifying the /etc/sysctl.conf file and running sysctl -p to make it effective (it is on by default, so we can turn it off):

net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
and so on..
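
The change can also be applied to the running kernel right away and then persisted from the file (a hedged sketch; the interface names depend on your configuration):

sysctl -w net.ipv4.conf.all.rp_filter=0       # apply immediately to all interfaces
sysctl -w net.ipv4.conf.bondeth0.rp_filter=0  # example per-interface setting
sysctl -p                                     # re-read /etc/sysctl.conf so the persisted settings take effect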

After disabling the reverse path filter, pinging the management interface works.
This move may not be supported by Oracle, but it solves the problem.
So this is an asymmetric routing problem, and turning off the rp filter does the job.

After disabling the filter, the client can ping. However, it still cannot telnet to the database listener port using the client network.
That is, the problematic client is okay with the ping command after the rp filter is disabled, but now it wants to test the database connection.
It uses telnet to reach the listener port running on the client interface of one of the Exadata compute nodes (telnet to port 1523); however, this test fails.
The client cannot telnet to the client network using port 1523 (the listener is up and running, and no firewall is running).
So after disabling the rp filter, the problematic client can reach the management network via ping, but cannot reach the client network via telnet on port 1523.

Ping uses ICMP (layer 3). Telnet, on the other hand, sits on top of TCP. When the client pings, it first sends an ICMP Echo Request packet to the host, and the host then replies with an ICMP Echo Reply.
For telnet, however, the TCP packets are dropped.
That is because asymmetric routing is still happening out there. The network components visited all the way from client to server and from server to client, such as routers and firewalls (any stateful network components), block the SYN-ACK packets (the first server response in the three-way TCP connection establishment). Remember, we disabled reverse path filtering, but we still have a wrong route defined on our Exadata compute nodes for the problematic client's network.
That is, the client still connects towards the management network, but the routes for the server's replies still send them out through the backup network.

Info: asymmetric routing can be a problem for TCP, which has strict state tracking, but it often does not affect "stateless" protocols such as ICMP or UDP.
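
A quick way to confirm this kind of asymmetry is to capture traffic on both interfaces while the client retries the connection (a hedged sketch; the interface names and the client IP are placeholders for your environment):

tcpdump -nn -i bondeth0 host <client_ip> and tcp port 1523   # watch the incoming SYN on the client interface
tcpdump -nn -i eth3 host <client_ip>                         # watch whether the replies leave through the backup interface
# If the SYN arrives on one interface but the SYN-ACK leaves from the other, the routing is asymmetric.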

In Exadata, some static routing is in place. When we look at the /etc/sysconfig/network-scripts directory of a compute node, we see the rule and route files.

An example route file for eth3 (the backup interface), route-eth3:

<backup subnet> dev eth3 table 223
default via <backup gateway> dev eth3 table 223

An example rule file for eth3 (the backup interface), rule-eth3:

from <backup subnet> table 223
to <backup subnet> table 223

So these route and rule files simply say: use eth3 for traffic to and from the backup subnet (remember, our problematic client was on the same subnet as the backup interface).

With these files in place, the routing tables of the compute nodes contain static, policy-based routing for eth3, so no matter what we do, we cannot change the path that packets take when leaving the server towards that client's subnet.
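
Before changing anything, the policy routing that these files create can be inspected as follows (a hedged example; table 223 matches the files shown above):

ip rule list                  # shows the from/to rules pointing at table 223
ip route show table 223       # shows the routes that force this traffic out through eth3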

But there is one thing that we can do:

We can simply change the backup network interface's IP, moving it to another subnet where no Exadata clients exist, and then remove those static route and rule files from the /etc/sysconfig/network-scripts directory!

So, to disable the static route (for the backup interface) that comes with the Exadata deployment, we simply remove the route-eth3 and rule-eth3 files and restart the network services.

After this change, the compute node we are dealing with no longer contains any static route for the eth3 device, and we can telnet to our listener using the client network. (The next move may be adding new route and rule files according to the new IP/subnet of the backup interface.)
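
A hedged sketch of that removal step (the backup directory below is just an example location; keep copies of the files rather than deleting them outright):

mkdir -p /root/net-script-backup
cd /etc/sysconfig/network-scripts
mv route-eth3 rule-eth3 /root/net-script-backup/   # take the static route/rule files out of the way
service network restart                            # re-read the network configuration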

This is what I did in our recent project. I took exactly this approach to work around exactly this problem, and it worked.

I find this problem and its fix quite interesting, and that's why I shared this article with you. Thanks for reading.

Wednesday, August 10, 2016

Resizing LVM mounted on root filesystem (online) on Redhat 4, ext2online

I already explained how to resize an LVM volume online in one of my previous posts.
In that post, I gave all the info for resizing LVM-based mount points. The explanation was from scratch (scan the SCSI bus, fdisk, resize2fs and so on), and I have followed that approach many, many times.
However, today, while working on an EBS application tier environment running on a 32-bit RHEL 4 server, I noticed something I want to share with you.

I was asked to enlarge a mount point which was based on LVM and mounted on root (/).
Again, I followed the same approach documented in the blog post mentioned above, but this time, when I was enlarging the filesystem, the following error was produced:

[root@erpapp ~]# cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 5)

[root@erpapp ~]# resize2fs /dev/mapper/Cluster1-Root
resize2fs 1.35 (28-Feb-2004)
/dev/mapper/Cluster1-Root is mounted; can't resize a mounted filesystem!

So, it seemed resize2fs was not capable of resizing a mounted filesystem on Redhat 4. In other words, the version of resize2fs that comes with Redhat 4 cannot enlarge a mounted filesystem.

While I was trying to find a way to make it work, the utility named ext2online helped me.
It is actually the right utility for enlarging a mounted filesystem on Redhat 4 and on some other, similar Linux operating systems.

[root@erpapp ~]# ext2online
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
usage: ext2online [-C fd] [-dfqvV] device [new_size[bkmgt]]
        -C, --completion : print completion information
        -d, --debug      : turn debug info on
        -f, --force      : skip safety checks
        -q, --quiet      : be quiet (print only errors)
        -v, --verbose    : be verbose
        -V, --version    : print version and exit
        fd is the file descriptor to output completion data to,
        new_size is in ext2 blocks (1k, 2k, or 4k) (default),
        disk Blocks (512 byte), Kilo-, Mega-, Giga-, or Terabytes

[root@erpapp ~]# ext2online /dev/mapper/Cluster1-Root
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b

[root@erpapp ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
                      450G   12G  416G   3% /
/dev/sda1             112M   17M   90M  16% /boot
none                   16G     0   16G   0% /dev/shm
                      2.0G  288M  1.6G  16% /tmp

So, at the end of the day, there is an important lesson learned: when you are on Redhat 4, use ext2online to resize your mounted filesystems (not resize2fs).
Note: the rest of the approach was exactly the same as documented in the earlier post referenced above.
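
For reference, a hedged sketch of the full sequence on such a system (the device name and size are placeholders; the volume group and logical volume names match the example above):

pvcreate /dev/sdX1                        # new partition prepared with fdisk after the SCSI rescan
vgextend Cluster1 /dev/sdX1               # extend the volume group with the new physical volume
lvextend -L +50G /dev/mapper/Cluster1-Root
ext2online /dev/mapper/Cluster1-Root      # grow the mounted root filesystem online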

Monday, August 8, 2016

Delphix - Notes vol 1, my new toy for the upcoming months

Recently I have taken a look at the Delphix software and its data-as-a-service solution, as it was one of the prerequisites of our new Delphix partner enablement process. I must admit that I find it quite handy.

When we talk about Delphix, we actually talk about the Delphix engine. That is, there is an engine called the Delphix engine, which runs on a virtual machine based on a Solaris guest OS and sits between the hosts and the storage, enabling fast clones, rewinds, restores, masking, versioning and provisioning of the non-prod environments.

Delphix supports a bunch of well-known databases such as Oracle and Microsoft SQL Server, and as far as I can see it works smoothly. In addition, it supports virtualizing filesystems (such as Linux ext3 and ext4) and taking fast filesystem clones from the sources to the targets.

Using Delphix, we can create our clone environments in just 1-2 minutes. We can also rewind them, restore them, and take further clones from them easily and quickly. We can mask the data while creating those clones, and we can even offload the rewind and cloning operations to our development teams by giving them the sophisticated Delphix tool called JetStream.

In this post, I will give you my notes about Delphix, and this will be just a beginning, as we will dive into this software deeply in the coming months.

Before giving you the notes, I want to give a quick, from-the-surface overview of how Delphix works in Oracle environments. I focus on Oracle because it is our main focus, and I keep it short because, as I already mentioned, this is just a beginning;

Delphix is installed at the site where our clone environments are supposed to be (recommended).
It is installed on a virtual machine; this virtual machine is the platform where the Delphix engine actually runs.
The Delphix engine is responsible for the Delphix operations, and it can be invoked using the Delphix tools, CLIs and APIs.
The Delphix engine has full access to the underlying storage via the virtual server platform that it runs on.
For cloning an Oracle database, Delphix connects to the production machine and triggers an RMAN level 0 backup there. This level 0 is taken only once, for the initialization. After the initialization is completed, Delphix regularly takes level 1 RMAN backups from production and stores them in its storage in compressed format. These RMAN backups are called Delphix snapshots, by the way. (Note that there are two snapshot terms in Delphix: 1) the RMAN backups, and 2) the snapshots that Delphix creates internally while we are using JetStream.)
Delphix, if required, can also regularly take the redologs + archivelogs from production. This is called log sync, and it is done on a time basis so that we can have more point-in-time-accurate clones at the target site.
Once our source environment is initialized in Delphix, we can take easy clones by just clicking a few buttons in the Delphix engine Web UI. When we clone a database, which is called a "create VDB" operation, Delphix restores the database backup that it took in the first place and rolls it forward to the point in time at which we want our clone database to be. Delphix does these operations transparently; when we look at our target machine, we see our new clone database running on the target environment, with its database files stored on NFS mounts served from the Delphix engine.

So, of course there are lots of other things that Delphix can do as well, and things that Delphix enables. Imagine a site migration, for example, or a cloud migration. Imagine an EBS database being cloned and masked in just a few minutes, or an EBS apps tier being cloned in just a few minutes. Well, there are lots of things to say, lots of things to remember, lots of procedures to follow, and also lots of expertise to have.
I will give you much deeper details in my next blog posts, but for now, I give you the notes;

What can we use Delphix for:

Application development: create a QA environment for developers. Create restore points (bookmarks) and switch between those bookmarks with just a couple of clicks.

Refresh/Restore: replace from production / refresh from production.

Refresh from branches/bookmarks. Delphix can mask the data coming from PROD when it is restored to DEV.

Creating new versions of applications: by creating branches/bookmarks, we can create a new timeline for the new version without deleting the old version's data or application.

Data masking: real data is replaced with fake data which is still realistic. Delphix delivers secure data to development environments (secure virtual copies are created from production). Masking is done in approximately 4 hours. Delphix masks the data without breaking referential integrity -> for example: Mary is always masked as Clara... in every table, in every column...

Test: test, fail, then go back and retest...

Migration: place Delphix at the target site and create clone environments in the target without copying lots of data. (Done at the Clorox Company; they probably had lots of clones in the source environment.)

Delphix Tools:

JetStream is the Delphix UI console for end users to control their data copies. (We give this to DEV and TEST developers, so they use it on their own.) Developers do the rollback and rollforward for their own databases.
DEV and TEST developers create their copies without touching the infrastructure (no risk there).

Operations console (Ops console): the tool used by Delphix admins. The things that are done here are: provision, convert, replicate, mask, role-based user privileges, policy management and more.

Security console:
Delphix finds the sensitive data by profiling.
Secure sensitive data by masking.
Audit the sensitive data and ensure it is masked. Alert admins if there are vulnerabilities.

Mission Control (management console): used for reporting, monitoring health and enabling auditing for compliance (user-based audit, limited data views and so on).

Delphix captures the production data (initialization), then records the changes continuously, and lastly creates point-in-time virtual copies on demand (Delphix shares the data blocks for building the virtual copies, rather than copying them).
Three test environments can run in parallel on shared data blocks; no copies are needed for building a new/4th environment.
These test environments can be refreshed and rewound very quickly.


Facebook used it. Facebook's financials are on EBS. They reduced their quarterly financial close from 3 weeks to 2 days. They use Delphix to run parallel environments so that analysts can find transactional errors before doing the close on production.

Delphix answers "YES" to the following questions:

Can you create a fresh copy of a 5 TB database in 5 minutes?
Can you reset a database copy to just before a destructive change without losing all of my work?
Can you provide 3, 5, 10 copies of my database?
Can you provide securely masked copies in minutes?
Can I self-service my data copies and version control them like source code?

Delphix generally works like this -> take the data, compress it in the Delphix engine, virtualize it and connect the data to any kind of server. (The data can be an Oracle database, EBS application files, or anything gathered from another data source.)
From one physical copy of the data, Delphix creates several virtual TEST, DEV etc. environments --> a 90% decrease in redundant data.
Gains both in OPEX (operational expenditure, management man-hours) and CAPEX (capital expenditure, less storage).

In DEV environments, the errors usually cannot be found. Most of the time, the errors are found in QA and UAT environments (too late... thus increasing the release time). This is because DEV environments are not fresh, not full, not in sync. With Delphix, the errors are found in DEV, so there is less error-related work in UAT and QA, which is good.
So development becomes more efficient; thus, by the time of production, there are fewer errors to deal with.

How Delphix enables its capabilities?

Delphix (the Delphix engine) sits between the RDBMS and the storage. Delphix runs on any hypervisor and on any server.
Delphix doesn't touch production at all; it just takes data from there.
Delphix takes the copy and compresses it. It also records the changes from the source, so it tracks all the deltas coming from the source. Delphix masks the data if wanted (mask once, use many times; once masked, several environments can be created without the need to mask again). Thus, Delphix allows users to provision to any point in time.

Gains & Some References & Some Success stories

Deutsche Bank -> Financial stress testing. There were various reporting jobs which needed to be distributed. With Delphix, Deutsche Bank doubled application development output; virtual environments were created and reporting was offloaded from prod to these environments. Data management turned into self service.
Coherent -> Required a critical upgrade of EBS 11i to R12 plus a migration of this system to a new data center. They used Delphix to accelerate the upgrade and migration. Using Delphix, they gained the ability to archive and to recover the database efficiently (in case it was needed).
Molina Healthcare -> Personal healthcare information (sensitive) had to be secured in Dev, QA, Reporting, Staging environments and so on. With Delphix, Molina built lots of virtual test environments (over 3000). They avoided 3 petabytes of storage capacity. They also used Delphix's efficient data masking in a "mask once, provision many" fashion.
Clorox -> Increased migration speed. They parallelized testing and used self-service virtual environments for their migration-related work.
Cisco -> On-demand data access across all regions. They increased their testing cycle speed. They also used Delphix for migration.
US Marines -> They needed to upgrade Oracle EBS 11i to R12; in parallel, a migration to a new site was planned. They used Delphix for a 2x acceleration in the ERP upgrade. They also used it in the retirement of their legacy data centers.
Facebook -> Increased the speed of the financial close from 3 weeks to 2 days (as a result of increased error resolution and proactivity).
Walmart -> Accelerated release cycles and reduced online errors by cutting refresh cycles (from 6 weeks to 2 days). Eliminated 80% of the operational and capital cost of having multiple environments.
StubHub -> Accelerated application releases. Normally it was taking 3 weeks to create a testing environment. They went from monthly to daily mobile application releases. They reduced production errors by 20%.
Gap -> Used fresh copies for offloading ETL access and increased the speed of the daily and weekly reports. They offloaded all BI and reporting from prod to virtual copies.
Informatica -> With Delphix, completed its master data management project 6 months ahead of schedule.
Wind River -> They used several Delphix virtual environments for continuous, up-to-date access. Intel could take the data from these virtual environments (Intel purchased Wind River, so it needed to process its data). Overall, production access time decreased from 8 hours to 15 minutes.
Carnival -> Their main challenge was data protection. They used Delphix to create a low-cost backup and disaster recovery solution. They also gained 10x storage and 2x server utilization improvements.
P&G -> They used Delphix for application development, delivering quick copies, and data protection. They also improved their RTO and RPO and got superior SLAs.
Comcast -> They needed much faster point-in-time recovery and used Delphix for this, as Delphix provides quick access to the data at any point in time. They also increased the speed of their recovery. Delphix -> agility & availability for Comcast.

One of the business use cases of Delphix is moving data between sites securely, such as "delivering secure data from on-premise to the cloud".

Using Delphix, the data can be securely transferred between Delphix engines. So Delphix can do engine-to-engine replication. This replication is done with compression, so it is very quick.

Using Delphix bookmarks and branches, several application versions can coexist and be developed in parallel.
We can create a new version, work on it, then create a bookmark to save our work. After that, we can switch to the old version's branch, work on it, and save our work by creating another bookmark, and so on...
We can create branches from bookmarks as well. For example, for version 1.1 -> 1.1.1: we create a branch from version 1.1's bookmark.
So we restore and we go forward and backward in time, but our data at any date & time is not deleted unless we want it to be deleted.
Bookmarks can be deleted automatically according to the retention policies.
We do all the branch/bookmark work using the JetStream tool.

With Delphix, migrations become: virtualize and migrate. Delphix even provides Linux-to-Unix endian conversions for Oracle databases.

Accessing Delphix Information/the sources:

Support Portal: (download delphix software, knowledge base, public documentation, community forum and more)
Public Documentation: (most current documentation, all versions are available)
Shart Tank: Delphix user community, a forum

Virtual databases are created using the Delphix engine's graphical interface.
-Delphix has a web GUI, a CLI through SSH, and APIs for administration. There are two user roles for administration in Delphix: Delphix Admin (the real admin) and Sysadmin (a lightweight admin who can do the day-to-day tasks).

Delphix can virtualize Oracle, DB2, postgres, Mysql, Sybase ASE databases.


The Delphix engine is installed as a VM guest in a VMware vSphere environment (version 4.x or 5.x) or in the Amazon EC2 cloud. It requires a minimum of 8 vCPUs and 64 GB of memory.
Delphix supports Oracle Databases, 10.2, 11.1, 11.2 and 12.1
Delphix supports Oracle Databases on Solaris(Sparc and X86), RHEL, OEL, Suse, AIX and HPUX.
There is no supported storage list for Delphix , so it can run on any storage.
Delphix supports both standalone and RAC hosts.

In Delphix, the target environment means the virtual databases.
In Delphix, the Delphix-side representation of a source database is called a dSource (data source).

The source environment speaks with Delphix using SSH, JDBC and DSP. DSP (Delphix Session Protocol) runs on port 8415. Delphix requires port 8415 to be open between the source and Delphix systems.
Basically, what Delphix does is take the dSource, create a VDB from it, and then provision it to a target system as a read-write, fully functional Oracle database.
The spfile is also provisioned by Delphix while creating VDBs.

Flow of data for Oracle:

RMAN takes a level 0 backup, via the RMAN APIs, for the initialization (this is done once while adding the source system into Delphix).
Delphix takes that level 0 backup into the Delphix system for the initialization and then takes level 1 and, optionally, archive/redo backups to keep itself in sync with the source.
After the level 0 is taken, Delphix compresses it 2 to 4 times. The level 1 backups and archivelog backups used to sync Delphix are compressed in the same way.
The VDBs are presented to the target servers using NFS or dNFS (Direct NFS). --> Direct NFS offers direct I/O (no OS cache) and async I/O capabilities, which is its strength compared to plain NFS.
The changes made in the VDBs are stored, compressed, in the Delphix storage layer.
The VDBs are like snapshots of dSources, so no extra storage is needed for creating them, which is why they are very fast to create.
The changes in VDBs are stored in the Delphix storage layer (like snapshot deltas).
dSources have a timeflow. We can see the snapshots in that timeflow. The snapshots are actually the level 1 backups taken into Delphix. (Delphix initiates the level 1 backups by sending commands to the source environment.)
These snapshots, in other words the level 1 backups, are taken via the Delphix SnapSync service.
Alternatively, snapshots can be built by collecting the archivelogs and redologs from the source system. This is done via the Delphix LogSync service.

IO is done : from target server -> Delphix -> Storage.

Delphix can take data from ASM but cannot provision to ASM. (In this scope, Exadata can be a source or a target, but Delphix cannot provision to Exadata storage; it can only provision the data to Exadata as an NFS mount point.)

VDBs can be created from other VDBs as well.
VDBs can be rolled back or rewound to a point in time, as they also have their own timeflow. These rewinds take place purely in Delphix's storage layer (no Oracle technologies are used for that).
VDBs can be refreshed from production as well (from the dSource and from the snapshots taken for that dSource).

The IO performance can be monitored and analyzed using Delphix 's Network Performance and Performance Analytics pages.


The requirements for the Delphix engine are the same as for Oracle.
Delphix supports SQL Server 2005, 2008, 2008R2, 2012, 2014 and Windows Server 2003 SP2, 2008, 2008R2, 2012 and 2012R2 (64 bit).
Delphix supports AlwaysOn (source only) + Windows failover clusters (for both source and target).

For SQL Server, the database term in Delphix means data files + transaction logs + transaction log backups.

As for SQL Server, Delphix recreates the source system on an intermediate system called the validated sync target/environment.
The validated sync environment is a replica, which is recovered using the database file backups and log backups from the production server via the SMB protocol.
The validated sync environment can be a target server or a standalone SQL Server.
The reason for having this environment is to take the backups from the source SQL Server continuously.
When a dSource is created, Delphix kicks off a job to have the validated sync server take the backups from the source server.
The validated sync environment should be on the same SQL Server version as the source.
There is a database created in the validated sync environment that is continuously recovered from the source (this database is called the staging database).
There should be one validated sync environment for each Delphix engine (one-to-one); this is the recommendation.

So, Delphix itself does not consume any backup files.
Once the initialization is completed, Delphix remains in sync via differential backups and transaction log backups.

Delphix talks with the source instance using JDBC to see if new transaction logs or backups have been taken. Then Delphix instructs the validated sync database to apply them.
As for SQL Server (just as with Oracle), Delphix provides network-based storage services to the target servers (the iSCSI protocol).

Again, snap sync service takes backups from the source database.
Log sync service can also be used for retrieving the transaction logs.

The source SQL Server database must be in FULL recovery mode.
Without log sync, you cannot restore to any arbitrary point in time; with snap sync alone, you can only recover to the points in time associated with the backups (i.e., to the snapshots).
Note: backups taken from the source automatically create a snapshot in Delphix.

Key factors on Installation of Delphix:

Performance tests are done with IOCollector.sql.
We run it in PROD. Then we run it in the current CLONE environment and see the difference.
Then we take the CLONE outputs as the performance goal, because we are producing clones (so there is no need to compare Delphix with the production environment's performance).
We expect the CLONE performance to be around 50% of the PROD performance.
If lots of clone environments are needed, then multiple engines can be implemented. (This decision will be based on the outputs collected in the analysis/sizing phase.)
For analysis and sizing, 3 things matter:

1)CPU,RAM for Delphix Engine
2)Network between Source and Delphix  + Delphix and Target
3)IO for Storage

Installation and POV should be done by following the document: Delphix Pre Install User Guide

Jumbo frames can be enabled between the source, the Delphix engine and the target. Jumbo frames will increase the network performance (because we do database I/O with big block sizes).

However, if the network between the source, Delphix and the target is not isolated, the whole network will be affected by this.

Jumbo frames would then have to be enabled for all the hosts on the network, and that may harm them. So be careful and analyze the customer's environment.

Delphix should be installed closer to the target environment (not the source)

Delphix storage should be created on top of one-to-one relationships: 1 LUN should be connected for 1 datastore and for 1 VMDK.

RAID configuration is not dictated by Delphix; Delphix can only give recommendations. It does not matter to Delphix as long as the storage performs well.

VMDKs should be thick provisioned (zeroed), not thin provisioned, although thin provisioning can technically be used for these VMDKs as well.

Landshark environment is the demo environment for Delphix. It consists of 1 Delphix Engine, 1 target and 1 source systems.

The network for installing Landshark is very important. The documentation should be followed for configuring the network of the virtual machines, before starting the Landshark environment.

When configured, Landshark comes with a preconfigured inventory, so it automatically sees the Delphix engine in it, along with its source and its target.

With just 1-2 clicks, we can create our first VDBs in our target. We can create our vFiles as well.

vFiles are used when there is a need to create a target filesystem from a source filesystem.

Cloning EBS apps tier using Delphix is an example where we use vfiles.

Tuesday, August 2, 2016

Reverse Proxy-- Enabling SSL on JIRA + Ngnix + Apache

This post is not directly related to Oracle products.
It is actually not that much about JIRA either.
My main focus is to revisit one of the web tier components that we use in some of our advanced EBS configurations, for things like offloading SSL work to web servers or enhancing the security of our environment by putting an extra layer in front of an application server that is supposed to be open to the internet.

However, although we use this web tier component (called a reverse proxy) in our EBS configurations as well, this time I will make a change and write about this topic by going through an example of SSL enablement on JIRA, not Oracle EBS.

The forward proxy is the default proxy that we mean when we use the term "proxy" in our daily lives.
A reverse proxy is something else. Although a reverse proxy is not the exact opposite of a forward proxy, they are very different. That is, a reverse proxy is for servers, and a forward proxy is for clients. I will not go very deep into this, but let's say that with a forward proxy, the servers reached through the proxy do not know the clients, while with a reverse proxy, the clients do not know the backend servers.
In other words, with a forward proxy, the clients want to go to the servers; with a reverse proxy, the clients think they are speaking with the proxy server, but they are actually speaking with the servers in the backend.
That is, the reverse proxy talks with the client and passes his or her request to the servers that are logically behind it, and then returns the servers' responses to the client.
For example, if I configure my Apache as a reverse proxy and tell it to "rewrite every URL to Google's URL / reverse proxy everything to Google", and you then type my Apache server's IP address into your browser's address bar and hit enter, you will reach Google through my IP address. You will not see any change in your browser's address bar, so you will think you are talking with my Apache server.

This reverse proxy component is widely used in enterprises. It is used for security, for adding an intermediate layer, for SSL offloading and so on.

Below, you will find how we used a reverse proxy to open a JIRA system to the internet.
In this scenario, we use an nginx reverse proxy to handle the https traffic and pass it on as http. The clients reach the reverse proxy at an internet DNS name using https, and we pipe them transparently, over http, to our Tomcat web server (JIRA) running inside our network.
So the connection between the clients and our reverse proxy server is https; from the clients' perspective, the connection to JIRA is https as well.
We also use a separate Apache (running on the same server as nginx and listening on port 80 of our internet DNS name) to automatically redirect any http requests that come to our internet DNS name to https.

So , if I summarize;

Consider our internet DNS name (the public JIRA address).
Consider that we want the clients to access our JIRA system, which is running inside our network, through that name.
Consider that we have an nginx server running on our reverse proxy server and configured to listen on port 443.
Consider that we have configured our network so that all requests (coming from outside our network) to ports 443 and 80 are redirected to this reverse proxy server.
Consider that a separate Apache is running on this reverse proxy server alongside nginx and is listening on port 80.
Lastly, consider that we want our JIRA to be accessed over https, with http requests redirected to https automatically, but we also want the SSL work to be done by our nginx proxy server (not by JIRA/Tomcat itself).

I will give the instructions in a brief format, as follow;
  • First, we change the server.xml of JIRA (Tomcat) -- this is a JIRA-specific configuration file, and editing it is a JIRA requirement for enabling SSL termination in front of Tomcat. The relevant part (attributes not shown here stay as they were) looks like this:
<Service name="Catalina">
<Connector acceptCount="100"
minSpareThreads="25" port="8080" protocol="HTTP/1.1" redirectPort="443" useBodyEncodingForURI="true"/>
...
</Service>
  • Then, in the JIRA administration screens, we choose System > General Configuration to open the Administration page and set "Use gzip compression" to OFF. This is also a JIRA-specific recommendation, as GZIP compression is known to cause performance issues behind a reverse proxy, especially if the proxy is also compressing the traffic.
  • After that, for SSL offloading on nginx, we create a new conf file under the nginx conf.d directory as below and restart nginx. With this configuration, the nginx server terminates SSL and forwards requests to http://<internal_ip>:8080 (the server inside our network that hosts JIRA):
server {
listen 443 ssl;
ssl on;
ssl_certificate /etc/nginx/erman.pem;        # the ssl certificates should already be loaded into this pem file
ssl_certificate_key /etc/nginx/erman.pem;    # the key is kept in the same pem file in this setup
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
proxy_pass http://<internal_ip>:8080;        # forward the request to the JIRA server listening on port 8080 on its internal ip
access_log /var/log/nginx/jiratest.access.log;
}
}
  • Then, lastly, we add the following configuration to Apache, and that's it. Apache will redirect all requests arriving over http to https (just in case). So we edit the conf file under /etc/httpd/conf.d/jiratest.conf and restart Apache to make it forward all http requests coming to port 80 to https:
<VirtualHost *:80>
Redirect "/" ""
</VirtualHost>

Friday, July 29, 2016

Weblogic -- Single Weblogic Server & Multiple Domains

Here is the tip of the day and again I will keep it short, so directly jumping into the topic;
We don't need to install a separate WebLogic Server each time we want to create a new domain.
Suppose we need to install OID + OAM.
We first install a WebLogic Server and then install OID on it. OID will have its own WebLogic domain on top of this WebLogic Server, as that is part of the installation.
So, when we come to the OAM installation, we can use the same WebLogic Server to place our OAM software and create a specific domain for OAM as well.
At the end of the day, we install OID and OAM into the same WebLogic Server but into different WebLogic domains.

Well, multiple domains can be created on a single WebLogic Server. That's right. We tested it. We actually installed OID + OAM into the same WebLogic Server, and I can say that it is stable and not so hard to administer.
However, there are 2 things I will warn you about if you want to do a similar configuration;

1) Ensure there is no conflict between Admin Server or Managed Server ports when you have one WebLogic Server installation running on one machine with multiple domains.
Although WebLogic can listen on all available interfaces (that is, while the Admin Server of domain 1 listens on ip_address:7001, the Admin Server of domain 2 can listen on localhost:7001), having the same ports across domains will cause problems; the Node Manager will not work properly. No Admin Server or Managed Server should share the same port across domains. A quick check is shown after this list.
2) Change the cookie names of the domains to be different from each other.
For example, for the OID domain the cookie name can be WLSOIDCOOKIE, and for the OAM domain it can be WLSOAMCOOKIE. A cookie name change requires all the WLS server components in a domain to be restarted. If you don't make this change and restart, you will not be able to administer multiple domains in a single browser, as the cookies will conflict.
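
A quick, hedged way to check whether a port you plan to reuse is already taken on the machine (7001 here is just an example):

netstat -an | grep LISTEN | grep 7001    # if this returns a line, the port is already in use by another server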

So, that's all I need to mention about this topic. I hope it will help you.

Exadata- X6-2 1/8, its specs and its readiness for EBS migrations

We have done several EBS migrations, and most of the time we do these migrations using the datapump method. The reason we chose this method was always the need for an upgrade. That is, of course, a standby switchover/failover based migration is faster and smoother than a datapump-based one, but once you need to upgrade the database tier of the source instance, things become complicated and you most probably face more downtime in total.

As for the new generation Exadata X6-2, the migration of EBS is quite similar. We just create our database on the target Exadata, patch EBS and Exadata (if necessary), and then use datapump to export/import the EBS database tier to Exadata. After the migration of the database, we do a couple of configurations using post clone and sometimes autoconfig, and that's it.
This general procedure still applies to Exadata with its new generation of hardware and software.

In this post, I will give a quick overview of Exadata X6, the eighth rack version of it, and then give some instructions on getting the Exadata environment ready to be used as an EBS database tier.

I will give the explanation as a question-and-answer list and try to keep it simple.

What do we have in Exadata X6-2?

Exadata X6-2 Eight(1/8) Rack:
-2 database servers:
Each server has:
*2 x 22-core processors, each with 11 cores enabled (Xeon E5-2699 v4 processors)
*256 GB (8 x 32 GB) RAM by default, expandable to 512 GB (16 x 32 GB) or 768 GB (24 x 32 GB) with a memory expansion kit
*4 x 600 GB 10K RPM SAS disks, hot swappable, expandable to 8
*Disk controller HBA with 1 GB cache (no more batteries)
*2 x InfiniBand 4X QDR (40 Gb/s) ports (PCIe 3.0), both ports active
*4 x 1 GbE/10GbE Base-T Ethernet ports
*2 x 10 GbE Ethernet SFP+ ports (1 dual-port 10GbE PCIe 2.0 network card based on the Intel 82599 10 GbE controller technology)
*1 Ethernet port for Integrated Lights Out Manager (ILOM) for remote management
*Oracle Linux 6 Update 7 with Unbreakable Enterprise Kernel 2 or Oracle VM Server 3.2.9

-3 Exadata Storage Server X6-2 Servers:
Each server has:
*2 x 10-core processors(10-core Xeon E5-2630 v4 processors), each with 5 cores enabled
*128GB memory
*2 PCI flash cards (3.2 TB each) + 6 x 8 TB 7,200 RPM disks (HC), OR 8 PCI flash drives with 4 enabled (3.2 TB each) (EF)

-2 Sun Datacenter InfiniBand Switches
-2 redundant PDUs (single phase or three phase, high voltage or low voltage)
-1 48-port Cisco Catalyst 4948E-F, model number WS-C4948E-F-S Ethernet switch
Raw PCI flash capacity: 38.4 TB for EF, or 19.2 TB for HC --> EF (Extreme Flash) calculation: 3.2 TB x 4 drives x 3 cells = 38.4 TB; HC calculation: 3.2 TB x 2 cards x 3 cells = 19.2 TB.
Raw hard disk capacity: 144 TB for high capacity disks --> for HC Calculation : 6 x 8 x 3 = 144TB.

Why is it called Extreme Flash (I mean Exadata Extreme Flash) ?

If we look at the spare flash drives of Exadata, we see the "SSD" label on them.
This is because SSD drives are actually flash-based.

There is no real difference between SSD and flash here. So although it is called Extreme Flash, Exadata EF uses SSD drives, which use flash in the background.

How do we (actually, the Oracle field engineer) deploy Exadata for the first time, after all the cabling and other hardware-related work is done?

The deployment of Exadata X6 is done using the script called
With the latest version, following software components are deployed by the installer:

Database Software Version:
Grid Software Version:
Os version: OEL 6.7

Kernel: 2.6.38-400.277 UEK

If we want to use NFS exports from Exadata to store our EBS database exp dump files, what do we do?

NFS configuration from Exadata to the source system can be done by following:

Exadata: How to Create a NFS Mount on a Database Node (Doc ID 1900335.1)

Do we use Oracle Home that is deployed with Exadata to provision our EBS database?

Actually, no. We leave that home as is, as an example for us.
We create/clone a new home on Exadata and use it for our EBS database tier.

How do we clone our database home on Exadata? (As it is RAC and an appliance, this may be a question.)

A new Oracle Home is created using the cloning technique; for this, the following actions are taken on each node:
-> Create the new Oracle Home by copying the existing one; use cp -rp to preserve the permissions.
-> Register the new Oracle Home in the inventory by running clone.pl from it. For example (the paths, home name and node names below are placeholders -- adjust them to your environment):

    cd <new_oracle_home>/clone/bin
    perl clone.pl ORACLE_HOME=<new_oracle_home> ORACLE_HOME_NAME=OraDB12Home_EBS ORACLE_BASE=/u01/app/oracle '-O"CLUSTER_NODES={exa01,exa02}"' '-O"LOCAL_NODE=exa01"'

    '-O"LOCAL_NODE=exa01"'      --> we change this to the second node when executing it on the second node.

What is next?

The next thing is the migration :) by following the Oracle Support documents (the export/import process for EBS, the EBS - 12c RDBMS interoperability notes and the EBS on Exadata whitepapers) and, of course, reading my blog posts about real-life examples of EBS - Exadata migrations :)

So that's it for now. I will write a separate blog post about migrating EBS to Exadata X6 (although it will not be so different from migrating to earlier releases of Exadata ;)

Thursday, July 28, 2016

RMAN-- Commvault restore, controlfile autobackup and a little more especially about the controlfile

In this post, I will give a quick overview of Commvault/RMAN-based database cloning.
This post is based on a customer site story where we wanted to implement Commvault-based RMAN restores, and even though it was not our duty or responsibility at all, we still needed to get our hands dirty with this 3rd party tool (3rd party from the Oracle side).
Basically, Commvault uses RMAN to take online backups and also to restore these backups.
Restoring a backup to a different server can be considered cloning, and that is what we talk about in this post.
Of course, a Commvault admin is also required to administer the Commvault side and to initiate backups and restores from there, but what is needed on the database side is as follows;
  • init.ora must be in place.
  • Commvault discovers the database from oratab, so the database entry must be there in oratab.
  • The database should be started in nomount mode for the Commvault discovery.
  • If the controlfile is not supposed to be restored by Commvault, then the database should be mounted before the Commvault restore job.
  • To be able to restore, RMAN wants a controlfile autobackup. Why?
With a controlfile autobackup, RMAN can recover the database even if the current controlfile, recovery catalog, and server parameter file are inaccessible. Because the path used to store the autobackup follows a well-known format, RMAN can search for and restore the server parameter file from that autobackup. Also, if you have configured controlfile autobackups, the server parameter file is backed up with the controlfile whenever an autobackup is taken.
Strictly speaking, RMAN does not require the autobackup of the controlfile; we make it require one. We tell RMAN to use the autobackup for controlfile restores with the command "restore controlfile from autobackup", and when we give that command, RMAN tries to do the spfile and controlfile restore using the autobackups (see the sketch right after this list).
Commvault, for instance, does use the "restore controlfile from autobackup" command, and because of that, RMAN wants autobackups in place to be able to restore the controlfile.
Well, alternatively, we can use a traditional controlfile backup (non-autobackup) by specifying its backup piece (restore controlfile from '/BACKUP/09I594QI_1_1';) as well. Check: How To Restore Controlfile From A Backupset Without A Catalog Or Autobackup (Doc ID 403883.1)
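
As an illustration, a manual controlfile restore from autobackup typically looks like the following (a hedged sketch, not Commvault's exact script; the DBID and channel type are placeholders):

rman target /
startup nomount;
set dbid 1234567890;                        # placeholder: the DBID of the source database
run {
allocate channel ch1 type 'sbt_tape';       # or type disk, depending on where the autobackup lives
restore controlfile from autobackup;
}
alter database mount;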

While we are talking about controlfile backups, let me give some basic but important information about controlfiles.

Controlfiles contain information about the database, the redolog files and the other files that are required to open an Oracle database.
All structural database changes are recorded in the controlfile (adding a datafile, renaming a datafile, dropping datafiles or redolog files, and so on).
Without a controlfile, we can't open the database, and that's why, in the first place, we need a controlfile to provision our clone database environments.
The following information is stored in a Controlfile.

The database name 
The timestamp of database creation
The names and locations of associated datafiles and redo log files
Tablespace information
Datafile offline ranges
The log history
Archived log information
Backup set and backup piece information
Backup datafile and redo log information
Datafile copy information
The current log sequence number
Checkpoint information

Some of the information types in the controlfile, namely the ones kept in reusable (circular) sections such as the log history, archived log records and backup records, may be overwritten when there is no space left in their region.

Note that the parameter named CONTROL_FILE_RECORD_KEEP_TIME plays a big role in this overwriting, as it specifies the minimum number of days before a reusable record in the controlfile can be reused. If a new record needs to be added to a reusable section and the oldest record has not aged enough, the record section expands instead. If this parameter is set to 0, then reusable sections never expand, and records are reused as needed.

Note that the growth of the controlfile depends on the number of backups taken and stored in the controlfile, the number of archived redo logs that our database generates, and the number of days that this information is configured to be kept in the controlfile.

The other information types, which are not reusable, such as the database name or the datafile names, cannot be overwritten; when there is a shortage in their region, the controlfile simply expands to store the new records (for example, when adding a new datafile while there is no space left in the related section of the controlfile).
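
To see how the individual controlfile sections are used, a query like the following can help (a hedged example against the standard v$controlfile_record_section view):

select type, record_size, records_total, records_used
from v$controlfile_record_section
order by type;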

When there is no controlfile, and no autobackup or regular backup of the controlfile, a controlfile can be created using the CREATE CONTROLFILE command. However, this command will only create the controlfile; it will not load it with information such as archivelog records, deleted objects, log history and so on. Similarly, using the "alter database backup controlfile to trace" command, we can generate a controlfile creation statement for ourselves. This can be used in cloning: we restore our database without its controlfile, issue "backup controlfile to trace" in the source environment, and then execute the generated controlfile creation script in the target environment to get a brand new controlfile for it; but again, that same kind of historical information will not be captured with this controlfile creation method.
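
For reference, a hedged example of generating such a creation script (the output path is a placeholder):

SQL> alter database backup controlfile to trace as '/tmp/recreate_controlfile.sql' resetlogs;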

The I/O (read/write) that is done from or to the controlfile goes through the PGA.
Processes (foreground and background) write to the controlfile, and while they write, they wait on the control file parallel write wait event.
The reads are done using control file sequential read, and the wait that occurs there is the control file sequential read wait.
The controlfile is read in situations such as backing up the controlfile or sharing information from the controlfile between instances (RAC); in other words, the control file sequential read wait shows up for these administrative activities.
These I/Os are different from the DB block I/Os, and they are done with the physical block size of the underlying OS (for Linux it is 512 bytes).
Controlfiles are sometimes also written with single writes. In those cases, the control file single write wait event occurs; this wait is signaled while the controlfile's shared information is written to disk.

These operations are under the control of the controlfile enqueue (CF).
So when we see high CF enqueue waits in our database, there is a problem with the controlfile transactions. The I/O subsystem may be operating very slowly, or we may be producing activity that causes the controlfile to be continuously and aggressively updated (such as producing lots and lots of archivelogs).

Here are some recorded causes for that:

Very slow I/O subsystem where the Control files are stored. 
Frequent log switching - redo logs too small or insufficient logs 
Async I/O issue or multiple db_writers, you can't use both of them, back out one.
OS / Hardware issues

Why did I get into these details? I don't know :) Maybe I wanted to give some solutions for the cloning problems that may arise when controlfile backups are lacking.
Anyway, let's get back to our main topic:
  • db_create_file_dest should be set to the new path; otherwise you may hit "ORA-01119 error in creating file +data" (Doc ID 415579.1).
How Commvault does the restore, an example:

It connects to the target environment with rman target /, then restores the controlfile (from autobackup), restores the database, and lastly recovers the database and opens it with resetlogs.

run {
allocate channel ch1 type 'sbt_tape';
restore controlfile;
sql 'alter database mount';
}

run {
set newname for database to '/u02/ERMAN/Data/%U';
allocate channel ch1 type 'sbt_tape';
allocate channel ch2 type 'sbt_tape';
allocate channel ch3 type 'sbt_tape';
allocate channel ch4 type 'sbt_tape';
allocate channel ch5 type 'sbt_tape';
## send " -jm 14 -a 2:0 -cl 2320 -ins 234 -at 0 -j 309830 -bal 0 -rcp 0 -ms 5 -p 2 -PREVIEW";
restore database until time = "TO_DATE('06/22/2016 01:25:42','MM/DD/YYYY HH24:MI:SS')";
switch datafile all;
}
run {
allocate channel ch1 type 'sbt_tape';
allocate channel ch2 type 'sbt_tape';
recover database until time = "TO_DATE('06/22/2016 01:25:42','MM/DD/YYYY HH24:MI:SS')";
alter database open resetlogs;
}