Tuesday, October 14, 2014

Linux/LVM -- Online Migration of Oracle Database

Using LVM migration techniques, it is possible to migrate an Oracle database online.
Oracle will probably not support this, and I don't know who to blame if it fails.
But I tested it: an 11gR2 database was migrated while sessions were active, and no errors or warnings appeared in the alert log.

The approach is based on the lvconvert command.
What we actually do is create a copy/mirror in the same VG, synchronize the mirror, and detach the old disk once the copies are synchronized.
Thus the newly mirrored disk becomes our new active disk, and our files end up migrated to the new disk.
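The whole flow can be sketched up front as the following command sequence (a sketch, not a script to paste blindly; it uses the VG/LV/disk names from the example later in this post, and must run as root):

```shell
# Online LVM migration sketch. Assumes: VG vgu02, LV lvu02,
# old disk /dev/sdb1, new disk /dev/sdc1 (names from this post).
pvcreate /dev/sdc1                                    # label the new disk as an LVM physical volume
vgextend vgu02 /dev/sdc1                              # add the new PV to the volume group
lvconvert -m 1 /dev/vgu02/lvu02 /dev/sdc1 --corelog   # mirror the LV onto the new disk, log kept in memory
# ... wait until the mirror reports 100% sync (check with: lvs) ...
lvconvert -m 0 /dev/vgu02/lvu02 /dev/sdb1             # drop the mirror leg that lives on the old disk
vgreduce vgu02 /dev/sdb1                              # remove the old PV from the volume group
pvremove /dev/sdb1                                    # wipe the LVM label from the old disk
```

The filesystem on /u02 stays mounted and the database keeps running throughout; only the physical extents move underneath it.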

Mirrored Logical Volume

Note that LVM keeps track of the regions that are in sync using a mirror log. This log can be stored on a separate disk, or it can be kept in memory. In the example below, we will keep it in memory.

lvconvert - convert a logical volume from linear to mirror or snapshot

-m, --mirrors Mirrors
Specifies the degree of the mirror you wish to create. For example, "-m 1" would convert the original logical volume to a mirror volume with two sides; that is, a linear volume plus one copy.
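The log placement is controlled separately. As a sketch (same vgu02/lvu02 names as below; the two lines are alternatives, run one or the other):

```shell
# Mirror-log placement when adding a mirror leg (run as root):
lvconvert -m 1 --mirrorlog core /dev/vgu02/lvu02 /dev/sdc1   # log kept in memory; --corelog is the older spelling
lvconvert -m 1 --mirrorlog disk /dev/vgu02/lvu02 /dev/sdc1   # log persisted on disk (the default), survives reboots
```

With a core log, the mirror must be fully resynchronized after every reboot, which is acceptable here because the mirror only exists for the duration of the migration.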

Here is an example:

[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 18G 14G 3.0G 82% /
tmpfs 2.0G 560K 2.0G 1% /dev/shm
/dev/sda1 239M 54M 172M 24% /boot
/dev/mapper/vgu02-lvu02 12G 4.0G 7.2G 36% /u02  --> our disk that contains the datafiles

[root@localhost ~]# ps -ef |grep pmon
oracle 3979 1 0 10:11 ? 00:00:00 ora_pmon_ermantest -> Oracle instance is running

SQL> select name from v$datafile;

--> The database files are on the LVM

SQL> select name from v$controlfile;

--> The control file is on the LVM
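The two queries above can be combined into one check, so you can confirm that everything the database touches lives under the LV being migrated. A minimal sketch, assuming a local sysdba connection:

```shell
# List every database file; for this migration all paths should be under /u02.
sqlplus -s "/ as sysdba" <<'EOF'
set pagesize 100 linesize 200
select name   from v$datafile
union all
select name   from v$controlfile
union all
select member from v$logfile;
EOF
```

Any file that lives outside the migrated LV (e.g. on another mount point) would simply stay where it is, so it is worth checking before starting.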

fdisk -l
Disk /dev/sdc: 12.9 GB, 12884901888 bytes --> this is our newly added disk, we will migrate our db to this disk.
128 heads, 33 sectors/track, 5957 cylinders
Units = cylinders of 4224 * 512 = 2162688 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x23bdf117

[root@localhost ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 17.51g
lv_swap VolGroup -wi-ao---- 2.00g
lvu02 vgu02 -wi-ao---- 12.00g  --> our source LVM

[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup lvm2 a-- 19.51g 0
/dev/sdb1 vgu02 lvm2 a-- 12.00g 0  --> our source physical volume

Here we start:

[root@localhost ~]# pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully created

[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup lvm2 a-- 19.51g 0
/dev/sdb1 vgu02 lvm2 a-- 12.00g 0
/dev/sdc1 lvm2 a-- 12.00g 12.00g

[root@localhost ~]# vgextend vgu02 /dev/sdc1
Volume group "vgu02" successfully extended

[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 1 2 0 wz--n- 19.51g 0
vgu02 2 1 0 wz--n- 23.99g 12.00g

[root@localhost ~]# vgdisplay vgu02 -v
Using volume group(s) on command line
Finding volume group "vgu02"
--- Volume group ---
VG Name vgu02
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 23.99 GiB
PE Size 4.00 MiB
Total PE 6142
Alloc PE / Size 3071 / 12.00 GiB
Free PE / Size 3071 / 12.00 GiB
VG UUID 4ajgbO-4yf1-aH7V-DMOv-Q2wf-IlYC-91mzce

--- Logical volume ---
LV Path /dev/vgu02/lvu02
LV Name lvu02
VG Name vgu02
LV UUID qjopJ8-PCHS-ESyJ-C227-zpMH-zkIp-D0jbrV
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2014-10-13 11:09:00 +0300
LV Status available
# open 1
LV Size 12.00 GiB
Current LE 3071
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:3

--- Physical volumes ---

PV Name /dev/sdb1
PV Status allocatable
Total PE / Free PE 3071 / 0
PV Name /dev/sdc1
PV UUID GMSidx-Yorp-NZN0-AYUP-aFF8-m6cf-t1LGj2
PV Status allocatable
Total PE / Free PE 3071 / 3071

[root@localhost ~]# lvconvert -m 1 /dev/vgu02/lvu02 /dev/sdc1 --corelog 

vgu02/lvu02: Converted: 0.4%
vgu02/lvu02: Converted: 2.7%
vgu02/lvu02: Converted: 5.0%
vgu02/lvu02: Converted: 7.1%
vgu02/lvu02: Converted: 9.2%
vgu02/lvu02: Converted: 11.1%
vgu02/lvu02: Converted: 13.3%
vgu02/lvu02: Converted: 15.3%
vgu02/lvu02: Converted: 18.5%
vgu02/lvu02: Converted: 20.8%
vgu02/lvu02: Converted: 23.0%
vgu02/lvu02: Converted: 25.2%
vgu02/lvu02: Converted: 27.4%
vgu02/lvu02: Converted: 29.8%
vgu02/lvu02: Converted: 32.1%
vgu02/lvu02: Converted: 36.2%
vgu02/lvu02: Converted: 38.6%
vgu02/lvu02: Converted: 62.4%
vgu02/lvu02: Converted: 100.0%
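If lvconvert is run in the background or from another session, the sync progress can be polled with lvs instead. A minimal sketch (the sync_done helper is my own, not an LVM command):

```shell
# Poll the mirror's copy percentage until synchronization completes.
# sync_done succeeds once the integer part of the percentage reaches 100.
sync_done() {
  p=$(printf '%s' "$1" | tr -d ' ')   # lvs pads values with spaces, e.g. " 38.60"
  [ "${p%.*}" -ge 100 ] 2>/dev/null
}

while true; do
  pct=$(lvs --noheadings -o copy_percent vgu02/lvu02)
  echo "vgu02/lvu02 sync: $(printf '%s' "$pct" | tr -d ' ')%"
  sync_done "$pct" && break
  sleep 5
done
```

Do not run the "-m 0" conversion below until this reports 100%; dropping the old leg before the sync finishes would leave the new disk with an incomplete copy.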

[root@localhost ~]# lvconvert -m 0 /dev/vgu02/lvu02 /dev/sdb1

Logical volume lvu02 converted.

[root@localhost ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao---- 17.51g
lv_swap VolGroup -wi-ao---- 2.00g
lvu02 vgu02 -wi-ao---- 12.00g

[root@localhost ~]# vgreduce vgu02 /dev/sdb1
Removed "/dev/sdb1" from volume group "vgu02"

[root@localhost /]# pvremove /dev/sdb1
Labels on physical volume "/dev/sdb1" successfully wiped.

That is it. The database is still running.
You may now unplug the old disk from the server.
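Before unplugging anything, a quick sanity check is worthwhile. A sketch (the alert log path is an assumption based on a default 11gR2 diag layout for the ermantest instance shown earlier):

```shell
pvs                      # /dev/sdb1 should no longer appear in any VG
lvs -o +devices vgu02    # lvu02 should now report only /dev/sdc1 as its device
df -h /u02               # filesystem still mounted, same size/usage as before
# Path below is an assumed default diag location, adjust for your ORACLE_BASE:
tail -50 /u01/app/oracle/diag/rdbms/ermantest/ermantest/trace/alert_ermantest.log
```

The alert log should show no I/O errors for the duration of the conversion.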


  1. Hi Erman,
    Our scenario is as below, on a Linux server running an Oracle database, and I want to migrate to new storage from the existing one without any downtime ...

    Control files are located on /u01, /data & /data2
    Redo log files on /data
    Datafiles on /data2

    The mount points belong to different LVM groups ... During the operation, will any problem occur with this configuration? I couldn't find any supporting documentation. What do you think?

    #df -h
    2.0T 138G 1.7T 8% /data
    985G 676G 259G 73% /data2
    lvdata vgdata -wi-ao 1.95T
    lvdata2 vgdata2 -wi-ao 1000.00G

  2. The operation is in the LVM layer; it is below the mounts. In the example, my control files, redo log files and datafiles were located on the LVM, and I migrated the LVM to the new disk. No problems were encountered. I suggest you make a test run in your TEST environment before implementing this in PROD.
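    For a multi-VG layout like the one described, the same steps would simply be repeated per volume group. A sketch, assuming one new PV per VG (/dev/sdX1 and /dev/sdY1 are placeholders for the actual new disks):

    ```shell
    # Repeat the mirror step independently for each VG (placeholder new disks).
    for pair in "vgdata lvdata /dev/sdX1" "vgdata2 lvdata2 /dev/sdY1"; do
      set -- $pair                              # $1=VG  $2=LV  $3=new PV
      pvcreate "$3"
      vgextend "$1" "$3"
      lvconvert -m 1 "/dev/$1/$2" "$3" --corelog
      # wait for sync, then: lvconvert -m 0, vgreduce, pvremove the old PV
    done
    ```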

  3. Also ensure that your backups are good before performing lvconvert-style migrations.

  4. Why not to just use ASM ?
    1. Disable rebalancing.
    2. Add new drives/Luns
    3. Drop old ones.
    4. Enable rebalancing.
    5. Grab a coffee.
    6. Done.

    1. Yes Alex, I agree with you. We do ASM rebalancing, too.
      But this blog post explains the LVM method.
      We have clients who don't use, or don't want to use, ASM.