Oracle® VM Server User's Guide
Release 2.1

Part Number E10898-04

6 Domain Live Migration

This chapter discusses live migration of domains to other, identical computers. You must use identical computers to perform live migrations; that is, the computer make and model number must be identical.
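
A quick way to compare the make and model of two computers, assuming the dmidecode utility is installed, is to check the DMI system strings on each of them:

    # dmidecode -s system-manufacturer
    # dmidecode -s system-product-name

The output of both commands must match on the source and destination computers.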

To perform live migration of domains, you must create a shared virtual disk before you perform the migration. This chapter contains:

  * Section 6.1, "Creating a Shared Virtual Disk for Live Migration"

  * Section 6.2, "Migrating a Domain"

6.1 Creating a Shared Virtual Disk for Live Migration

If you want to perform live migration of domains to other, identical computers, you must create a shared virtual disk to be used during the live migration. You can set up a shared virtual disk in any of the following configurations:

  * OCFS2 (Oracle Cluster File System 2) on iSCSI

  * OCFS2 on SAN

  * NFS

You must make sure all Oracle VM Servers in the server pool have access to the shared virtual disk and mount it at the same location, /OVS/remote.

This section discusses creating a shared virtual disk to use for live migration in each of these configurations.

6.1.1 Creating a Shared Virtual Disk Using OCFS2 on iSCSI

To create a shared virtual disk using OCFS2 on iSCSI:

  1. Install the iscsi-initiator-utils RPM on the Oracle VM Server. The iscsi-initiator-utils RPM is available on the Oracle VM Server CDROM or ISO file.

    # rpm -Uvh iscsi-initiator-utils-version.el5.i386.rpm
    
  2. Start the iSCSI service:

    # service iscsi start
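
    If you also want the iSCSI service to start automatically at boot, you can typically enable it with chkconfig:

    # chkconfig iscsi on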
    
  3. Run discovery on the iSCSI target. In this example, the target is 10.1.1.1:

    # iscsiadm -m discovery -t sendtargets -p 10.1.1.1
    

    This command returns output similar to:

    10.1.1.1:3260,5 iqn.1992-04.com.emc:cx.apm00070202838.a2
    10.1.1.1:3260,6 iqn.1992-04.com.emc:cx.apm00070202838.a3
    10.2.1.250:3260,4 iqn.1992-04.com.emc:cx.apm00070202838.b1
    10.1.0.249:3260,1 iqn.1992-04.com.emc:cx.apm00070202838.a0
    10.1.1.249:3260,2 iqn.1992-04.com.emc:cx.apm00070202838.a1
    10.2.0.250:3260,3 iqn.1992-04.com.emc:cx.apm00070202838.b0
    
  4. Delete entries that you do not want to use, for example:

    # iscsiadm -m node -p 10.2.0.250:3260,3 -T iqn.1992-04.com.emc:cx.apm00070202838.b0 -o delete
    # iscsiadm -m node -p 10.1.0.249:3260,1 -T iqn.1992-04.com.emc:cx.apm00070202838.a0 -o delete
    # iscsiadm -m node -p 10.2.1.250:3260,4 -T iqn.1992-04.com.emc:cx.apm00070202838.b1 -o delete
    # iscsiadm -m node -p 10.1.1.249:3260,2 -T iqn.1992-04.com.emc:cx.apm00070202838.a1 -o delete
    # iscsiadm -m node -p 10.1.1.1:3260,5 -T iqn.1992-04.com.emc:cx.apm00070202838.a2 -o delete
    
  5. Verify that only the iSCSI targets you want to use for the server pool are visible:

    # iscsiadm -m node
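
    Continuing the example above, only the target that was not deleted remains, and the command returns output similar to:

    10.1.1.1:3260,6 iqn.1992-04.com.emc:cx.apm00070202838.a3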
    
  6. Review the partitions by checking /proc/partitions:

    # cat /proc/partitions
    major minor  #blocks  name
       8     0   71687372 sda
       8     1     104391 sda1
       8     2   71577607 sda2
     253     0   70516736 dm-0
     253     1    1048576 dm-1
    
  7. Restart the iSCSI service:

    # service iscsi restart
    
  8. Review the partitions again by checking /proc/partitions. A new device, sdb in this example, is listed:

    # cat /proc/partitions
    major minor  #blocks  name
       8     0   71687372 sda
       8     1     104391 sda1
       8     2   71577607 sda2
     253     0   70516736 dm-0
     253     1    1048576 dm-1
       8    16    1048576 sdb
    
  9. The new device can now be used. Review it with fdisk:

    # fdisk -l /dev/sdb
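
    If fdisk shows that the device does not yet contain a partition, create a single partition on it to hold the shared virtual disk; this partition is formatted in a later step. In the interactive fdisk session, enter n (new partition), p (primary), 1 (partition number), accept the default first and last cylinders, and then enter w to write the partition table and exit:

    # fdisk /dev/sdb

    Afterwards, /dev/sdb1 appears in the output of fdisk -l /dev/sdb and in /proc/partitions.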
    
  10. Create a new directory named /etc/ocfs2:

    # mkdir /etc/ocfs2
    
  11. Create the OCFS2 configuration file as /etc/ocfs2/cluster.conf. The following is a sample cluster.conf file:

    node:
            ip_port = 7777
            ip_address = 10.1.1.1
            number = 0
            name = example1.com
            cluster = ocfs2
    node:
            ip_port = 7777
            ip_address = 10.1.1.2
            number = 1
            name = example2.com
            cluster = ocfs2
    cluster:
            node_count = 2
            name = ocfs2
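
    The name value for each node entry should match the host name of the corresponding Oracle VM Server; you can check it on each server with:

    # hostname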
    
  12. Review the status of the OCFS2 cluster service:

    # service o2cb status
    
  13. Load the OCFS2 module:

    # service o2cb load
    
  14. Set the OCFS2 service to be online:

    # service o2cb online
    
  15. Configure the OCFS2 service to start automatically when the computer boots:

    # service o2cb configure
    
  16. Start up the OCFS2 service.

    # service o2cb start
    
  17. Format the shared virtual disk from any of the Oracle VM Servers in the cluster:

    # mkfs.ocfs2 /dev/sdb1
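
    Optionally, you can also label the volume and size the number of node slots to the number of Oracle VM Servers in the cluster. For example, with two servers and an arbitrary label of ovs_remote:

    # mkfs.ocfs2 -L ovs_remote -N 2 /dev/sdb1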
    
  18. Mount the shared virtual disk on /OVS/remote from all the Oracle VM Servers in the cluster:

    # mount /dev/sdb1 /OVS/remote/ -t ocfs2
    
  19. Add an entry to the /etc/fstab file so that the shared virtual disk is mounted at boot:

    /dev/sdb1               /OVS/remote              ocfs2   defaults        1 0
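
    Because this device is reached over iSCSI, the mount depends on the network and the iSCSI service being available at boot. On many systems you add the _netdev mount option so the mount is deferred until networking is up, for example:

    /dev/sdb1               /OVS/remote              ocfs2   _netdev,defaults 0 0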
    

6.1.2 Creating a Shared Virtual Disk Using OCFS2 on SAN

To create a shared virtual disk using OCFS2 on SAN:

  1. Review the partitions by checking /proc/partitions:

    # cat /proc/partitions
    major minor  #blocks  name
       8     0   71687372 sda
       8     1     104391 sda1
       8     2   71577607 sda2
     253     0   70516736 dm-0
     253     1    1048576 dm-1
       8    16    1048576 sdb
    

    Determine the shared disk volume you want to use.

  2. Create a new directory named /etc/ocfs2:

    # mkdir /etc/ocfs2
    
  3. Create the OCFS2 configuration file as /etc/ocfs2/cluster.conf. The following is a sample cluster.conf file:

    node:
            ip_port = 7777
            ip_address = 10.1.1.1
            number = 0
            name = example1.com
            cluster = ocfs2
    node:
            ip_port = 7777
            ip_address = 10.1.1.2
            number = 1
            name = example2.com
            cluster = ocfs2
    cluster:
            node_count = 2
            name = ocfs2
    
  4. Review the status of the OCFS2 cluster service:

    # service o2cb status
    
  5. Load the OCFS2 module:

    # service o2cb load
    
  6. Set the OCFS2 service to be online:

    # service o2cb online
    
  7. Configure the OCFS2 service to start automatically when the computer boots:

    # service o2cb configure
    
  8. Start up the OCFS2 service.

    # service o2cb start
    
  9. Format the shared virtual disk from any of the Oracle VM Servers in the cluster:

    # mkfs.ocfs2 /dev/sdb
    
  10. Mount the shared virtual disk on /OVS/remote from all the Oracle VM Servers in the cluster:

    # mount /dev/sdb /OVS/remote/ -t ocfs2
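
    To confirm that each Oracle VM Server has mounted the shared virtual disk, check the mount point on every server or, if the mounted.ocfs2 utility from ocfs2-tools is available, list the nodes that have mounted the device from any one server:

    # df -h /OVS/remote
    # mounted.ocfs2 -f /dev/sdb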
    
  11. Add an entry to the /etc/fstab file so that the shared virtual disk is mounted at boot:

    /dev/sdb               /OVS/remote              ocfs2   defaults        1 0
    

6.1.3 Adding a Shared Virtual Disk Using NFS

To add a shared virtual disk using NFS:

  1. Find an NFS mount point to use. This example uses the mount point:

    myfileserver:/vol/vol1/data/ovs
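
    If you are not sure which directories the file server exports, you can usually list them with showmount, provided the nfs-utils package is installed:

    # showmount -e myfileserver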
    
  2. Add the following entry to the /etc/fstab file:

    myfileserver:/vol/vol1/data/ovs /OVS/remote  nfs  defaults 0 0
    
  3. Mount the shared virtual disk:

    # mount /OVS/remote
    

6.2 Migrating a Domain

To migrate a domain from one computer to another identical computer:

  1. Create a shared virtual disk to use during the domain migration. See Section 6.1, "Creating a Shared Virtual Disk for Live Migration". Each computer involved with the domain migration must have access to the shared virtual disk in the same way, either as an NFS or a SAN virtual disk.

  2. On the Oracle VM Server that contains the existing domain, migrate the domain to the remote computer with the following command:

    # xm migrate mydomain myremotecomputer
    

    The domain is migrated to the remote computer. Without any options, the domain is paused while it is moved.

    To perform live migration of the domain, that is, to migrate the domain while it continues to run, use the command:

    # xm migrate -l mydomain myremotecomputer
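
    Live migration requires that the destination Oracle VM Server accepts relocation connections from the source. If the xm migrate command cannot connect to the remote computer, check that entries similar to the following are enabled in /etc/xen/xend-config.sxp on the destination; these are typical Xen defaults rather than Oracle-specific recommendations, and the empty hosts-allow value accepts connections from any host, so restrict it as appropriate for your network:

    (xend-relocation-server yes)
    (xend-relocation-port 8002)
    (xend-relocation-hosts-allow '')

    After changing the file, restart xend:

    # service xend restart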