Oracle® VM Server User's Guide
Release 2.2

Part Number E15444-04

C Guest Configuration

This Appendix contains information about guest network driver installation, guest configuration file options and parameters, and examples of guest configuration files you can modify and use to create guests. A detailed explanation of the configuration parameters and common values is available in the /etc/xen/xmexample.hvm file in Oracle VM Server.

Create the guest configuration file as /OVS/running_pool/domain/vm.cfg and use the following command to create the guest:

# xm create /OVS/running_pool/domain/vm.cfg

This Appendix contains:

Section C.1, "e100 And e1000 Network Device Emulators"
Section C.2, "Quality of Service (QoS)"
Section C.3, "Virtual CPU Configuration File Parameters"
Section C.4, "Simple Configuration File Example"
Section C.5, "Complex Configuration File Example"
Section C.6, "Virtual Iron Migration (VHD) Configuration File Example"

C.1 e100 And e1000 Network Device Emulators

You can use the network device emulators for the Intel 8255x 10/100 Mbps Ethernet controller (the e100 controller) and the Intel 82540EM Gigabit Ethernet controller (the e1000 controller) for hardware virtualized guests. The e1000 controller provides higher network throughput than the default emulated Ethernet controller.

To use these network device emulators, install the network device driver in the guest, then modify the guest configuration file to specify the controller model type: either e100 or e1000. For example, to use the e1000 controller, set model=e1000 in the vif entry in the guest configuration file:

vif = [ 'type=ioemu, mac=00:16:3e:09:bb:c6, bridge=xenbr2, model=e1000']

Create the guest again using the xm create command. The guest now uses the faster e1000 controller.
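After re-creating the guest, you can confirm from inside the guest that the emulated controller is in use. A minimal check, assuming a Linux guest whose network interface is named eth0:

# lsmod | grep e1000
# ethtool -i eth0

The ethtool -i output reports the driver bound to the interface; it should list e1000 (or e100) rather than the driver for the default emulated device.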

C.2 Quality of Service (QoS)

Quality of Service (QoS) is the ability to provide varying priority to different applications, users, or data flows, or to guarantee a level of performance to a data flow. You can set virtual network interface and virtual disk QoS parameters for guests running on an Oracle VM Server. Guest virtual network interfaces share a physical network interface card (NIC) and you can control how much bandwidth is available to the virtual network interface. You can also control the I/O priority of a guest's virtual disk(s).

This section contains:

Section C.2.1, "Setting Disk Priority"
Section C.2.2, "Setting Inbound Network Traffic Priority"
Section C.2.3, "Setting Outbound Network Traffic Priority"

You can set QoS parameters in Oracle VM Server, and in Oracle VM Manager. See the Oracle VM Manager User's Guide for information on setting QoS parameters in Oracle VM Manager.

C.2.1 Setting Disk Priority

You can set the priority of a guest's virtual disk(s). Eight priority levels are available to set the time slice a process receives in each scheduling window. The priority argument is from 0 to 7; the lower the number, the higher the priority. Virtual disks running at the same priority are served in a round-robin fashion.

The virtual disk priority is controlled with the disk_other_config parameter in the guest's configuration file (vm.cfg). The disk_other_config parameter is entered as a list; each list item represents a QoS setting. The syntax to use for the disk_other_config parameter is:

disk_other_config = [[ 'front_end', 'qos_algorithm_type', 'qos_algorithm_params']]

front_end is the front end name of the virtual disk device to which you want to apply QoS. For example, hda, hdb, xvda, and so on.

qos_algorithm_type is the QoS algorithm. Only ionice is currently supported.

qos_algorithm_params are the parameters for the qos_algorithm_type. For the ionice algorithm, this may be the schedule class and the priority, for example sched=best-effort,prio=5.

For example:

disk_other_config = [['hda', 'ionice', 'sched=best-effort,prio=5'], ['hdb', 'ionice', 'sched=best-effort,prio=6']]
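The sched and prio values have the same meaning as they do for the Linux ionice utility (scheduling class, and priority 0 to 7). For illustration only, the equivalent setting for an ordinary host process with a hypothetical PID of 1234 would be:

# ionice -c 2 -n 5 -p 1234

where class 2 is the best-effort class and 5 is the priority within that class.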

If you make a change to a running guest's configuration file, you must shut down the guest, then start it again with the xm create vm.cfg command for the change to take effect. The xm reboot command does not restart the guest with the new configuration.
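For example, assuming the guest's files are in /OVS/running_pool/domain:

# xm shutdown domain
# xm create /OVS/running_pool/domain/vm.cfg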

C.2.2 Setting Inbound Network Traffic Priority

You can set the priority of inbound network traffic for a guest. The inbound network traffic priority is controlled with the vif_other_config parameter in the guest's configuration file (vm.cfg). The vif_other_config parameter is entered as a list; each list item represents a QoS setting. The syntax to use for the vif_other_config parameter is:

vif_other_config = [[ 'mac', 'qos_algorithm_type', 'qos_algorithm_params']]

mac is the MAC address of the virtual network device to which you want to apply QoS.

qos_algorithm_type is the QoS algorithm. Only tbf is currently supported.

qos_algorithm_params are the parameters for the qos_algorithm_type. For the tbf algorithm, this may be the rate limit and latency, for example, rate=8mbit,latency=50ms.

For example:

vif_other_config = [['00:16:3e:31:d5:4b', 'tbf', 'rate=8mbit,latency=50ms'], ['00:16:3e:52:c4:03', 'tbf', 'rate=10mbit']]
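The tbf algorithm corresponds to the Linux token bucket filter queueing discipline. For illustration only, a roughly equivalent manual shaping rule applied in dom0 to a hypothetical guest network interface named vif1.0 would be:

# tc qdisc add dev vif1.0 root tbf rate 8mbit latency 50ms burst 10kb

The burst size here is arbitrary; tbf requires one on the command line, whereas the vif_other_config syntax takes only the rate and latency.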

If you make a change to a running guest's configuration file, you must shut down the guest, then start it again with the xm create vm.cfg command for the change to take effect. The xm reboot command does not restart the guest with the new configuration.

C.2.3 Setting Outbound Network Traffic Priority

You can set the priority of outbound network traffic for a guest. The outbound network traffic priority is controlled with the rate parameter of the vif option in the guest's configuration file (vm.cfg). The rate parameter supports an optional time window parameter for specifying the granularity of credit replenishment. The default window is 50ms. For example, you could set rate as rate=10Mb/s, rate=250KB/s, or rate=1MB/s@20ms. An example vif option to set the network traffic priority for a guest might be:

vif = ['mac=00:16:3e:31:d5:4b,bridge=xenbr0,rate=10Mb/s@50ms']

If you make a change to a running guest's configuration file, you must shut down the guest, then start it again with the xm create vm.cfg command for the change to take effect. The xm reboot command does not restart the guest with the new configuration.

C.3 Virtual CPU Configuration File Parameters

You can set the number of virtual CPUs to be used by a guest domain using the vcpus parameter in the configuration file. You can also set the vcpu_avail parameter to activate specific virtual CPUs when the domain starts up with the syntax:

vcpu_avail = {bitmap}

The bitmap tells the domain whether it may use each of its virtual CPUs. vcpu0 is always activated when the domain starts up, so the bit for vcpu0 (the least significant bit of vcpu_avail) is ignored. By default, all virtual CPUs are activated.

For example, to activate only vcpu0:

vcpu_avail = 1

And the following activates vcpu0, vcpu3 and vcpu4:

vcpu_avail = 25 # 11001

This is equivalent to the following, as vcpu0 is always active:

vcpu_avail = 24 # 11000
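A minimal sketch of the corresponding entries in vm.cfg, assuming a guest defined with five virtual CPUs:

vcpus      = 5    # five virtual CPUs defined for the domain
vcpu_avail = 25   # binary 11001: only vcpu0, vcpu3 and vcpu4 are active at start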

C.4 Simple Configuration File Example

A simple example of a configuration file to create a guest follows:

disk = [ 'file:/mnt/el4u5_64_hvm//system.img,hda,w' ]
memory=4096
vcpus=2
name="el4u5_64_hvm"
vif = [ ' ' ]   # By default, no network interfaces are configured. A default HVM install would have: vif = [ 'type=ioemu,bridge=xenbr0' ]
builder = "hvm"
device_model = "/usr/lib/xen/bin/qemu-dm"
 
vnc=1
vncunused=1
 
apic=1
acpi=1
pae=1
serial = "pty" # enable serial console
 
on_reboot   = 'restart'
on_crash    = 'restart'
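Assuming this file is saved as /OVS/running_pool/el4u5_64_hvm/vm.cfg, you could start the guest and attach to the serial console enabled by the serial = "pty" line with:

# xm create /OVS/running_pool/el4u5_64_hvm/vm.cfg
# xm console el4u5_64_hvm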

C.5 Complex Configuration File Example

A more complex example of a configuration file to create a guest follows:

# An example of setting up the install-time loopback mount,
#  using an NFS-shared directory with ISO images
#  to create a "pseudo cdrom device" on /dev/loop*:
#
#    mount ca-fileserver2:/vol/export /srv/
#    mount -o loop,ro /srv/osinstall/RedHat/FC6/F-6-x86_64-DVD.iso /mnt
#
# You can tell what loop device to use by looking at /etc/mtab after the mount
# The first set of disk parameters commented out below are
#   "install time disk parameters" with the "pseudo" cdrom.
# Your new domU HVM install will see "/dev/sda" just like a usual hardware
# machine.
#disk = [  'phy:/dev/vgxen/lvol0,hda,w', 'phy:/dev/loop0,hdc:cdrom,r' ]
# Example of after-setup "HVM up and running" disk parameters below; 
#  the last three devices were added later 
#  and last two are shared, writeable.
# Note: for HVM you must use the whole device.
# Do not try to get the domU to see a partition on a device...
#   For example, in an HVM guest this will not work:  'phy:/dev/vgxen/tls4-swap,hdb1,w'
# It is best to fdisk any extra or added devices from within one of your domUs.
disk = [ 'phy:/dev/vgxen/lvol0,hda,w',
         'phy:/dev/vgxen/tls4-swap,hdb,w',
         'phy:/dev/vgxen/sharedvol1,hdc,w!',
         'phy:/dev/vgxen/sharedvol2,hdd,w!' ]
# Result of this config file from within the new domU:
#  [root@ca-DomU ~]# sfdisk -s
#  /dev/sda:  10485760
#  /dev/sdb:   8388608
#  /dev/sdc: 104857600
#  /dev/sdd: 104857600
# For vnc setup try:
vfb = [ "type=vnc,vncunused=1,vnclisten=0.0.0.0" ]
# Example with a passwd of "foo".
#vfb = [ "type=vnc,vncunused=1,vnclisten=0.0.0.0,vncpasswd=foo" ]
# Remember, this file is per individual domU.
# During install you will need to change, in /etc/xen/xend-config.sxp,
#   (vnc-listen '127.0.0.1')
# to: (vnc-listen '0.0.0.0')
# 
# then from any machine do:
#  "vncviewer <your dom0 ip or hostname>"
# to see vnc console

C.6 Virtual Iron Migration (VHD) Configuration File Example

If you migrate a Virtual Iron virtual machine (or other VHD file), you must place all of the virtual machine's files in the /OVS/running_pool/domain directory, together with a guest configuration file (vm.cfg), which you must create manually. Then import the virtual machine using Oracle VM Manager.

Note:

Before you migrate a Virtual Iron virtual machine, export any disks from Virtual Iron's VI-Center Storage view to VHD files, and uninstall any Virtual Iron VSTools.

A sample configuration file to migrate a Virtual Iron virtual machine guest follows:

acpi = 1
apic = 1
builder = 'hvm'
device_model = '/usr/lib/xen/bin/qemu-dm'
disk = ['file:/OVS/running_pool/domain/vm_name.vhd,sda,w',]
kernel = '/usr/lib/xen/boot/hvmloader'
keymap = 'en-us'
memory = '2048'
on_crash = 'restart'
on_reboot = 'restart'
pae = 1
timer_mode = 1
vcpus = 2
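With the hypothetical names used in this example, the resulting layout on the Oracle VM Server would be:

/OVS/running_pool/domain/
    vm.cfg
    vm_name.vhd

Once the files are in place, import the virtual machine using Oracle VM Manager, or start it directly for testing with the xm create /OVS/running_pool/domain/vm.cfg command described at the beginning of this Appendix.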