Monday, June 29, 2009

PV enabling an HVM from VMware on XenServer (SLES and Debian)

As a condition for paravirtualization to work, a kernel that supports the Xen hypervisor needs to be installed and booted in the virtual machine. Simply installing the XenServer tools within the VM does not enable paravirtualization of the VM.

In this example, the virtual machine was exported as an OVF package from VMware and imported into XenServer using XenConvert 2.0.1.

Installing the XenServer Supported Kernel:

1. After import, boot the virtual machine and open the console.

2. (optional) Update the packages within the VM to the latest revisions

a. If the kernel-xen package is installed from an online repository, best practice is to fully update the distribution first to avoid mismatches between package build revisions.

3. Install the Linux Xen kernel.

a. yast -i kernel-xenpae

i. The Xen-aware kernel is installed and entries are created in grub.

ii. x64 can use kernel-xen; x86 requires kernel-xenpae.

iii. This is not the same as installing "xen", which installs a dom0 kernel for hosting VMs, not a domU kernel for running as a VM.

iv. yast is the package installer for SLES; Debian uses apt (apt-get).

4. Modify the grub boot loader menu (the default entries are not pygrub compatible)

Open /boot/grub/menu.lst in the editor of your choice (a sample of the finished entry is shown after these steps)

a. Remove the kernel entry with ‘gz’ in the name

b. Rename the first “module” entry to “kernel”

c. Rename the second “module” entry to “initrd”

i. SuSE and Debian require that entries pointing to root devices by a direct path, such as "/dev/hd*" or "/dev/sd*", be modified to point to /dev/xvd*

d. (optional) Modify the title of this entry

e. Edit the line “default=” to point to the modified xen kernel entry

i. The entries begin counting at 0: the first entry in the list is 0, the second entry is 1, and so on

ii. In our example the desired default entry is "0"

f. (optional) Comment out the "hiddenmenu" line if it is present (this will allow a kernel choice during boot if needed for recovery)

g. Save your changes
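
For illustration only, here is a sketch of the before and after (the kernel version, partition numbers, and device names are assumptions for this example, not values taken from your VM). A typical entry as imported, which pygrub cannot boot:

title XEN
root (hd0,0)
kernel /boot/xen.gz
module /boot/vmlinuz-2.6.16.60-0.21-xen root=/dev/sda1
module /boot/initrd-2.6.16.60-0.21-xen

And the same entry after steps a through d:

title SLES Xen (PV)
root (hd0,0)
kernel /boot/vmlinuz-2.6.16.60-0.21-xen root=/dev/xvda1
initrd /boot/initrd-2.6.16.60-0.21-xen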

5. Edit fstab to match the disk device changes

a. Open /etc/fstab in the editor of your choice.

b. Replace the "hd*" entries with "xvd*" (see the example after these steps)

c. Save changes
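
As a sketch of what this looks like (the partition layout here is an assumed example, not taken from your VM):

/dev/hda1 / ext3 defaults 1 1
/dev/hda2 swap swap defaults 0 0

becomes:

/dev/xvda1 / ext3 defaults 1 1
/dev/xvda2 swap swap defaults 0 0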

6. Shut down the guest but do not reboot.

a. shutdown -h now

Edit the VM record of the SLES VM to convert it to PV boot mode

In this example the VM is named “sles”

7. From the console of the XenServer host, execute the following xe commands:

a. xe vm-list name-label=sles params=uuid (retrieve the UUID of the vm)

b. xe vm-param-set uuid=<vm uuid> HVM-boot-policy="" (clear the HVM boot mode)

c. xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub (set pygrub as the boot loader)

d. xe vm-param-set uuid=<vm uuid> PV-args="console=tty0 xencons=tty" (set the display arguments)

i. Other possible options are: “console=hvc0 xencons=hvc” or “console=tty0” or “console=hvc0”

8. xe vm-disk-list uuid=<vm uuid> (discover the UUID of the VBD that connects the virtual disk)

9. xe vbd-param-set uuid=<vbd uuid> bootable=true (this sets the disk device as bootable)
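
To verify the new parameters before booting, a quick check can be run from the host console (xe vm-param-list is a standard xe command; the grep filter is just a convenience):

xe vm-param-list uuid=<vm uuid> | grep -E "HVM-boot-policy|PV-bootloader|PV-args"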

The VM should now boot paravirtualized using a Xen-aware kernel.
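
A quick way to confirm this from inside the guest is to check the running kernel release, which should end in "xen" with the packages installed above:

uname -r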

When booting the virtual machine, it should start up in text mode with the high-speed PV kernel. If the virtual machine fails to boot, the most likely cause is an incorrect grub configuration; run the xe-edit-bootloader script (e.g. xe-edit-bootloader -n sles) at the XenServer host console to edit the grub configuration of the virtual machine until it boots.

Note: If the VM boots and mouse and keyboard control does not work properly, closing and re-opening XenCenter generally resolves this issue. If the issue is still not resolved, try other console settings for PV-args, being sure to reboot the vm and close and re-open XenCenter between each setting change.

Installing the XenServer Tools within the virtual machine:

Install the XenServer tools within the guest:

1. Boot the paravirtualized VM (if not already running) into the xen kernel.

2. Select the console tab of the VM

3. Select and right-click the name of the virtual machine and click "Install XenServer Tools"

4. Acknowledge the warning.

5. At the top of the console window you will notice that "xs-tools.iso" is attached to the DVD drive, along with the Linux device ID it receives within the VM.

6. Within the console of the virtual machine:

a. mkdir /media/cdrom (Create a mount point for the ISO)

b. mount /dev/xvdd /media/cdrom (mount the DVD device)

c. cd /media/cdrom/Linux (change to the dvd root / Linux folder)

d. bash install.sh (run the installation script)

e. answer “y” to accept the changes

f. cd ~ (to return to home)

g. umount /dev/xvdd (to cleanly dismount the ISO)

h. In the DVD Drive, set the selection to “<empty>”

i. reboot (to complete the tool installation)

7. Following reboot, the General tab of the virtual machine should report its Virtualization state as "Optimized"

Distribution Notes

Many Linux distributions have differences that affect the process above, but in general the process is similar across distributions.

Removal of VMware Tools was tested following import to XenServer, and I do not recommend removing VMware Tools after the VM has been migrated to XenServer. If you want to remove VMware Tools, the VM must still be running on a VMware platform when the uninstall command is executed within the VM (rpm -e VMwareTools).

Some distributions have a kernel-xenpae in addition to the kernel-xen. If PAE support is desired (or required) in the virtual machine, please substitute kernel-xenpae in place of kernel-xen in the instructions. Please see the distribution notes for full details.

Saturday, June 27, 2009

PV enabling an HVM from VMware on XenServer (CentOS RedHat)

This example works for RedHat and CentOS; the instructions are slightly different for SLES and Debian.
As a condition for paravirtualization to work, a kernel that supports the Xen hypervisor needs to be installed and booted in the virtual machine.
Installing the XenServer Supported Kernel:
1. After importing the VM as an HVM, boot the virtual machine and open the console.
2. (optional) Update the packages within the VM to the latest revisions
a. If the kernel-xen package is installed from an online repository, best practice is to fully update the distribution first to avoid mismatches between package build revisions.
3. Install the Linux Xen kernel.
a. yum install kernel-xen
i. The Xen-aware kernel is installed and entries are created in grub.
4. Build a new initrd without the SCSI drivers and with the xen PV drivers
a. cd /boot
b. mkinitrd --omit-scsi-modules --with=xennet --with=xenblk --preload=xenblk initrd-$(uname -r)xen-no-scsi.img $(uname -r)xen
i. This builds a new initrd for booting with pygrub that does not include SCSI drivers, which are known to cause issues with pygrub and Xen virtual disk devices (an expanded example follows).
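To make the substitution concrete, assume uname -r returns 2.6.18-128.el5 (a hypothetical version used only for this sketch); the command then expands to:
mkinitrd --omit-scsi-modules --with=xennet --with=xenblk --preload=xenblk initrd-2.6.18-128.el5xen-no-scsi.img 2.6.18-128.el5xen
This relies on the running kernel and the installed kernel-xen sharing the same version string, which is another reason the full update in step 2 matters.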
5. Modify the grub boot loader menu (the default entries are not pygrub compatible)
Open /boot/grub/menu.lst in the editor of your choice (a sample of the finished entry is shown after these steps)
a. Remove the kernel entry with ‘gz’ in the name
b. Rename the first “module” entry to “kernel”
c. Rename the second “module” entry to “initrd”
i. SuSE and Debian require that entries pointing to root devices by a direct path, such as "/dev/hd*" or "/dev/sd*", be modified to point to /dev/xvd*
d. Correct the *.img pointer to the new initrd*.img created in step 4
e. (optional) Modify the title of this entry
f. Edit the line “default=” to point to the modified xen kernel entry
i. The entries begin counting at 0: the first entry in the list is 0, the second entry is 1, and so on
ii. In our example the desired default entry is "0"
g. (optional) Comment out the "hiddenmenu" line if it is present (this will allow a kernel choice during boot if needed for recovery)
h. Save your changes
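For illustration, a finished pygrub-compatible entry might look like this (the kernel version and root volume are assumptions for this sketch; note the initrd line points at the no-scsi image built in step 4):
title CentOS (2.6.18-128.el5xen)
root (hd0,0)
kernel /vmlinuz-2.6.18-128.el5xen ro root=/dev/VolGroup00/LogVol00
initrd /initrd-2.6.18-128.el5xen-no-scsi.img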
6. Shut down the guest but do not reboot.
a. shutdown -h now
Edit the VM record of the CentOS VM to convert it to PV boot mode
In this example the VM is named “centos”
7. From the console of the XenServer host, execute the following xe commands:
a. xe vm-list name-label=centos params=uuid (retrieve the UUID of the vm)
b. xe vm-param-set uuid=<vm uuid> HVM-boot-policy="" (clear the HVM boot mode)
c. xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub (set pygrub as the boot loader)
d. xe vm-param-set uuid=<vm uuid> PV-args="console=tty0 xencons=tty" (set the display arguments)
i. Other possible options are: “console=hvc0 xencons=hvc” or “console=tty0” or “console=hvc0”
8. xe vm-disk-list uuid=<vm uuid> (discover the UUID of the VBD that connects the virtual disk)
9. xe vbd-param-set uuid=<vbd uuid> bootable=true (this sets the disk device as bootable)
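To confirm the disk is now flagged bootable, a quick check from the host console (standard xe syntax; the params list just narrows the output):
xe vbd-list vm-name-label=centos params=uuid,device,bootable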
The VM should now boot paravirtualized using a Xen-aware kernel.
When booting the virtual machine, it should start up in text mode with the high-speed PV kernel. If the virtual machine fails to boot, the most likely cause is an incorrect grub configuration; run the xe-edit-bootloader script (e.g. xe-edit-bootloader -n centos) at the XenServer host console to edit the grub configuration of the virtual machine until it boots.
Note: If the VM boots and mouse and keyboard control does not work properly, closing and re-opening XenCenter generally resolves this issue. If the issue is still not resolved, try other console settings for PV-args, being sure to reboot the vm and close and re-open XenCenter between each setting change.
Installing the XenServer Tools within the virtual machine:
Install the XenServer tools within the guest:
1. Boot the paravirtualized VM (if not already running) into the xen kernel.
2. Select the console tab of the VM
3. Select and right-click the name of the virtual machine and click "Install XenServer Tools"
4. Acknowledge the warning.
5. At the top of the console window you will notice that "xs-tools.iso" is attached to the DVD drive, along with the Linux device ID it receives within the VM.
6. Within the console of the virtual machine:
a. mkdir /media/cdrom (Create a mount point for the ISO)
b. mount /dev/xvdd /media/cdrom (mount the DVD device)
c. cd /media/cdrom/Linux (change to the dvd root / Linux folder)
d. bash install.sh (run the installation script)
e. answer “y” to accept the changes
f. cd ~ (to return to home)
g. umount /dev/xvdd (to cleanly dismount the ISO)
h. In the DVD Drive, set the selection to “<empty>”
i. reboot (to complete the tool installation)
7. Following reboot, the General tab of the virtual machine should report its Virtualization state as "Optimized"

Wednesday, June 17, 2009

XenConvert 2.0.1 is released with VMware OVF compatibility

We have been working on adding Citrix Project Kensho OVF capabilities to XenConvert.

XenConvert is the free Citrix machine conversion utility. It is primarily focused on converting workloads to either Provisioning Server or XenServer; however, there are some more generic functions that are of interest to almost any virtualization folk.

The download can be found here:

http://www.citrix.com/English/ss/downloads/details.asp?downloadId=1855017&productId=683148

If this moves in the future, go here: http://www.citrix.com/English/ss/downloads/results.asp?productID=683148 and look for XenConvert in the XenServer download section.

OVF packages from any existing VMware product (known to date) can be consumed (imported) directly into XenServer.

The physical-to-OVF path can be run within a Windows machine, converting it to an OVF package (meta file + .vhd) or to just a VHD.

The OVF can then be imported into XenServer with XenConvert 2, or into XenServer and/or Hyper-V with the upcoming Kensho refresh.

The VHD can, of course, be copied to any engine that uses vhd.

It also does a binary conversion of VMDK to VHD and injects a critical boot device driver that is compatible with XenServer (and works with Hyper-V).

Also, XenServer .xva files (VM backups) can be converted to OVF.

Download and enjoy!

Thursday, June 11, 2009

Virtual Machine storage considerations

Storage.

Storage is your issue.

Storage is all about design and deployment.

Passthrough disks were first used for SQL servers, file servers, and Exchange servers: workloads that all require large storage volumes with high disk IO.

Using passthrough dedicates a physical storage resource to a VM. Before that, you carve up the physical resource.

The negative is that you lose flexibility in HA, failover, etc. Not that it cannot be done with proper planning, but it isn't just plug, click, and go. It does take planning, equipment, and design.

I know that lots of folks are producing incredibly large VHDs and using them as storage for VMs. What does this give you? A single VHD to back up and restore at the host level.

Otherwise, all backup is done at the machine level, with a traditional backup agent within the VM backing up the volume.

In my mind, it is all about how you design it and how you want to recover it.

After working through a Disaster Recovery exercise for a particular application, I frequently found myself re-architecting the deployment so I could get not only good running performance, but also a fast and easy-to-execute recovery of the system.

Our most limiting factor was frequently the time to recover the system from the backups (disk or tape).

Again, it is all about design.

The most humbling DR exercise is recovering the backup system itself, an exercise that is frequently overlooked. But that is a different story.

As far as tweaking goes: no, don't tweak storage, design smart.

Split the spindles, spread the load. Is putting two disk-intensive servers on the same RAID 5 array better or worse? Could that big array be split in two so one VM does not limit the other?

This is the big thing with storage and VMs.

One consideration is volume (gigabytes / terabytes).

The second consideration is performance. Unlike RAM and processor, the hardware IO bus is not carved into virtual channels. It frequently becomes THE limiting resource, especially when you have multiple disk-intensive VMs fighting for that same primary resource. In this case it is not a pool; it is a host resource. It is finite. It takes planning.

VM A will limit VM B (and vice versa) when they fight for the same read / write heads on the same disk array.

This is where you must think about the VMs that you place: where you put their OS VHD, where you put their data, how you do that storage, how you present storage, etc.

This is where the SAN argument really wins, as the throughput, carving of storage, and sheer number of spindles and heads really shine.

If you are resource limited and can't afford the SAN, then think about the workloads that you are placing and how you divide the physical resources.  Give each disk intensive VM enough to do its job, but isolate them from each other.

Another strategy is multiple hosts: each host gets one disk-intensive VM, and all other VMs on it are light on disk. This way they have less IO effect on each other.

Be creative.

Tuesday, June 9, 2009

The hypervisor processor is not being utilized

Recently, I have answered this question in the forums quite a bit.

The basic situation is: the processor within a virtual machine is running at 100%, but the host processors are sitting there, twiddling their thumbs at only 5% utilization (for example).
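
The arithmetic explains much of this: assuming an 8-core host and a single-vCPU guest (example numbers only), one vCPU pegged at 100% can account for at most 1/8, or about 12.5%, of total host CPU, so the host-level graph barely moves.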

The next response I usually see is: how can I tweak this so it behaves more like what I would see when not running in a VM?

First of all, stop there. Type I hypervisors (XenServer, ESX, Hyper-V, etc.) all have to manage the pool of physical resources. It is all about isolating, containerizing, containing, and sharing the physical resources.

If a guest goes 100% on a virtual processor, the host should not go 100% on a physical processor.

A Type I is a full hypervisor; all physical processors have been virtualized, and the hypervisor migrates virtual processors across the physical processors to balance out the load.

This is to maintain the integrity of the entire virtual environment and to prevent a single VM from hogging the entire system.

What you see with Hyper-V you should also see with ESX, XenServer, Virtual Iron, etc.

You will see different results with VirtualPC, Virtual Server, and VMware Server, because they are not full hypervisors; they are hosted virtualization solutions and share the physical resources in a different way.

Here is a scenario: what if the VM processor utilization were dynamic, and a VM were allowed to take more from the host as it needs it? If the VM spikes, then a host processor spikes.

As soon as you have more than one VM, all the other VMs now lose.

And if a second VM does the same thing, the remaining VMs lose even more.

In the meantime, the poorly written application that is causing the processor spiking in the first place is taking resources from all the other users sharing the pool of physical resources, for no good reason. He is just being a hog.

Also, that operating system that you log in to at the console: think of that as a VM as well. He also has to share in the pool of physical resources. So, if a single VM is allowed to spike a physical processor, then the host itself also loses and is not able to respond to all the other VMs that run on the host, including the hog VM.

From there it is just a downward spiral into the depths of an impending crash of the entire host and all of the VMs.

This is the hypervisor model. All machines running on the hypervisor must share (play nice) with each other, or everyone loses.

So each machine is placed into a container, and that container is bounded.

These bounds can be modified on a VM-by-VM basis. And if you have a single host only running a couple of VMs, then playing with these settings generally does no harm. As soon as you scale and add more and more VMs, this tweaking gets out of hand very quickly.
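
On XenServer, for example, those bounds are exposed through VCPUs-params; a minimal sketch of adjusting them (the weight and cap values below are arbitrary examples):

xe vm-param-set uuid=<vm uuid> VCPUs-params:weight=512
xe vm-param-set uuid=<vm uuid> VCPUs-params:cap=80

Weight is a relative priority against other VMs; cap is a hard ceiling expressed as a percentage of one physical CPU.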

You tweak VM A in a positive way, which in turn has a negative impact on VMs B and C. So you compensate and tweak VMs B and C, which in turn has an impact on VM A again. And you end up tweaking the environment to death.

The recommendation from all hypervisor vendors is to not mess with the default settings unless absolutely necessary.  And if you do, document it very well.

Now, if you have a single VM that is misbehaving, then you need to dive into that particular VM (just like a physical server) to determine why he is spiking the processor. Is it an application? Is it threading? Is it device drivers? Was the VM converted from another platform or physical installation?

There are tons of factors. But always begin by looking at the application or process that is taking the processor and expand from there.