VMware Converter 5 Preview

I was listening to the VMware Communities Podcast the other week (6/8/11), and there was some discussion around the new features the upcoming VMware Converter has to offer. Since a publicly available document is now out on the communities site (posted June 3, 2011) and the product is in public beta, I thought the new enhancements would make for a good blog post.

One of the most anticipated features in this version is the ability to align disk volumes as they are imported to vSphere. This option is on by default and uses a 1 MB disk boundary for Windows-based operating systems and a 64 KB boundary for Linux OSes. Don’t worry, you can adjust it based on your storage array’s parameters. For those machines that have already been imported and are sitting there misaligned, a quick V2V might be in order to bring them in line (to coin a phrase).
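To show what that alignment check actually means, here is a minimal Python sketch; the 1 MB and 64 KB boundaries come from the beta document above, while the partition offsets are made-up numbers for illustration:

```python
# Check whether a partition's starting offset falls on the alignment boundary
# that Converter 5 would use (1 MB for Windows guests, 64 KB for Linux guests).

ALIGNMENT = {"windows": 1024 * 1024, "linux": 64 * 1024}  # boundary in bytes

def is_aligned(start_offset_bytes: int, guest_os: str) -> bool:
    """Return True if the partition start is an exact multiple of the boundary."""
    boundary = ALIGNMENT[guest_os]
    return start_offset_bytes % boundary == 0

# Classic misaligned case: older Windows installs start the first partition
# at sector 63 (63 * 512 bytes), which lines up with neither boundary.
print(is_aligned(63 * 512, "windows"))    # False -> candidate for a quick V2V
print(is_aligned(2048 * 512, "windows"))  # True  -> already on a 1 MB boundary
```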

Another great new feature is that data is now encrypted from the source to the destination server during the conversion process. This is great for those cross-segment P2Vs that take place over a WAN or through/to a DMZ.

Additionally, for Linux migrations, this version of Converter can now preserve the logical volume manager (LVM) configuration, which was lost in prior P2Vs.

As this product moves from public beta to full release, I will be sure to follow up on this post to see if anything has changed or been added.

PXE Manager for VMware vCenter

One of the products I have been keeping my eye on over the last few months, from Max Daneri over at VMware Labs, is PXE Manager for vCenter, released in April 2011. The main goal of this tool is automated provisioning of large quantities of ESXi nodes (sorry, no ESX), and it provides host state (firmware) backup, restore, and archiving capabilities. You can also link it with multiple vCenter servers, and it has patch management options integrated within the tool.

Installation of the VMware PXE Server (Manager) side of this product is a simple executable (vpxecmd.exe) that is run on a stand-alone server or VM within the environment. You will need admin control of the vCenter server to complete the install, and the installer actually creates a web server with self-signed certificates for it. During installation you will need to specify the FQDN or NetBIOS name of the vCenter server; an IP address cannot be used. *One word of caution: if you are using the stateless feature, you must install and configure Microsoft NFS server.*

The other part is the VMware PXE Plugin, which is installed from within the vSphere Client just like any other plugin you enable through the client. Once it is downloaded and the install is complete on the client end, you will see a Solutions and Applications section in the Home view of your client, where you can access the new PXE Manager.

More great integration with this plugin is the ability to deploy directly to vCloud Director as well as to Cisco UCS blades. Additionally, you can run MEMTEST on your hosts, which could be quite handy for diagnosing problems.

I might have to do a follow-up to this post as this product matures and is enhanced, since I see it as a great asset for keeping a cluster uniform in conjunction with host profiles. Best of all, it is free!

Storage I/O Control (SIOC) and the VMware Disk Scheduler

There are a number of fundamentals worth discussing when looking at how VMware SIOC really works. Automatically throttling I/O based on load is a great feature in itself, but what actually triggers that behavior, and what mechanisms make up this feature?

In the storage world, there is one enemy of performance that outweighs all the rest and really causes major issues for data access: latency. Simply put, Storage I/O Control is the prioritization of I/O across every ESX server that leverages a common datastore, with additional detection of SAN bottlenecks. It accomplishes this by using the virtualized storage stack and giving priority to the VMs with higher disk shares.
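To make that share-based prioritization concrete, here is a rough Python sketch of the idea; the VM names, share values, and queue size are invented for the example and this is not VMware's actual implementation:

```python
# Hypothetical example: divide a throttled datastore queue among VMs
# in proportion to their disk shares (the basic idea behind SIOC).

def divide_queue_slots(vm_shares: dict[str, int], total_slots: int) -> dict[str, int]:
    """Split the available queue slots proportionally to each VM's shares."""
    total_shares = sum(vm_shares.values())
    return {vm: round(total_slots * shares / total_shares)
            for vm, shares in vm_shares.items()}

# Three VMs on the same datastore; the SQL VM was given high shares.
vm_shares = {"sql01": 2000, "web01": 1000, "test01": 500}
print(divide_queue_slots(vm_shares, total_slots=32))
# -> {'sql01': 18, 'web01': 9, 'test01': 5}
```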

VMware Virtualized Storage Stack

The virtualized storage stack within vSphere has two disk scheduler components: the host-level disk scheduler and the datastore-level disk scheduler.

Host-Level Disk Scheduler: Virtual machines that reside on the same ESX node have their I/O traffic prioritized only when the local host bus adapter (HBA) becomes overloaded. This scheduler has been around since the ESX 3.x days and has configurable limits for I/O throughput.
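As a hypothetical illustration of that "only when the HBA is overloaded" behavior, the sketch below passes I/O straight through until the adapter queue is saturated and only then splits it by disk shares; the queue depth and request counts are made up:

```python
# Hypothetical host-local behavior: shares only matter when the HBA queue is
# saturated; otherwise every VM's requests are passed straight through.

HBA_QUEUE_DEPTH = 64  # example adapter queue depth, not a real configured value

def schedule_host_io(vm_pending: dict[str, int], vm_shares: dict[str, int]) -> dict[str, int]:
    total_pending = sum(vm_pending.values())
    if total_pending <= HBA_QUEUE_DEPTH:
        return dict(vm_pending)  # no contention, no prioritization needed
    total_shares = sum(vm_shares.values())
    # Under contention, split the adapter queue proportionally to disk shares.
    return {vm: min(vm_pending[vm], HBA_QUEUE_DEPTH * vm_shares[vm] // total_shares)
            for vm in vm_pending}

print(schedule_host_io({"sql01": 20, "web01": 10}, {"sql01": 2000, "web01": 1000}))
# No contention (30 <= 64): {'sql01': 20, 'web01': 10}
print(schedule_host_io({"sql01": 90, "web01": 40}, {"sql01": 2000, "web01": 1000}))
# Contention (130 > 64):    {'sql01': 42, 'web01': 21}
```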

Datastore (SAN) Disk Scheduler: vSphere 4.1 added this scheduler, which performs two functions and works only on block-based storage (e.g., Fibre Channel or iSCSI).

1) I/O prioritization across all ESX nodes in the cluster.

2) SAN path contention analysis/calculations.

Again, both of these actions are driven by the distributed disk scheduler’s analysis of each virtual machine’s share value.

Remember, SIOC kicks in only when the latency threshold has been breached (the default value is 30 ms). I also put together a very basic flowchart of this operation so you can see where the logic is injected in the process.
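Since the flowchart boils down to a simple latency check, here is a rough Python sketch of that trigger; the 30 ms default comes from above, while the sample latencies and the queue-depth step are invented for illustration:

```python
# Hypothetical control loop: SIOC-style throttling only engages when the
# datastore-wide latency crosses the congestion threshold (30 ms by default).

CONGESTION_THRESHOLD_MS = 30  # default latency threshold mentioned above

def adjust_host_queue_depth(observed_latency_ms: float, current_depth: int,
                            min_depth: int = 4, max_depth: int = 64) -> int:
    """Shrink the host's device queue under congestion, grow it back otherwise."""
    if observed_latency_ms > CONGESTION_THRESHOLD_MS:
        return max(min_depth, current_depth - 4)   # throttle this host
    return min(max_depth, current_depth + 4)       # latency is fine, relax

depth = 32
for latency in (12, 28, 45, 51, 33, 22):  # made-up latency samples from one datastore
    depth = adjust_host_queue_depth(latency, depth)
    print(f"latency={latency}ms -> queue depth {depth}")
```

In the real feature, each host’s slice of the device queue is also weighted by the aggregate shares of the VMs it runs, which is where the share values from the previous section come back into play.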

 

As an added benefit, SIOC lets you drive higher VM density per ESX host while still maintaining performance on each cluster node.

SIOC Support:

One thing to consider is the SIOC threshold setting on automatic tiering arrays. It must be adjusted based on the vendor’s recommendation so that the SAN is not adversely affected by SIOC.

At the time of this writing, there are four configurations that are not supported by SIOC:

  1. NAS-based arrays (NFS in particular)
  2. RDMs (Raw Device Mappings)
  3. Datastores that have multiple extents
  4. Datastores that are managed by multiple vCenter servers

Look for a subsequent posting on my blog about implementation considerations.