Storage I/O Control (SIOC) and the VMware Disk Scheduler

A few fundamentals are worth covering to understand how VMware SIOC really works. Automatically throttling I/O based on load is a great feature in itself, but what actually triggers that throttling, and what mechanisms make it work?

In the storage world, one enemy of performance outweighs all the rest and causes major issues for data access: latency. Simply put, Storage I/O Control prioritizes I/O across every ESX host sharing a common datastore and additionally detects SAN bottlenecks. It accomplishes this through the virtualized storage stack by giving priority to the VMs with higher disk shares.
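To make the shares mechanism concrete, here is a minimal Python sketch of proportional-share allocation (my own simplification, not VMware's actual code): under contention, each VM gets a slice of the datastore's throughput in proportion to its disk shares. The IOPS figure and share values are invented for illustration.

```python
def allocate_iops(total_iops, vm_shares):
    """Split a datastore's available IOPS proportionally to disk shares."""
    total_shares = sum(vm_shares.values())
    return {vm: total_iops * s // total_shares for vm, s in vm_shares.items()}

# Hypothetical contention scenario: 10,000 IOPS to divide among three VMs.
print(allocate_iops(10000, {"db-vm": 2000, "web-vm": 1000, "test-vm": 500}))
# -> {'db-vm': 5714, 'web-vm': 2857, 'test-vm': 1428}
```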

VMware Virtualized Storage Stack

The virtualized storage stack within vSphere has two disk scheduler components: the host-level disk scheduler and the datastore disk scheduler.

Host-Level Disk Scheduler: Virtual machines residing on the same ESX node have their I/O traffic prioritized only when the local host bus adapter (HBA) becomes overloaded. This scheduler has been around since the ESX 3.x days and supports configurable limits on I/O throughput.
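The share and limit values the scheduler consumes are set per virtual disk. As a rough illustration, the following pyVmomi sketch edits an existing disk's allocation; the connection details, VM lookup, and the share/limit values are all placeholders, so treat this as an outline rather than a drop-in script.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details; certificate handling omitted for brevity.
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
vm = si.content.searchIndex.FindByDnsName(None, "db-vm.example.com", True)

# Grab the VM's first virtual disk.
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

# Raise its disk shares and cap its throughput; values are illustrative.
disk.storageIOAllocation.shares = vim.SharesInfo(level="custom", shares=2000)
disk.storageIOAllocation.limit = 1000  # IOPS

spec = vim.vm.ConfigSpec()
spec.deviceChange = [vim.vm.device.VirtualDeviceSpec(operation="edit",
                                                     device=disk)]
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```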

Datastore (SAN) Disk Scheduler: vSphere 4.1 added this scheduler, which performs two functions and works only on block-based storage (i.e., Fibre Channel or iSCSI):

1) I/O prioritization across all ESX nodes in the cluster.

2) SAN path contention analysis/calculations.

Both of these actions are carried out by the distributed disk scheduler, based on each virtual machine's share value. A rough sketch of this control loop follows.
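Conceptually, each host measures datastore-wide latency and, once the congestion threshold is breached, throttles its device queue depth in proportion to the disk shares of the VMs it runs. The Python sketch below is my own simplification, not VMware's actual algorithm; the threshold, queue depths, and share values are illustrative.

```python
THRESHOLD_MS = 30     # default SIOC congestion threshold
MAX_QUEUE_DEPTH = 32  # illustrative per-host device queue depth

def throttle(datastore_latency_ms, shares_per_host):
    """Return the device queue depth each host should use.

    shares_per_host: {host: sum of disk shares of its VMs on the datastore}
    """
    if datastore_latency_ms <= THRESHOLD_MS:
        # No contention: every host keeps its full queue depth.
        return {h: MAX_QUEUE_DEPTH for h in shares_per_host}
    # Contention: divide the cluster-wide depth proportionally to shares,
    # never exceeding a host's maximum and never dropping below 1.
    total_shares = sum(shares_per_host.values())
    cluster_depth = MAX_QUEUE_DEPTH * len(shares_per_host)
    return {h: max(1, min(MAX_QUEUE_DEPTH,
                          round(cluster_depth * s / total_shares)))
            for h, s in shares_per_host.items()}

# Example: esx1 runs high-share VMs, esx2 mostly low-share test VMs.
print(throttle(45, {"esx1": 3000, "esx2": 1000}))  # -> {'esx1': 32, 'esx2': 16}
```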

Remember, SIOC kicks in only when the latency threshold has been breached (the default is 30ms). I also put together a very basic flowchart of this operation so you can see where the logic is injected into the process.

[Flowchart: SIOC latency check and throttling logic]

As an added benefit, SIOC allows for maximum VM density per ESX host while maintaining performance on each cluster node.

SIOC Support:

One thing to consider is the SIOC congestion threshold when using auto-tiering arrays. It must be adjusted per the vendor's recommendation so that the SAN is not adversely affected by SIOC.
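When the vendor calls for a different value, the congestion threshold can be set per datastore through the API. Here is a rough pyVmomi sketch; the inventory path and the 50ms figure are placeholders, and `si` is assumed to be an active connection as in the earlier example.

```python
from pyVmomi import vim

# `si` is assumed to be an active SmartConnect session (see earlier sketch).
ds = si.content.searchIndex.FindByInventoryPath(
    "MyDatacenter/datastore/shared-lun-01")

spec = vim.StorageResourceManager.IORMConfigSpec()
spec.enabled = True
spec.congestionThreshold = 50  # ms -- use your array vendor's recommendation

si.content.storageResourceManager.ConfigureDatastoreIORM_Task(
    datastore=ds, spec=spec)
```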

At the time of this writing, there are four configurations that are not supported by SIOC:

  1. NAS-based arrays (NFS in particular)
  2. RDMs (Raw Device Mappings)
  3. Datastores with multiple extents
  4. Datastores managed by multiple vCenter Servers

Look for a subsequent posting on my blog about implementation considerations.
