The Virtual Machine I/O Blender Problem – What It Is and How to Fix It – Virsto CEO Mark Davis Interview Part II

Companies are adopting server virtualization at an accelerating rate and, as they do, the demand for performance from back-end storage hardware is growing right along with it. To accommodate this, enterprises need a way to increase the I/O throughput of their virtual machines (VMs). Today, I continue my blog series with Virsto Software CEO Mark Davis, in which we discuss the VM I/O blender problem, what it is, and how Virsto boosts VM performance using a hypervisor plug-in that delivers up to ten times the performance that VM hypervisors natively provide.

Ben:   One problem associated with hosting multiple VMs on the same storage is that of merging or consolidating the data those VMs share on disk. I am sure this problem persists even with Virsto. Because of this sharing of disk space, you could have data from two different VMs sitting together, right next to each other, interspersed, correct?

Mark:  That is very correct. Of course it is almost never just two VMs. It is more like 20 or, in the case of virtual desktops, up to 200. But that is exactly it. Doing this well is a fundamental requirement in virtualization, because we do have numerous operating systems running at the same time on one physical piece of hardware. Then we have lots of physical servers also doing I/O to shared storage.

The deleterious performance impact of that is enormous. The loss of I/O throughput and I/Os per second caused by what we call the virtual machine I/O blender (a term lots of other people are using these days too) can easily reduce the total throughput that a given set of hardware can deliver by 90 percent.

What we do is merge these streams of I/O together so that we can much more efficiently use the I/O channel and disk spindles. We do this in a way that is completely transparent to the hypervisor and guest operating systems, but can deliver performance that is up to 10 times faster. Customers who run our software show us benchmarks where adding our software gets them 10 times the throughput.
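Mark's description of merging many VMs' random I/O into one efficient stream resembles a log-structured write approach. The sketch below is purely illustrative (class and variable names are my own, and Virsto's actual implementation is proprietary): writes from any VM are appended to a single sequential log, and an index maps each VM's virtual block to its position in that log, so the disk never has to seek between the scattered block layouts of 20 or 200 different guests.

```python
# Hypothetical sketch of coalescing the "I/O blender": append every write
# from every VM to one sequential log and keep an index that maps
# (vm_id, block_no) -> position in the log. Illustrative names only.

class SequentialLog:
    def __init__(self):
        self.log = []          # the single, sequentially written stream
        self.index = {}        # (vm_id, block_no) -> offset in the log

    def write(self, vm_id, block_no, data):
        # Every write, from any VM, lands at the tail of the log:
        # one sequential stream instead of many random ones.
        self.index[(vm_id, block_no)] = len(self.log)
        self.log.append(data)

    def read(self, vm_id, block_no):
        # Reads consult the index to find the block's latest location.
        return self.log[self.index[(vm_id, block_no)]]

# Interleaved writes from 3 VMs still produce one sequential stream.
log = SequentialLog()
for vm in range(3):
    for block in (7, 42, 7):   # block 7 is overwritten later
        log.write(vm, block, f"vm{vm}-block{block}")

print(log.read(1, 42))   # -> vm1-block42
```

The trade-off in a real log-structured design is that reads become indirect and the log eventually needs garbage collection; the win is that the back-end sees large sequential writes instead of small random ones.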

At the same time, we have a way of intelligently managing the layout of the data on the back-end storage so that it is both high performance and super space efficient. If we had 1,000 VMs all running Windows 8 in a Virtual Desktop Infrastructure (VDI) environment, the 99 percent of each OS image that is exactly the same across all one thousand images is stored only once. We therefore save an enormous amount of disk space in the process.

Ben: Is there an inherent deduplication process going on within your software?

Mark:  Yes. Although I should be more precise and say it is more like “no dupe” than dedupe. What we mean by that is dedupe is all about letting some process in your data center waste a lot of disk space by duplicating things, then some time later running a process that will get rid of those duplicates. We do not make the duplicates in the first place.
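The inline "no dupe" idea Mark contrasts with post-process dedupe can be sketched as follows. This is an assumption-laden illustration, not Virsto's actual design: each block is hashed as it is written, and if identical content already exists, the store records a reference to the existing copy instead of writing a second one, so duplicates never land on disk in the first place.

```python
# Hedged sketch of inline ("no dupe") deduplication: hash each block at
# write time and store only never-before-seen content. Illustrative only.
import hashlib

class NoDupeStore:
    def __init__(self):
        self.blocks = {}       # content hash -> the single stored copy
        self.vm_blocks = {}    # (vm_id, block_no) -> content hash

    def write(self, vm_id, block_no, data):
        h = hashlib.sha256(data).hexdigest()
        # Store the block only if this content has never been seen.
        self.blocks.setdefault(h, data)
        self.vm_blocks[(vm_id, block_no)] = h

    def read(self, vm_id, block_no):
        return self.blocks[self.vm_blocks[(vm_id, block_no)]]

store = NoDupeStore()
os_image = b"identical Windows system block"
for vm in range(1000):                 # 1,000 VDI desktops...
    store.write(vm, 0, os_image)       # ...all writing the same OS block
store.write(0, 1, b"unique user data")

# 1,001 logical writes, but only 2 physical copies stored.
print(len(store.blocks))   # -> 2
```

This also hints at the answer to Ben's next question: the "index lookup" is a hash-table check at write time, which is cheap compared with the random I/O it avoids.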

Ben:  How do you manage that? Do you have some kind of index lookup? It seems like maintaining that would take extra processing power.

Mark:  You think it would, although it turns out in a couple of benchmarks that when you add our software to an ESX server, the CPU utilization goes down dramatically. Why? Because we are managing the I/Os so much more efficiently by streaming all the I/Os from all of the different guest operating systems into one I/O stream.  

Virsto is using the channels much more efficiently. It is using the Direct Memory Access (DMA) buffers and all of the hardware more efficiently. So even though we are doing this whole extra processing, the CPU utilization goes down.

We just had a customer send us some data from a benchmark they ran on VMware ESX 4.1. Without Virsto software installed, they had seen a few hundred I/Os per second out of a given set of hardware and servers.

The CPU utilization just to run the benchmark in this case was almost 60 percent; the CPU was heavily consumed. They added our software and got 10 times the I/O. At the same time, it used roughly 90 percent fewer CPU cycles: instead of using 60 percent of the CPU, it used 6 percent.

The reason Virsto can do that is how intelligently it uses the resources. It is really about solving the I/O blender problem I talked about before: instead of issuing a whole bunch of very small, chopped-up random I/Os, we can be extremely efficient about how we issue that I/O to the back-end storage.

In Part I of this interview series, we looked at how Virsto creates a storage hypervisor in VMware vSphere to deliver incredible boosts in performance, even on traditional storage hardware.

In Part III of this interview series, we talk about where Virsto sits in the vSphere stack and how it works to deliver these increases in performance.

In Part IV of my blog series with Virsto Software CEO Mark Davis, we will look at how Virsto fits into the private cloud infrastructure storage space and what it does to optimize the performance of SSDs.
