Fusion-io’s Take on EMC’s VFCache (Formerly Known as ‘Project Lightning’); Interview with Fusion-io CMO Rick White Part V

EMC’s VFCache announcement caused a lot of buzz in the storage industry a few months ago, as some saw it as a direct response to Fusion-io’s very disruptive ioMemory architecture. Today, in the conclusion of my interview series with Fusion-io’s CMO Rick White, he provides his take on EMC’s recent VFCache announcement and how he sees it impacting both Fusion-io and EMC. (Editor’s Note: This interview with Rick was conducted when EMC’s VFCache was still known as “Project Lightning.”)

Jerome: There have been a lot of rumblings coming out of EMC about its “Project Lightning” and how it is a Fusion-io killer. How does Fusion-io view the potential threat that Project Lightning presents to Fusion-io?

Rick: Today EMC is talking about Project Lightning (formally announced as VFCache on February 6, 2012), which is a huge shift for them. EMC has traditionally never had a footprint in the server. That is not what it does.

From what I am hearing, the overall cost of ownership is not changing much. Hypothetically speaking, say you have a SAN that costs a million dollars. Now, with a new server caching solution, the SAN costs are cut in half.

That sounds pretty cool until you discover the cost for your 10 servers to have this new caching system installed is $50,000 per server, so $50,000 times 10 for the caching solution is $500,000. Add that to the half a million dollars and suddenly you are back at a million dollars. Nothing has changed. The only change is how EMC is going to invoice you for it.
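To make that arithmetic explicit, here is a minimal sketch using the hypothetical figures Rick cites; every number is illustrative, not actual pricing.

```python
# Back-of-the-envelope comparison using the hypothetical figures from the example above.
san_cost_today = 1_000_000                 # current SAN spend
san_cost_with_cache = san_cost_today / 2   # SAN spend cut in half by server-side caching
servers = 10
cache_cost_per_server = 50_000             # hypothetical cost to add caching to one server

caching_total = servers * cache_cost_per_server        # $500,000
total_with_cache = san_cost_with_cache + caching_total

print(f"SAN only:      ${san_cost_today:,.0f}")        # $1,000,000
print(f"SAN + caching: ${total_with_cache:,.0f}")      # $1,000,000
```

Both totals come out at $1,000,000: the spend is rearranged between the SAN and the servers, not reduced.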

I would be surprised if EMC builds a solution that decouples performance from capacity with a scalable deployment that lowers overall cost and improves efficiency. This is a fundamental difference we see between our two companies. For us, it is performance plus capacity. For EMC, it is performance times capacity.

It happened to Digital Equipment Corporation (DEC). It happened to all the mainframe manufacturers. The client-server environment was tough for them. They were selling quarter-of-a-million-dollar proprietary systems and suddenly a competitive solution emerged that had essentially the same performance for $10,000 to $15,000 per box, based on commodity, off-the-shelf hardware and software components. It was a huge shift then. It will be a huge shift now for anyone in the storage business, including EMC. It will be interesting to watch what happens.

Jerome: Can’t Fusion-io just go to server manufacturers and beat EMC at its own game?

Rick: Server manufacturers are getting closer to us. Fusion-io has been establishing relationships with the world’s largest server manufacturers for the last couple of years. EMC is the newcomer to this space and is being forced to play catch-up with us.

That is probably why they need to OEM key technology and are looking at acquiring other pieces of technology. The market is moving fast and they just do not have time to do it themselves.

But it is frustrating. Others think flash on PCI-Express is all we do. Somehow they think they are entering ‘SSD nirvana’ because they have put flash on a PCI-Express card. You cannot just put flash on a PCI-Express card and call it ‘good.’ All they have done is take the metal coverings off of flash drives and stick them on a RAID controller. To be like Fusion-io, you also have to eliminate the storage protocols and have the CPU interact with flash natively over the PCI-Express bus.

Flash drives have been speaking to the CPU through PCI-Express since they first launched. There are only two ways to talk to the CPU: the system bus (PCI-Express) or the memory bus. That’s it. There is no other way. Every host bus adapter, RAID controller and graphics card communicates with the CPU through PCI-Express.

So just because you put flash drives on a RAID controller and put it in a PCI-Express slot, that is no different than a RAID controller with eight drives hanging off of it. You are still going to have a ton of context switching, which can cause dramatic and unpredictable swings in latency.

Jerome: So is that all it takes to be like Fusion-io? Lose the storage protocols and interact natively with the flash?

Rick: That is only the first step. Once you do that, you have to onload to the host CPU. This is similar to RAM. I have not seen a memory DIMM with an embedded CPU. I have not seen a memory DIMM with SRAM as a cache either. Most of us expect that, with more RAM, we can get our CPUs to do more work, which means our CPU utilization goes up.

The ideas that server flash has to use CPU offload and RAM as a cache are both concepts inherited from hard disk drives. Hopefully the industry catches on to the fact that to unleash flash’s true potential, it needs to be treated like memory rather than like a hard disk drive.

The disk infrastructure was designed for a very slow medium: magnetic disk. Lose the storage protocols. Use DMA straight to the NAND flash. Let the CPU make calls and access the NAND flash directly, rather than as if it were accessing a disk.
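To make that contrast concrete, here is a minimal, generic sketch in Python; it is not Fusion-io’s VSL API, and the file path is a hypothetical stand-in for a flash device. It contrasts disk-style access through explicit read/write calls with memory-style access through a mapping the CPU can address directly.

```python
# Generic illustration of the two access models; this is not Fusion-io's VSL API.
# The backing file below is a hypothetical stand-in for a flash device.
import mmap

PATH = "/tmp/flash_backing_file"   # hypothetical path for the example
SIZE = 4096

# Create a small backing file of the right size.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

# Disk-style access: every operation goes through an explicit I/O call.
with open(PATH, "r+b") as f:
    f.seek(128)
    f.write(b"block-style write")

# Memory-style access: map once, then read and write with ordinary
# offset/slice semantics instead of per-access read()/write() calls.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as mem:
        payload = b"memory-style write"
        mem[256:256 + len(payload)] = payload    # plain store into the mapping
        print(bytes(mem[128:145]))               # plain load: b'block-style write'
```

The mapping does not remove the operating system entirely, but it illustrates the shift Rick describes: the CPU addresses the data directly instead of funneling every access through a storage protocol stack.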

If they do, then maybe we will stop hearing competitors say things like, “You use host CPU cycles.” The simple answer to that is, “Yes, yes we do. Just like your server’s RAM does. We are persistent memory, not disk.”

Another misconception is that processing is the bottleneck. It is not. The reason many of today’s biggest data centers scale out is not to get more processors; customers do not need more CPUs. They need memory. Scaling out is often the easiest way to get the memory they need to keep data hot. They cannot go to disk for everything because of latency, and what they are doing is not CPU-intensive.

I cannot tell you how many Fusion-io customers use less than 20 percent of their CPUs before adding Fusion-io. We help them improve the efficiency of their servers by allowing each CPU to do more work and increase utilization, which ultimately increases overall work output and productivity.

In Part I of this series, Rick discussed how server-based flash is poised to change the enterprise.

In Part II of this interview series, Rick discussed why this decoupling of I/O performance from storage is necessary and why it creates a new tier of memory as opposed to a new tier of storage.

In Part III of this series, Rick explained the new Fusion-io Octal drive, what makes it different from Fusion-io’s earlier ioDrives and how Fusion-io is going to market with it.

In Part IV of this interview series, Rick and I discussed why Fusion-io is opening up its Virtual Storage Layer (VSL) APIs to developers.
