Day 1 at VMworld Begins with a Look at the Future Pain of Server Virtualization: Attending the IBTA Tech Forum

It’s day one at VMworld in Las Vegas, and while the day for me began in Omaha, NE, at 4:30 am CST before landing in Las Vegas around 7:30 am PST, I did not join the throngs basking in the VMworld love fest. Instead I spent the day educating myself about InfiniBand by attending the InfiniBand Trade Association’s (IBTA) annual tech forum held at Harrah’s (which is adjacent to The Venetian, where VMworld is being held). The reason I elected to attend the IBTA Tech Forum first and not VMworld is simple. Everyone already knows that server virtualization is the BIG thing. What everyone doesn’t know or understand is why InfiniBand is making a case to become the next big thing in another form of virtualization: virtualizing server I/O.

If one looks strictly at the number of people attending the IBTA Tech Forum (55 – 60) versus the throng of 6,000+ descending upon VMworld (my best guess – VMware would not disclose the actual number), you would think I had missed the boat. But judging by the number of grey hairs in the room (55 – 60 also seemed to be the average age of those in attendance at the IBTA) versus the vast numbers attending VMworld who were in their 30s, I doubt it. Many in attendance at the IBTA were the same individuals who were virtualizing servers and infrastructure before it was in vogue to do so. Now these same individuals are pointing out the new problems that server virtualization creates and building a case as to why InfiniBand, used as a server I/O interconnect, is the logical technology to solve them.

I choose the term “server I/O interconnect” carefully because InfiniBand advocates made a critical mistake when they initially marketed InfiniBand, and they admit it, at least privately. The initial noise around InfiniBand was that it would become the data center interconnect and eventually replace Ethernet in the data center. As anyone who works in a data center knows, that’s ludicrous. So not only did it show that the InfiniBand folks were drinking a little too much of their own Kool-Aid, it hurt the credibility of InfiniBand in an industry that has a long memory and is slow to forgive.

This time around the InfiniBand camp does not want to get into an “InfiniBand versus Ethernet” debate. It will lose that debate every time and it knows it. By positioning InfiniBand as complementary to Ethernet and avoiding the “rip and replace Ethernet” concern, it better positions itself to make an entrance into data centers.

The good news for InfiniBand is that demand for high-speed, high-throughput, low-latency interconnects is on the verge of exploding. The bad news is that no one knows about it (other than the 60 or so people who showed up for the tech forum). The big challenges that InfiniBand faces in addressing this are:

  • Positioning itself relative to the forthcoming Data Center Ethernet (10Gb) as complementary to it and not a replacement for it
  • Overcoming Cisco’s marketing of Ethernet at the executive level
  • Providing mature Infiniband management tools
  • Educating people on how InfiniBand differs from Ethernet
  • Agreeing among its backers on a common message as to why InfiniBand is so compelling and how companies should first look to deploy it

So how close is InfiniBand to having a realistic play in enterprise data centers? Best guesses put it anywhere from 1 – 4 years away. The reason for the delay is that wide-scale corporate adoption of server virtualization is just getting under way, as only about 7% of corporations have adopted it according to Gartner VP John Enck. It is not until a large number of companies implement server virtualization and start to experience some of the pain it creates that they will begin to understand its performance issues. More specifically, I/O on virtualized servers starts to surface as a performance bottleneck that InfiniBand is better suited than Ethernet to solve.

While this is not the right time to get into all of the technical details of why that is the case, the overhead of managing I/O impacts virtual servers such that hardware utilization of CPU and memory can top out at around 40% using Ethernet due to the latency involved in waiting for responses from network and storage resources. InfiniBand’s higher throughput (40 Gb/sec) and lower latency eliminate this I/O bottleneck, and it can also be more cost effective and use less power than Ethernet.
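To make the utilization argument a little more concrete, here is a rough back-of-envelope sketch of how per-request I/O wait time caps the useful work a consolidated host can do. The numbers in it are hypothetical illustrations chosen only to show the shape of the math, not measurements presented at the forum or vendor benchmarks.

```python
# Hypothetical back-of-envelope model: if a virtualized host issues
# synchronous I/O requests, the share of wall-clock time its CPUs spend
# doing useful work is roughly compute_time / (compute_time + io_wait_time)
# per request. All figures below are illustrative assumptions.

def effective_utilization(compute_us: float, io_wait_us: float) -> float:
    """Fraction of time spent computing rather than waiting on I/O."""
    return compute_us / (compute_us + io_wait_us)

# Assumed figures: 40 microseconds of compute per I/O request,
# ~60 microseconds round-trip wait on a higher-latency fabric versus
# ~10 microseconds on a lower-latency one.
print(f"Higher-latency fabric: {effective_utilization(40, 60):.0%}")  # ~40%
print(f"Lower-latency fabric:  {effective_utilization(40, 10):.0%}")  # ~80%
```

The point of the sketch is simply that when each I/O round trip takes longer than the compute it feeds, the host spends more time waiting than working, which is consistent with the roughly 40% ceiling described above.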

VMworld is undoubtedly where the buzz is. Goofy VMware signs are everywhere; EMC is giving away T-shirts and $100 gas cards, and some vendor was giving away green stuffed animals that looked like Kermit the Frog (for what reason, I have no idea). Yet the real action today was behind the scenes as the InfiniBand industry refines its message and shores up its product lines to get ready to go into the enterprise, because everyone knows the next generation of server virtualization problems, in the form of I/O congestion, is coming. The big question is whether companies will turn to Ethernet or InfiniBand to solve this problem. Right now the IBTA wishes it knew the answer to that question.
