Is FCoE a Diabolical Plot? Musings on SNW’s Day 2 FCoE Announcements

I initially intended to share in this blog posting what I learned from my briefings on Day 3 of Storage Networking World (SNW). However, I’ve had some more time to digest the news surrounding the Fibre Channel over Ethernet (FCoE) announcements at SNW on Tuesday, and the more I think about it, the more this whole FCoE initiative strikes me as a huge setup, carefully orchestrated by the FC industry, to lock users into Fibre Channel (FC). Though this was hinted at about a year ago in an article that appeared on Computerworld’s website, the roadmap and agenda of how vendors like Brocade, Emulex and QLogic and, to a lesser extent, Cisco and Intel, intend to do so over the next 10 years is now clearer.

My understanding is that 8 Gb/s FC represents the end of the upgrade cycle for the current generation of FC technology. Whether enterprises are running 1, 2, 4 or 8 Gb/s FC, the underlying optics are essentially the same, allowing for interoperability between new HBAs and existing FC cables and directors. Most importantly, the FC infrastructure did not need to change dramatically from generation to generation as upgrades occurred or new products were released.

However, those days are over. Again, as I understand it, the next FC upgrade cycle in data centers beyond 8 Gb/s, whether it is to 10 or 16 Gb/s FC, is going to require a rip-and-replace of the current data center FC infrastructure. With that looming, FC vendors knew they needed to cooperate and collaborate to keep FC viable regardless of which FC technology path users choose. Otherwise, when users start to take a long, hard look at the pros and cons of FC versus InfiniBand during the next data center refresh cycle, 40 Gb/s InfiniBand stands an above-average chance of replacing FC.

So to avert this, my guess is that the FC vendors concocted a plan: use FCoE to connect all enterprise servers, get a few analysts on board to endorse the idea, and then convince end users to take their eyes off the longer-term ramifications of using FCoE. By getting enterprise users to bite on FCoE and spend the next few years connecting the remaining 85% of their servers to existing FC SANs, users are locked into FC for the next 10 years, until the next disruptive technology comes along.

Now, with the remaining 85% of the servers in the data center running FCoE, the most logical upgrade path for the original 15% of servers and storage is FC. Then, regardless of whether the next FC upgrade is 10 Gb/s or 16 Gb/s, when the inevitable rip-and-replace comes in 2 – 4 years, FC lives on and InfiniBand remains a niche market.

Tuesday’s announcement had less to do with what’s best for end users and everything to do with preserving Brocade’s, Emulex’s and QLogic’s core FC business. To do so, they needed Intel and Cisco to come on board, support it and promote it. If this FCoE initiative fails, and users actually start to compare the benefits of InfiniBand to FC and realize that they can get 10x the benefits at the same cost, FC and InfiniBand could swap places. FC could become the new niche market, and InfiniBand may begin to dominate in the data center.

Look for my notes and thoughts on my Day 3 SNW briefings and meetings on Monday.

Jerome M. Wendt

About Jerome M. Wendt

Jerome Wendt is the President and Founder of DCIG, LLC, an independent storage analyst and consulting firm. Mr. Wendt founded the company in November 2007.


  • Rob says:

    Have a couple of issues with your thoughts.
    First, a company or an industry looking ahead to changes in technology and making plans for them isn’t really a conspiracy to defraud the consumer, though it is a bit harder to say this about the auto industry with a straight face. It is forward looking and seeing what you need to do to stay in business.
    Certainly FC technology is expensive and sometimes overly complex but it does work and it works well.
    Something that cannot be said for InfiniBand. Right now IB is more a curiosity than a viable solution for customers. If you are using IB for your servers, then adding your storage to it makes sense. But buying IB solely for storage is just silly at this time and will remain so for some time.
    But ultimately the market and the consumer will decide which technology makes sense for them, and having another option to consider is not a bad thing. Brocade, Emulex, QLogic, et al. will not stay in business if they deliver garbage.

  • Dan says:

    In this post, you use FCoE as a decision maker between FC and IB. One of the main pushers of FCoE is none other than Mellanox, the IB chip maker, and with the slow progress of 10GbE and IB being far ahead in both cost and performance, it is no wonder they want FCoE to run over IB.
    I see two domains here: the physical network, and the storage protocol. For the network, the choices are FC, IB or 10GbE. The protocol for networked storage is SCSI, carried over FC, Ethernet (iSCSI), or IB (iSER/SRP).
    I very much doubt that the protocol can choose the network. As far as storage management goes, FC has the most to offer today. That’s why FCoE makes sense: use it with IB/10GbE or straight FC. The switch management can be extended to non-FC servers and so on. This is not possible with iSCSI/iSER/SRP.
    IB technology is making slow progress into the enterprise. There is nothing on the horizon that even has the promise to look better. It has a long way to go before it overcomes the politics and skepticism, but from a technical point of view it is superior, so I believe it is here to stay.

  • Ryan says:

    “[Infiniband] has a long way to go before it overcomes the politics and skepticism, but from a technical point of view it is superior, so I believe it is here to stay.” -Dan
    This is the same thing that was said about ATM, Betamax, FireWire, and any number of other technologies that never gained widespread adoption.
    The economics determine the winner in the marketplace, not technical superiority. The commodity economics of Ethernet will likely make it the eventual winner, especially once 10GbE over twisted pair is widely available. Ethernet is fast enough, simple, backwards compatible, and inexpensive. I believe this means iSCSI over 10GbE is the best choice now, and iSCSI over 40 and 100 GbE in the future.
    iSCSI and Ethernet are where my organization is spending all of its storage dollars, and that won’t change until there is a can’t-miss economic case for FCoE or InfiniBand.
