Fibre Channel (FC) HBAs Will Not Be Embedded on Server Motherboards Anytime Soon; Interview with QLogic’s Vikram Karvat, Part 2

Ethernet adapters began migrating to LAN-on-motherboard solutions in the late 1990s, yet this practice never took hold for other technologies such as Fibre Channel. Even today, as Gen 6 (32Gb) is being introduced, the Fibre Channel (FC) market remains dominated by host bus adapters (HBAs). In this second installment of my interview with QLogic’s Vice President of Products, Marketing and Planning, Vikram Karvat, he explains why 32Gb FC HBAs are still installed as separate cards in servers, and provides insight into what new features may be released in the Gen 7 FC protocol.


QLogic 2700 HBA; Source: QLogic

Jerome: Are the new QLogic 32Gb FC HBAs embedded in server and/or storage array motherboards? If not, are there any plans to do so?

Vikram: The HBAs being discussed here are almost entirely add-in cards on the server side; there are no embedded FC HBAs on servers. As a result, the FC HBA port counts analysts report represent not only the ports shipped by vendors, but the ports actually being deployed for use on an annual basis. That is as close to a measure of natural demand as you could hope to find in any market.

Jerome: Why haven’t FC HBAs become embedded?

Vikram: A network card typically goes embedded when it hits north of 50 percent connectivity, that is, when more than half of the servers shipped attach to that network. To get north of 50 percent for FC, you would probably have to quintuple its volume; put another way, FC ships on only roughly one in ten servers today. It’s a different set of economics.

We previously talked a little about the increased use of FC on the all-flash array (AFA) side, but QLogic is also seeing an increase in the use and deployment of FC SANs in emerging markets like China. FC SAN deployments in China grew by 15 percent last year. That is huge, and the growth rate has been like that for probably the last two to three years. In the early years it was growing even faster than that, but from a relatively modest base.

But it’s no longer a modest base. It’s significant on a global scale in terms of how many SANs are being deployed. It is not at the same scale as in North America, but it is nonetheless measurable and is helping keep the market relatively stable.

From a use case perspective, it’s interesting because China is a market that tends not to spend money on something unless it’s absolutely necessary, so its adoption is an indicator of the stability of the FC market. FC remains the predominant storage interconnect for storage arrays as well as servers, and there are areas of growth like AFAs and emerging markets. All in all, FC is not a bad story. FC offers the availability, reliability, security, and lossless fabric that enterprises want.

Further, there is a lot of discussion about Remote Direct Memory Access (RDMA) and storage options that promise very low CPU utilization. But FC has always been a fully offloaded architecture with ultra-low CPU utilization (in the single digits), which is why it is used for Online Transaction Processing (OLTP) types of infrastructures. FC has also always been zero copy, i.e., it does not require the CPU to perform the task of copying data from one memory area to another.
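To make the zero-copy point concrete, here is a minimal C sketch. It is illustrative only, not QLogic code; the staging buffer and the split between the two paths are assumptions made for the example. It contrasts a copy-based receive path, where the CPU touches every byte of each I/O, with an offloaded path where the adapter has already placed the data via DMA and the CPU only checks the completion:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Minimal sketch, not QLogic code: contrasts a receive path where
     * the CPU copies every byte with a zero-copy path where the HBA has
     * already DMAed the payload into the application buffer. */

    #define BLOCK_SIZE 4096

    /* Copy path: the CPU moves all 4 KiB from a driver staging buffer
     * into the application buffer. CPU cost grows with I/O size. */
    static void copy_path(const uint8_t *staging, uint8_t *app_buf) {
        memcpy(app_buf, staging, BLOCK_SIZE);
    }

    /* Zero-copy path: the adapter placed the data directly; the CPU
     * only examines the completion status. CPU cost is constant. */
    static int zero_copy_path(const uint8_t *completion_status) {
        return *completion_status == 0; /* 0 = success in this sketch */
    }

    int main(void) {
        static uint8_t staging[BLOCK_SIZE];
        static uint8_t app_buf[BLOCK_SIZE];
        static uint8_t status = 0;

        copy_path(staging, app_buf);            /* O(BLOCK_SIZE) CPU work */
        printf("I/O ok: %d\n", zero_copy_path(&status)); /* O(1) CPU work */
        return 0;
    }

The difference is why a fully offloaded FC stack can keep host CPU utilization in the single digits even under heavy OLTP I/O: per-I/O CPU work stays constant rather than growing with transfer size.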

The notion that new storage networking implementations out there are somehow more efficient is potentially a bit of a fallacy. FC, as an industry, has not made a big deal out of these strengths because the industry assumed everybody understood them. We are having to remind people now.

Jerome: As FC is so mature and stable, what innovation is occurring?

Vikram: There are a number of areas of innovation where the industry is investing. Obviously, Gen 6 FC is good. Moving forward, the FC industry is in the process of defining Gen 7 FC as the next step up. Layering onto that, we are innovating in the flash space with Fibre Channel over Non-Volatile Memory Express (FC-NVMe).

FC-NVMe is an industry initiative to map NVMe drives directly over a fabric. Why, you might ask? The normal reasons to map something over a fabric are the ability to share, create pools, provision, and manage storage more effectively when it is connected, as opposed to having islands of flash floating around in servers.

The unique thing about FC-NVMe is that instead of using the standard SCSI stack, it bypasses the SCSI stack and uses native NVMe semantics. This reduces both access latency and the CPU overhead associated with the SCSI infrastructure on both the storage and server sides.
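To illustrate why skipping that translation matters, here is a hedged C sketch; the struct layout and function names below are hypothetical, not a real driver or FC-NVMe API. The legacy path encodes a SCSI READ(16) CDB that the array must decode again, while the FC-NVMe path carries a native NVMe read command end to end:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical sketch only: shows the extra translation step in the
     * legacy SCSI/FCP path versus native NVMe command semantics. */

    struct nvme_sqe { uint8_t opcode; uint64_t slba; uint16_t nlb; };

    /* Stub transport: a real HBA driver would hand frames to hardware. */
    static void send_frame(const char *path, size_t header_len) {
        printf("%s: %zu-byte command header on the wire\n", path, header_len);
    }

    /* Legacy FCP path: block request -> SCSI READ(16) CDB -> FCP frame.
     * The server encodes a CDB and the array decodes it: CPU work and
     * latency added at both ends of every I/O. */
    static void submit_via_scsi(uint64_t lba, uint16_t blocks) {
        uint8_t cdb[16] = {0};
        cdb[0] = 0x88;                               /* SCSI READ(16) opcode */
        for (int i = 0; i < 8; i++)
            cdb[2 + i] = (uint8_t)(lba >> (56 - 8 * i)); /* big-endian LBA */
        cdb[12] = (uint8_t)(blocks >> 8);
        cdb[13] = (uint8_t)blocks;
        send_frame("FCP/SCSI", sizeof cdb);
    }

    /* FC-NVMe path: the native NVMe submission entry travels over the
     * fabric directly; no SCSI translation on either side. */
    static void submit_via_nvme(uint64_t lba, uint16_t blocks) {
        struct nvme_sqe sqe = { 0x02 /* NVMe Read */, lba,
                                (uint16_t)(blocks - 1) /* 0-based count */ };
        send_frame("FC-NVMe", sizeof sqe);
    }

    int main(void) {
        submit_via_scsi(2048, 8);
        submit_via_nvme(2048, 8);
        return 0;
    }

Removing that encode/decode step eliminates CPU cycles and latency on both sides of every I/O, which is the benefit Vikram describes.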

You are effectively taking a technology that was initially focused on driving latency and performance within a server and extending it out of the box to gain these additional benefits. We recently demonstrated the ability to run FC-NVMe, as well as traditional Fibre Channel Protocol (FCP) traffic, simultaneously on existing fabrics.

When we talk about developing a new technology, it’s usually, “Hey, here’s my new thing. Oh, by the way, to get this you have to go buy a whole bunch of new stuff.”

What QLogic is doing is layering this functionality onto the infrastructure that’s already in place. It effectively comes for free.

We are pretty excited about that. We have gotten a lot of interest from our OEM customers, and I suspect that over the course of the next year, as this technology starts getting in front of end users via our OEM customers, they will find it even more attractive. Again, there’s everything to gain and nothing to lose.

In Part I of this series, we took a look at why all-flash arrays are driving the need for 32Gb Fibre Channel.

In the third and final installment of this interview series, Vikram reveals which new FC HBA features service providers are most eager to see and use.


About Jerome M. Wendt

Jerome Wendt is the President and Founder of DCIG, LLC, an independent storage analyst and consulting firm. He founded the company in November 2007.

