In this fourth and final part of our interview series with GreenBytes CEO Bob Petrocelli, we hear about a three-second failover between canisters used in Solidarity, a solid-state storage array solution. If you’re not looking, says Petrocelli, you could miss the failover.
Ben: So, since most of your stack is software, it’s really not too big of a problem to be doing software updates on it. But I know you can run into a bit of an issue if there’s ever a bug with the ASICs [application-specific integrated circuits]. Have you guys experienced any of that yet?
Bob: Firmware updates for sure. It’s actually a lot easier to do on an HA [high-availability] box, because you can run on one canister and then do an update on the other canister. And then basically do a failover.
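[Editor’s note: the update-then-fail-over sequence Bob describes maps to a simple orchestration loop. Below is a minimal Python sketch of that sequence; the Canister class and its methods are hypothetical stand-ins for an array’s management interface, not GreenBytes’ actual tooling.]

```python
"""Illustrative sketch of a rolling firmware update on a two-canister HA pair.
The Canister class and its methods are hypothetical, not a real management API."""
import time


class Canister:
    def __init__(self, name: str, active: bool):
        self.name = name
        self.active = active

    def failover_to_peer(self) -> None:
        # Hand the active role to the peer canister; Petrocelli reports
        # this window is roughly three seconds on Solidarity.
        print(f"{self.name}: failing over to peer ...")
        time.sleep(3)  # stand-in for the observed failover window
        self.active = False

    def apply_firmware(self, image: str) -> None:
        # Only flash a canister that is not currently serving I/O.
        assert not self.active, "never flash the canister serving I/O"
        print(f"{self.name}: applying {image}")


def rolling_update(active: Canister, standby: Canister, image: str) -> None:
    """Update both canisters without an outage: flash the standby first,
    fail over to it, then flash the now-idle former active canister."""
    standby.apply_firmware(image)
    active.failover_to_peer()
    standby.active = True
    active.apply_firmware(image)


rolling_update(Canister("canister-a", True), Canister("canister-b", False), "fw-2.1.bin")
```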
In fact, what is interesting, I will share a bit of info from when we were testing the failover benchmark. The standard HA box, which has magnetic drives in it, has a 10-gig iSCSI failover time of about 20 seconds, which is about par for the course. That isn’t bad; that’s good.
Ben: I laugh ’cause it’s pretty decent.
Bob: Yeah, that’s decent.
On this unit, I was running an ESX server just earlier in the week. It had four VMs [virtual machines] running Iometer full tilt on the box. And they were running the way a customer would run it: NFS [network file system], a virtual disk provisioned against that, and Iometer against that. No fancy-pants shortcuts going around VMware.
And I get failover between the canisters in three seconds. I could not even get my performance graph back up on the other canister quickly enough.
So when you’re dealing with such fast response times through the entire I/O [input/output] chain, I’m not sure that you would even notice it. The failover happened so quickly that I kind of forgot my VMs were also running off the same box.
And I was logged in over remote desktop, still interacting with them. So there was no interruption of service.
I think if it had taken 20 seconds, I might have noticed. But there was no interruption; Iometer paused for, again, about three seconds before it continued.
So we are pretty optimistic that the user experience, as we start to drop these in for the things people like to do first, is going to be favorable. Of course, they are going to want to fail it over, and when they see it fail over … they’re going to miss it if they turn around.
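[Editor’s note: the three-second pause Bob observed can be bounded from the client side by timing synchronous writes and recording the longest gap between completions. The sketch below illustrates that kind of measurement; the mount path, probe cadence, and 4 KiB write size are assumptions, and a real test would use a load generator such as Iometer, as in the interview.]

```python
"""Minimal sketch of measuring a failover pause from the initiator side:
issue small timed writes against a volume backed by the array and report
the longest gap between completions. Path and cadence are illustrative."""
import os
import time

PATH = "/mnt/array/probe.dat"   # assumed mount point on the array under test
INTERVAL = 0.1                  # seconds between probes
DURATION = 60                   # total test length in seconds


def probe_stalls() -> None:
    longest = 0.0
    last = time.monotonic()
    deadline = last + DURATION
    with open(PATH, "wb", buffering=0) as f:
        while time.monotonic() < deadline:
            f.write(b"x" * 4096)      # 4 KiB synchronous write
            os.fsync(f.fileno())      # force it through to the array
            now = time.monotonic()
            longest = max(longest, now - last)
            last = now
            time.sleep(INTERVAL)
    # During a canister failover, the interview suggests this would read
    # roughly 3 s on Solidarity versus roughly 20 s on the magnetic HA box.
    print(f"longest write-to-write gap: {longest:.1f}s")


probe_stalls()
```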
Ben: What kind of rack real estate should users expect?
Bob: We are currently shipping a 3U unit with 16 drives. In this particular unit, all of the drives are three-and-a-half inch. The RAM-based drive has to be that big because it has got a lot stuffed in it.
The SSDs [solid-state drives] are interesting because they’re actually dual-controller drives that use dual SandForce 1550s, a well-proven controller. And then they have logic in them to make them SAS-2 [Serial Attached SCSI-2]. So … they take two SAS-1 controllers, double the bandwidth, and present a SAS-2 interface.
You are going to see a shift over the next year or two to more inexpensive flash technologies. That will lead, once people start to figure this out, to more intelligent up-front I/O that has much bigger caches to absorb the writes. And then, frankly, much cheaper solid-state drives. … They are not consumer [drives].
By cheaper I do not mean cheap; I mean less expensive [drives] that are designed to absorb a lot of reads. They write fine; they just do not have a big write duty cycle, and that is what we are aiming at.
We think it is a game changer, frankly, because the cost of magnetic drives has gone up and their availability has turned into a spot market. When we build magnetic systems, we basically buy the drives on the spot market and then charge our customers on a per-drive basis. It makes a lot of sense to do what we are doing.
Ben: Because that is a 3U unit with two canisters, are you looking at a larger model that can handle more? I know that is a lot of IOPS [input/output operations per second] for a medium-sized company. But when it comes to an enterprise–
Bob: There are two expansion strategies. One is a JBOD [Just a Bunch Of Disks] that plugs right into it. That doubles your I/O. That’ll be announced shortly after the base product is announced.
We are just qualifying that hardware now. Basically, it brings the second canister fully active, because right now that canister plays more of a supporting role.
The second one is a 4U model, which holds 24 drives. I expect that model to be available in the late-spring to summer timeframe. It just started going through mechanical engineering with our hardware partner a few weeks ago. It is definitely bigger.
Ben: That was my main question: where do you see that double-headed piece moving?
Bob: When you think about it, what is the difference whether it is in a canister or in a 1U or 2U head? It’s identical whether the connections are in a midplane or external; it’s the same software. So it is really a question of market demand and where we can play.
In Part I of this series, GreenBytes CEO Bob Petrocelli gives us some background on how forays into SSD and the replacement of magnetic drives led to the development of Solidarity, a solution that’s got people talking.
In Part II of this series, GreenBytes CEO Bob Petrocelli discusses the architecture of Solidarity and what differentiates it from competitive SSD solutions.
In Part III of this interview series, GreenBytes CEO Petrocelli shares what he refers to as some of the “dirty little secrets” about some hardware that was cleverly repurposed to give Solidarity an edge in compression.