Solidarity’s Dirty Little Secrets; Interview with GreenBytes CEO Bob Petrocelli Part III

In this third part of our interview series with GreenBytes CEO Bob Petrocelli, we learn about some of the advantages of using solid-state drive (SSD) technology and how Solidarity’s use of SSD differs from others’ implementations of it. As well, Petrocelli divulges what he calls a “dirty little secret” about some hardware that was cleverly repurposed to give Solidarity an edge in compression.

Ben: So, what makes Solidarity different from other SSD solutions?

Bob: I talked about the RAM SSD. You get them for free. Our deduplication engine is capable of deduplicating 100,000 blocks per second. So the deduplication engine is not a bottleneck in this case as we’re using relatively inexpensive flash and fronting it with good optimization technology.

This is where our difference starts to emerge when you start factoring in even modest effects of optimization. What we see with VMs and deduplication is that you do not have to worry about licensing all of these storage add-ons on the front end. Basically, the more VMs you have, the more copies of the base operating system you have, and we may achieve up to a 92 percent data reduction on that front.

Then compression is also extremely effective. It’s hard to gauge exactly, and we take a very conservative path. But if you are running a large number of virtual machines and their client loads against this unit, you’re probably going to see a five-times data reduction overall, which is a very modest estimate. You’re probably going to do better than that.
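To make the claim concrete, here is a quick sketch of what a combined data-reduction factor means for effective capacity. The function name and the 5x figure are purely illustrative, taken from the conservative estimate above:

```python
# Illustrative only: effective-capacity arithmetic for a combined
# deduplication + compression data-reduction factor.
def effective_capacity_tb(raw_tb: float, reduction_factor: float) -> float:
    """Logical data that fits on `raw_tb` of physical storage
    given a combined data-reduction factor."""
    return raw_tb * reduction_factor

# A 60 TB unit at the conservative 5x estimate holds ~300 TB of logical data.
print(effective_capacity_tb(60, 5))  # -> 300.0
```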

The advantage of being all solid state is that we tend to measure our IOPS [input/output operations per second] in 32K chunks, because you have to look at real-world IO sizes. We’re seeing 32K IOs in the range of 30,000 reads and writes simultaneously.

What’s nice about having a box like this is you don’t have to worry about the quality of service and provisioning for all these edge cases that you have with magnetic disks. So you can basically put the box up there, plug it into your switch, and then figure out what workload you want to throw at it. It’s able to absorb high-random-write workload and high-random-read workload about equally because of the RAM front end.

Ben: You’ve got a hardware compression engine. Are you using that compression engine for both the reads and the writes?

Bob: Yes, of course. You get the advantage of having three times the effective capacity as well.

Ben: That’s the beauty of a system like this: you’ve got such an amount of caching that in an area like VDI, where you do have a lot of redundancy, you can really get some pretty significant IOPS on reads.

Bob: What’s interesting with VDI is a lot of people have said VDI has turned the read-write equation on its head, where they say it’s 70 percent write, 30 percent read.

But that’s really an artifact of thin clones. If you don’t use thin clones and you just use ordinary fat clones, just regular images, it actually becomes more of a 50/50 load.

It actually improves the balance over your network to not use thin clones. They’ve done a lot of measurements and found that thin clones are impairing the balancing of the IO over the network because they cause a lot of data unpacking to occur when you access them.

We think that a unit like Solidarity allows you to run out of storage space for user data per desktop before you run out of IO. So you’re really limited only by how much user space you want to give each desktop before you have to add more units onto your switch. That’s an alternative mindset from just having to add more spindles.

We’re pretty excited about it because one could have a unit like this with 60 terabytes and effectively host 6,000 to 7,000 VDIs, maybe more, including the user space. You end up with a cost of maybe $150,000 for all that storage. That’s a pretty good deal.
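As a quick sanity check on those figures, the per-desktop cost works out as follows (illustrative arithmetic only, using the numbers quoted above):

```python
# Back-of-envelope cost per VDI desktop from the quoted figures.
unit_cost = 150_000  # dollars, quoted for the 60 TB unit

for desktops in (6_000, 7_000):
    per_desktop = unit_cost / desktops
    print(f"{desktops} desktops -> ${per_desktop:.2f} per desktop")
# -> roughly $21 to $25 of storage cost per hosted desktop
```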

Ben: You said that you are mostly dealing with 32K read blocks. After you compress and deduplicate the data within that block, that block size could be a few bytes, anywhere up to still being 32K, right?

Bob: Yes. We actually write a variable block. We only write a block that’s big enough to hold the data. If it compresses down to 8K, then we write an 8K block. We write the nearest power of two anyway; we don’t write some crazy number. We write the nearest power of two that the block will fit in. And if the block compressed down to basically nothing, or deduplicated away entirely, then it’s only the block pointer that gets put down on the disk.
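The sizing rule described here can be sketched in a few lines. This is a hedged illustration, not GreenBytes’ actual code; the 512-byte minimum and 32K maximum are assumed bounds for the example:

```python
# Sketch of variable-block sizing: write the smallest power-of-two
# block that holds the compressed data (assumed bounds: 512 B to 32K).
def block_size_for(compressed_len: int,
                   min_block: int = 512,
                   max_block: int = 32 * 1024) -> int:
    """Return the nearest power-of-two block size >= compressed_len.
    A block that compressed or deduplicated away entirely needs no
    data block at all -- only its pointer is written."""
    if compressed_len == 0:
        return 0  # pointer-only: nothing hits the disk
    size = min_block
    while size < compressed_len and size < max_block:
        size *= 2
    return size

print(block_size_for(7000))  # -> 8192: data fitting in 8K gets an 8K block
print(block_size_for(0))     # -> 0: deduplicated away, pointer only
```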

Ben: So you don’t combine blocks within flash RAM, within an SSD block, then?

Bob: There is a notion of combining, but it’s really combining for IO purposes. It is IO coalescing that happens during a transaction. When a transaction gets put together, that IO strategy is determined.

You might take a whole bunch of 32K blocks and write them as a much larger IO because it’s more efficient. But they still have their separate pointers in the address space; otherwise you could not deduplicate them.
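The coalescing idea above can be illustrated with a minimal sketch: many small logical blocks are packed into one large physical write, but each keeps its own (offset, length) pointer so it can still be addressed and deduplicated individually. The function and pointer layout here are assumptions for illustration only:

```python
# Illustrative IO coalescing: pack small blocks into one large write
# while preserving a separate pointer for each original block.
def coalesce(blocks: list[bytes]) -> tuple[bytes, list[tuple[int, int]]]:
    """Concatenate blocks into a single IO buffer; return the buffer
    plus (offset, length) pointers for each original block."""
    buf = bytearray()
    pointers = []
    for b in blocks:
        pointers.append((len(buf), len(b)))  # where this block lands
        buf.extend(b)
    return bytes(buf), pointers

big_io, ptrs = coalesce([b"A" * 4, b"B" * 8, b"C" * 2])
print(ptrs)  # -> [(0, 4), (4, 8), (12, 2)]
```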

Ben: Understood. So you’ve actually got two blades being inserted into a larger appliance, is that the case?

Bob: Yes. Those are the high-availability controllers, and those are hot-swappable.

Ben: The power supplies are on the larger appliance, is that the case?

Bob: Yes. There are actually two chassis: an A chassis and a larger B chassis. … Both of them have power supplies in the appliance that swap separately. They also both have hot-swap PCIE [peripheral component interconnect express] trays for the compression accelerator cards, which look like little cooling grates; they are actually covers for the PCIE trays. So each canister needs its own accelerator card.

Ben: The GZIP ASICs [application-specific integrated circuits] are custom. Do you guys foresee any supply chain issues with using custom ASICs?

Bob: The thing about the GZIP ASICs is they were not designed for us. The dirty little secret is we’re buying them because they were designed for the web server market. They were designed to compress web traffic. We repurposed them and put a driver together to use for storage.

There are about three companies that make very similar cards with similar performance attributes, so we actually have a dual source on those, and a couple of the companies are very healthy. This model has a canister-based design. A little later in the year we are going to introduce a dual-headed design for larger deployments.

In Part I of this series, GreenBytes CEO Bob Petrocelli gives us some background on how forays into SSD and the replacement of magnetic drives led to the development of Solidarity, a solution that’s got people talking.

In Part II of this series, GreenBytes CEO Bob Petrocelli discusses the architecture of Solidarity and what differentiates it from competitive SSD solutions.

In Part IV of this interview series, GreenBytes CEO Bob Petrocelli talks about Solidarity’s failover response, including a failover response time of merely three seconds between canisters measured during testing. 
