Riverbed Dedupes Data Domain; Managing Encrypted Data Archives for 100 Years: Final Insights from Fall SNW 2008

While the fall Storage Networking World (SNW) in Dallas, TX, wrapped up well over a week ago, I still have a fair number of notes and comments from my briefings with the companies I met that I want to share in the form of a blog entry. So before the spring 2009 SNW is upon us, here are some of my final thoughts from those meetings:

One of the more interesting conversations I had was with John Martin, VP of Product Management at Riverbed Technology. For those of you unfamiliar with Riverbed, its Steelhead® appliances provide WAN acceleration to improve application performance across corporate WANs. As part of the underlying secret sauce in these appliances, Riverbed uses compression and deduplication technologies (among others) to accelerate application performance. That much is fairly well known. What is not so well known is that Riverbed has seen instances where it improved data reduction rates by 30 to 70% on data that was already deduplicated, and it has specifically seen these results when testing with Data Domain's appliances. (Customers may see similar data reductions when Riverbed's appliances are used in conjunction with other deduplicating appliances, but Riverbed has specifically done tests using Data Domain where it has seen these types of reductions.)

Riverbed found that it is able to dedupe previously deduped data because it uses a variable-length rather than a fixed-block method of deduplication (fixed-block is the method used by a number of deduplicating appliances). While almost any deduplication approach will decrease the amount of data that companies store to disk, the type of deduplication technology used for replication arguably becomes more important, since it can further reduce the amount of network bandwidth that companies need when replicating the data.

In cases where companies are replicating data that was previously deduplicated using a block-based method, they may see additional benefits when using an appliance such as Riverbed's Steelhead that uses variable-length deduplication. Riverbed's Martin did emphasize that they had seen this only in a few cases when testing with Data Domain's appliances and that they did not see the same benefits every time. That said, companies looking to replicate previously deduped data to another site could benefit from appliances such as Steelhead that use a dedupe technology different from the one used to initially deduplicate the data.
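
To illustrate why variable-length (content-defined) chunking can find duplicates that fixed-block chunking misses, here is a minimal Python sketch. Neither Riverbed nor Data Domain publishes its chunking internals, so the rolling checksum, chunk sizes, and boundary rule below are generic stand-ins, not either vendor's actual algorithm.

```python
# Generic sketch of fixed-block vs. variable-length (content-defined)
# chunking; NOT Riverbed's or Data Domain's actual algorithms.

def fixed_chunks(data: bytes, size: int = 8) -> list[bytes]:
    """Cut the stream into fixed-size blocks at byte offsets 0, 8, 16, ..."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data: bytes, mask: int = 0x07) -> list[bytes]:
    """Cut wherever a rolling checksum over the last 8 bytes matches a
    pattern, so boundaries are determined by content, not byte offsets."""
    chunks, start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) + byte) & 0xFF  # depends only on last 8 bytes
        if rolling & mask == mask:                # boundary condition met
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

original = b"the quick brown fox jumps over the lazy dog " * 20
shifted = b"X" + original  # a single byte inserted at the front

# Fixed blocks: the insertion shifts every boundary, so almost no chunks match.
print(len(set(fixed_chunks(original)) & set(fixed_chunks(shifted))))
# Content-defined boundaries realign shortly after the insertion,
# so most chunks still match and can be deduplicated.
print(len(set(variable_chunks(original)) & set(variable_chunks(shifted))))
```

The point of the demo is that a single inserted byte shifts every fixed-block boundary that follows it, while content-defined boundaries resynchronize a few bytes past the insertion, leaving most chunks intact to dedupe.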

My meeting with Open-E, a provider of NAS software for white boxes, was of note for a couple of reasons. Open-E provides Linux-based NAS software targeted at SMBs that are looking for affordable NAS software that can run on off-the-shelf server hardware (the starting price for the software, which supports 16 TB of capacity, is $1,400). This is noteworthy not because of its price but because Open-E claims it already has 10,000 installations of its software worldwide.

Also of note is how it competes and differentiates itself from Microsoft Storage Server (its primary competitor in this market space). The latest version of its software now natively supports an Active-Passive failover configuration without the need for additional clustering software, which indicates that even SMBs are seeing a need for high availability in their environments. However, SMBs considering this solution need to weigh that it is a Linux-based appliance and, if they are already using Windows Active Directory, how well such an appliance plays with AD's security structure.

The focus of my conversation with Hitachi Data Systems was primarily around its new AMS 2000 family of storage systems, targeted at the midrange, which now offer symmetric Active-Active controllers. As a former storage administrator and engineer, I have sometimes wondered what has taken companies so long to offer this type of high-end functionality in the midrange space. (Anyone who has ever spent time mapping and balancing LUNs between specific controllers on midrange arrays knows what I am talking about.) Not only does this simplify the back-end management of these storage systems, it opens the door for companies to use any path management software on the host system, since the software no longer needs to talk to the back-end storage system to switch LUNs from one controller to another should a path go offline.
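
To make that contrast concrete, here is a toy Python model of the two controller designs. The class names, the ownership-move step, and the behavior shown are generic illustrations of asymmetric versus symmetric architectures, not a description of the AMS 2000 internals.

```python
# Toy model of asymmetric vs. symmetric controller designs; a generic
# illustration, not a description of the AMS 2000 internals.

class AsymmetricArray:
    """Each LUN is owned by one controller. I/O arriving at the other
    controller forces an ownership change first, which is why host path
    management software has to coordinate with the array on failover."""
    def __init__(self) -> None:
        self.owner = {"lun0": "controller_A"}

    def read(self, lun: str, path: str) -> str:
        if self.owner[lun] != path:
            # Extra array round trip before the failover I/O can proceed.
            print(f"moving {lun} ownership to {path}")
            self.owner[lun] = path
        return f"{lun} data via {path}"

class SymmetricArray:
    """Any controller services any LUN, so the host multipath driver can
    simply retry on a surviving path with no array coordination."""
    def read(self, lun: str, path: str) -> str:
        return f"{lun} data via {path}"

# Suppose the path to controller_A fails and the host retries on controller_B:
print(SymmetricArray().read("lun0", "controller_B"))   # succeeds immediately
print(AsymmetricArray().read("lun0", "controller_B"))  # ownership move first
```

In the symmetric case the host-side software only has to pick another path, which is why any generic multipathing driver will do.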

One final thought that came out of this briefing, and the one I will close this blog entry on, had to do with managing encrypted archived data stores. We were talking about a recent development in which RSA Security's Shamir has supposedly developed an algorithm that could break AES-256 encryption, which DCIG has discussed in an earlier blog posting. This prompted a comment from Eric Hibbard, HDS's Sr. Director of Data Networking Technology, who is involved with a SNIA subcommittee that is developing best practices for managing 100-year archives. One of the challenges it is wrestling with now is this exact issue. When data is encrypted and stored on media such as optical, how do companies deal with situations where an algorithm that can decrypt previously encrypted data emerges 10, 20 or more years after the data was originally encrypted? Right now there are no good answers to that question.
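
One partial mitigation that comes up in discussions like this is crypto agility: tagging every archived object with the algorithm used to encrypt it, so a migration job can find and re-encrypt holdings if that algorithm is later broken. The sketch below is my own illustration of that idea; the record format, algorithm names, and cipher stand-in are all hypothetical, it is not a SNIA recommendation, and it does nothing for copies on write-once media or copies that have left the archive's control, which is part of why the problem remains unsolved.

```python
# Sketch of algorithm-tagged archive records and a re-encryption pass.
# The record format and policy list are hypothetical, and xor_cipher is
# a toy stand-in for real encryption; do not use any of this as-is.

DEPRECATED_ALGORITHMS = {"LEGACY-CIPHER-1"}  # hypothetical policy list

def xor_cipher(data: bytes, key: int) -> bytes:
    """Toy reversible transform standing in for real encrypt/decrypt calls."""
    return bytes(b ^ key for b in data)

# In practice keys would live in a key manager; they are inline here
# only to keep the sketch self-contained.
archive = [
    {"id": "doc-001", "algorithm": "LEGACY-CIPHER-1", "key": 0x2A,
     "ciphertext": xor_cipher(b"tax records 1998", 0x2A)},
    {"id": "doc-002", "algorithm": "MODERN-CIPHER-1", "key": 0x17,
     "ciphertext": xor_cipher(b"board minutes 2008", 0x17)},
]

def migrate(records, new_algorithm="MODERN-CIPHER-1", new_key=0x55):
    """Re-encrypt any record whose algorithm is on the deprecated list."""
    for rec in records:
        if rec["algorithm"] in DEPRECATED_ALGORITHMS:
            plaintext = xor_cipher(rec["ciphertext"], rec["key"])  # decrypt
            rec["ciphertext"] = xor_cipher(plaintext, new_key)     # re-encrypt
            rec["algorithm"], rec["key"] = new_algorithm, new_key
            print(f"re-encrypted {rec['id']} under {new_algorithm}")
    return records

migrate(archive)
```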
