
Walking and Talking at Fall SNW 2009: HDDs are Dead; A Formula for the Right Amount of Cache for Databases; For Sale Sign on Storwize?

In today’s blog I simply wanted to recap some of the tidbits of information I picked up while walking and talking with various folks at the Fall 2009 SNW show, as well as to comment on some interesting developments at a couple of companies.

First, in a meeting I had this past Wednesday with Brendan Howe, NetApp’s general manager of the V-Series, he conceded that high performance hard disk drives (HDDs) are dead. While that was a statement made multiple times by many SSD providers during SNW, hearing it come from NetApp made it a little more real. After all, it is one thing when an SSD provider with 0% of the enterprise storage market proclaims HDDs dead. It is quite another when an enterprise provider admits the same thing.

In another conversation on a separate topic, this one with Marc Crespi, ExaGrid Systems’ VP of Product Management, I had the chance to ask him about EMC Data Domain’s recent announcement regarding its 180:1 fan-in replication feature and why ExaGrid is not making more noise around replication. He told me that in the accounts ExaGrid is in, they are simply not seeing any demand for it.

He said that the largest replication deployment ExaGrid currently has in place that it is aware of is a 9:1 fan-in ratio, and even those are pretty rare, with 3:1 and 4:1 fan-ins more typical. While the 180:1 fan-in ratio that Data Domain is promoting sounds impressive, he is unclear about exactly what business problem Data Domain is solving, since he was not aware of any companies that need fan-in ratios that large.
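To put those fan-in numbers in perspective, here is a minimal sketch (my own illustration, not anything published by Data Domain or ExaGrid) of what a fan-in limit implies: it caps how many remote-site appliances can replicate into a single central target, so the number of central targets you need is just the site count divided by the limit, rounded up.

```python
import math

def central_targets_needed(remote_sites: int, fan_in_limit: int) -> int:
    """Minimum number of central replication targets required when each
    target can accept at most `fan_in_limit` remote appliances."""
    if remote_sites < 0 or fan_in_limit < 1:
        raise ValueError("need remote_sites >= 0 and fan_in_limit >= 1")
    return math.ceil(remote_sites / fan_in_limit)

# A company with 180 remote offices needs 20 central targets at a 9:1
# limit, but only one at a 180:1 limit -- which is the scale of customer
# the 180:1 feature seems aimed at.
print(central_targets_needed(180, 9))    # -> 20
print(central_targets_needed(180, 180))  # -> 1
```

As the numbers suggest, the 180:1 ratio only matters for organizations with on the order of a hundred or more remote sites, which is consistent with Crespi’s point that few companies have such a need.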

His experience was that any company needing fan-in ratios that large is probably a large company with lots of remote offices. In those cases, it would probably be better served by deduplicating backup software such as CommVault’s Simpana or Symantec’s NetBackup PureDisk than by putting deduplicating appliances in all of those remote sites.

In speaking to Riverbed Technology about this topic, they were equally perplexed as to what business problem Data Domain was trying to solve. Riverbed finds that in many of the accounts it is in, companies are centralizing their data protection and do not want or need that many appliances replicating data back to a central site.

The only possible use case we could come up with is for managed service providers (MSPs) that want to offer online backup without requiring their clients to deploy the MSP’s backup software. By using something like Data Domain, they could deploy small appliances in customer accounts and replicate the data back to a central site where they would host a larger Data Domain DD880.

One of the more enlightening pieces of information I came away with from this conference was an answer to the question of what is the “right” amount of cache an organization should have in front of a storage system hosting a high performance database. I had never heard that such a percentage existed, and I have been around storage for a number of years.

Well, apparently no one else in the industry knew either, because after Dataram’s Chief Technologist, Jason Caulkins, told me the percentage (the cache should be 5% of the size of the database), I started asking different users and vendors at SNW if they knew the answer. No one knew, which is rather amazing.

You would think storage vendors would be all over that percentage as a mechanism to justify selling more cache on their systems, because it is not as if this is new information. Apparently a paper was written on this topic nearly 30 years ago that documented why 5% is the right number (i.e., you get the most bang for your buck, as beyond 5% the gains in database performance decelerate). I asked Jason to send me a copy of the paper, so hopefully I can post a link to it sometime soon.
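The rule of thumb itself is simple arithmetic. Here is a small sketch of what Caulkins’ 5% guideline works out to in practice (the function name and the sample sizes are my own, purely for illustration):

```python
def recommended_cache_gb(database_size_gb: float, fraction: float = 0.05) -> float:
    """Rule-of-thumb cache sizing: provision cache equal to ~5% of the
    database's size, the point past which performance gains reportedly
    decelerate."""
    if database_size_gb < 0:
        raise ValueError("database size must be non-negative")
    return database_size_gb * fraction

# A 1 TB (1000 GB) database would call for about 50 GB of cache;
# a 10 TB database, about 500 GB.
print(recommended_cache_gb(1000))   # -> 50.0
print(recommended_cache_gb(10000))  # -> 500.0
```

Note this is a starting-point heuristic, not a guarantee: the right fraction for any given workload still depends on its access pattern and working-set size.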

The reason that percentage is significant in Dataram’s case is that it released an SSD appliance a couple of weeks ago called the XcelaSAN. The appliance is designed to sit in front of an FC storage array and serve as an economical front end cache for it. Configured this way, you get most of the benefits of SSD without having to buy SSD for your entire database application.

Finally, while walking down a hallway I bumped into the new CEO of Storwize, Ed Walsh, who came on board about a month ago. However, after doing a quick review of Ed’s background included in the Storwize press release, I am inclined to believe he was rushing off to put a for sale sign on Storwize’s front door. Consider:

  • While Walsh was CEO at Avamar, it was acquired by EMC in 2006.
  • Virtual Iron, another company that Walsh led, was acquired by Oracle in May 2009.
  • Prior to that, Walsh worked in sales and marketing at CNT, which was acquired by McData in 2005.

If any further evidence is needed, consider that just last month Storwize announced a partnership with Hitachi Data Systems. If that isn’t an omen that someone other than HDS is about to buy them, I don’t know what is.

That’s it for this week. Check out the website next week as I hope to post a blog on some of the issues associated with SSDs and how vendors are, in some cases, using them to their advantage. Have a good weekend!

