Almost 3 years ago now, Robin Harris over at Storagemojo.com started posting the list prices for different vendors' products so customers have at least a starting point when comparing product prices. Though I suspect the list prices associated with these vendors' offerings have changed since he originally posted some of them, what I found specifically remarkable is how difficult it is to ascertain what a deduplication solution will cost an organization. The difficulty in pricing deduplication solutions had less to do with making sure you were getting deduplication than with making sure your configuration included all of the options that your environment needs, such as failover, NAS or VTL interfaces, data retention periods or replication, so that you could effectively compare different solutions.
Pricing deduplication solutions becomes especially difficult for enterprise organizations introducing deduplication into their environments for the first time. They may not have all of the information they need to properly size an appliance, so they end up making an educated guess as to exactly how much data they are going to back up, how long they will retain it and what the throughput rates during backup windows will be.
This was recently the case for Memorial Care Medical Center in Long Beach, CA, as reported by SearchDataBackup.com. Memorial Care discovered after the fact that it had implemented an undersized deduplication solution. It was backing up 20 TB of data nightly but the deduplication solution only had 13 TB of raw capacity, so after only a month it ran out of space. When this occurred, Memorial Care elected to purchase another deduplication solution, which added both cost and complexity to its environment since it now had multiple backup targets and more devices to manage.
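The arithmetic behind a shortfall like that is easy to sketch. As a rough back-of-the-envelope illustration (the retention period and deduplication ratio below are assumptions for illustration, not Memorial Care's actual figures), the raw capacity an appliance needs depends on the nightly backup volume, how long each backup is retained, and the deduplication ratio the data actually achieves:

```python
def required_raw_capacity_tb(nightly_backup_tb, retention_days, dedupe_ratio):
    """Rough raw-capacity estimate for a deduplicating backup target.

    Assumes every nightly backup is retained for `retention_days` and that
    deduplication shrinks the stored footprint by `dedupe_ratio` (e.g. 10
    means 10:1). Real sizing also has to account for change rates, data
    growth and headroom; this is only the back-of-the-envelope version.
    """
    logical_tb = nightly_backup_tb * retention_days  # total protected data
    return logical_tb / dedupe_ratio                 # physical space consumed

# With the article's 20 TB nightly backups, a hypothetical 30 days of
# retention and an optimistic 10:1 deduplication ratio, the appliance
# still needs roughly 60 TB raw, far more than 13 TB.
print(required_raw_capacity_tb(20, 30, 10))  # → 60.0
```

Even with generous deduplication assumptions, a 13 TB appliance behind 20 TB nightly backups leaves no room for retention, which is why the space ran out within a month.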
But Memorial Care Medical Center’s situation is not unique. Doing all of the research necessary to put together an appropriately sized solution is an arduous task and, even then, circumstances can change. I wrote an article a couple of years ago in which the Minneapolis law firm of Winthrop and Weinstine had done its homework, forecast its growth and selected an appropriate solution. But an unexpected court case generated 1.5 TB of new backup data, so after only a month the law firm had to start looking for a deduplication solution with more capacity.
In order for enterprise organizations to avoid these situations as they price and compare different deduplication solutions, they should take the following steps:
- First, factor everything into the price of the deduplication solution. Organizations need to make sure the solution has all of the features that they are going to need and use. Examples include compression, data resiliency, deduplication, failover, number of Ethernet ports, replication, sufficient capacity and performance scalability. As the organizations above found out, overlooking or failing to account for even one of these features can result in an organization needing to replace or purchase another solution much more quickly than it anticipated.
- Second, how much time will it take to manage? Organizations may assume that if they run out of capacity, it is no big deal since they can just add another appliance and keep going. That sounds good on the surface, but the practical ramification is that they need to go through the whole capacity sizing exercise again. Additionally, the ongoing management of multiple appliances becomes more difficult since administrators need to configure and direct specific backup streams to specific appliances. This may require them to constantly and manually balance the load so they do not overload any one appliance with too much backup data, and administrators become less efficient since they have to manage separate appliances.
- Third, how are upgrades and maintenance handled? Again, this will not show up in a quote, but if administrators need to regularly spend time applying upgrades and managing the growth of the appliances over their life, that adds to the total cost of ownership and takes away time that administrators could spend on tasks that add more tangible value to the organization.
In this respect, the NEC HYDRAstor is unique in that it helps organizations avoid most of the time-consuming, information-gathering sizing steps that other products normally require organizations to perform. Yes, organizations still need to determine if they want a feature like replication, but HYDRAstor includes all other fundamental options as standard and minimizes the need to make many of the other upfront decisions that implementing deduplication normally requires.
Since the HYDRAstor is based on a grid storage architecture, organizations can dynamically add more performance or capacity in the form of Accelerator or Storage Nodes at any time. This allows organizations to start with a configuration and price point that matches their initial requirements and which they can then grow as quickly or as slowly as they want.
Most users find understanding and documenting their backup environment complicated enough without having to go through the exercise of gathering all of this information just to make an educated guess about which deduplication appliance to buy. The NEC HYDRAstor eliminates the need for this guesswork in the first place. It also shortens the time that organizations need to price a configuration, eliminates much of the time associated with managing it after it is deployed and takes the risk out of purchasing a solution that will not grow as their environment grows. So while NEC has not quite done away with price lists and feature options, it does make pricing a deduplication solution a manageable exercise for enterprise organizations, so they can more quickly and confidently arrive at a price quote that they can trust and rely upon without hidden “gotchas” waiting for them after deployment.