Four Best Practices for Implementing Next Gen Backup and Recovery; Interview with Dell Software’s General Manager, Data Protection, Brett Roscoe, Part III

By Jerome M. Wendt
November 12, 2014

Today’s next generation backup and recovery tools offer so many options that it can be tough to decide which features to implement. In this third installment of my interview series with Dell Software’s General Manager, Data Protection, Brett Roscoe, we discuss four best practices that organizations should prioritize as they implement next generation backup and recovery tools.
Jerome: There are now a number of technologies available to provide faster recovery time objectives (RTOs), such as snapshots. As these technologies have been introduced into new backup and recovery processes over the last decade, and even more so in the last few years, what do you consider the best practices that companies should implement to align these new backup and recovery capabilities with their internal business processes?
Brett: I am going to give you four. Number one is to leverage high-performance, snapshot-based solutions. I am a big believer that if you are protecting applications, you need to look at application-aware, application-consistent snapshot tools to do that. If you are looking at VMware or Hyper-V, then you need to use agentless tools that work in coordination with the tools native to those environments to provide the kind of hypervisor-level protection you need and minimize disruption in those environments.
At Dell, we’re focused on giving customers the ability to use block-based snapshot technology that provides a high-performance way to capture data. Using block-based technology, once I capture the first image, I am only looking for changes that happen at the block level.
An average change rate might be 10 percent a day. That means only the 10 percent of data that changed since yesterday’s backup gets captured today. I keep everything consistent. I have full snapshots from which to create recovery points, but I only have to move and manage the data that is actually changing. That is the first thing that really drives efficiency in the environment.
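To make that concrete, here is a minimal sketch in Python of how block-level change tracking can work; the 4 KB block size, function names, and SHA-256 hashing are illustrative assumptions, not Dell’s actual implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, purely for illustration

def block_hashes(image: bytes) -> list[str]:
    """Split a volume image into fixed-size blocks and hash each one."""
    return [
        hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(image), BLOCK_SIZE)
    ]

def changed_blocks(previous_hashes: list[str], current: bytes) -> dict[int, bytes]:
    """Return only the blocks that differ from the previous snapshot,
    keyed by block index."""
    changed = {}
    for index, digest in enumerate(block_hashes(current)):
        if index >= len(previous_hashes) or previous_hashes[index] != digest:
            changed[index] = current[index * BLOCK_SIZE:(index + 1) * BLOCK_SIZE]
    return changed
```

At the 10 percent daily change rate Brett describes, changed_blocks would return roughly a tenth of the volume each day, and that is all that has to move over the network or land on disk.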
The second best practice is to make recovery the primary goal. Backup is not the goal; an unrestored backup never helped anybody. Instead, you want to look at how capable you are of meeting your service level agreements (SLAs), and whether you have the RPO/RTO capabilities to do so.
Today, we can move workloads to a point-in-time backup of an application and allow users to be up and running utilizing the last good snapshot of data. Or they can pick a point in time where they know they have good data, and recover a historical data point that may have been accidentally deleted or corrupted in some way.
This gives you the flexibility to choose between a full image-level recovery and a granular recovery point, or between five minutes ago and two days ago. It really eliminates the traditional need to do a full, bare-metal OS and application recovery just to access a single piece of data.
Historically, with some traditional backup applications, you would take a full backup, then mount that full backup somewhere, and then pick through it to find the piece of data you want. But with a snapshot architecture, because our snapshots are application aware, you can go back to any point in time and stand up that snapshot as a virtual machine in any environment. This allows you to run your backup as a fully functional application in a virtual environment both on and off premises during any failure or scheduled downtime.
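As a small illustration of that point-in-time flexibility, the restore logic conceptually just needs the newest recovery point at or before the moment the data was known good. This is a hypothetical Python helper, not a Dell API:

```python
from bisect import bisect_right
from datetime import datetime

def pick_recovery_point(points: list[datetime], known_good: datetime) -> datetime | None:
    """Return the newest recovery point taken at or before the given time,
    or None if nothing that old exists."""
    points = sorted(points)
    i = bisect_right(points, known_good)
    return points[i - 1] if i else None
```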
Recovery becomes the primary goal. You want to look at what your recovery time objective (RTO) is and what your recovery point objective (RPO) is. Dell allows you to have very fast RTOs (minutes) and very tight RPOs (down to five minutes) to make sure you can be up and running with up-to-date copies of your primary data. Additionally, you can verify your backups by running our native verification tools or running your own recovery tests to confirm your ability to recover from a disaster. Many organizations overlook this vital step.
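A basic recovery test can be as simple as restoring a backup to a scratch directory and comparing checksums against the source. The following is one possible sketch in Python; the paths and function names are my own, not Dell’s verification tooling:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[Path]:
    """Compare every file under the source tree against its restored copy;
    return the relative paths that are missing or differ."""
    failures = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file() or sha256_of(src) != sha256_of(restored):
            failures.append(rel)
    return failures
```

An empty list from verify_restore is the evidence that the backup can actually be recovered, which is exactly the step many organizations skip.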
The third best practice is to utilize deduplication and compression technologies. We have come a long way in deduplication and compression. These are no longer new technologies. There are very few vendors out there who do not offer some kind of deduplication and compression. They are very reliable and hardened technologies that, at this point, customers should be taking advantage of.
I talked a little earlier about the way we track blocks and only back up unique blocks. Not only do we do that on the application, but we then compare those blocks to any other blocks in the environment, from any other application or any other server. We thereby reduce and eliminate almost any redundant data across the organization, making it really space efficient for the customer.
This will save you on your storage costs, your network costs, and your infrastructure costs. We get compression ratios of up to 20x. As you can imagine, if I am not moving that data over the network, and I am not having to store it in some back-end disk system, the savings become quite significant.
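To illustrate how global, cross-server deduplication combined with compression yields that kind of reduction, here is a minimal content-addressed block store in Python; the class, its API, and the ratio math are assumptions for the example, not Dell’s engine:

```python
import hashlib
import zlib

class DedupStore:
    """Toy content-addressed store: each unique block is compressed and
    kept exactly once, no matter how many servers or applications send it."""

    def __init__(self) -> None:
        self.blocks: dict[str, bytes] = {}  # digest -> compressed block
        self.raw_bytes = 0                  # total logical bytes ingested

    def put(self, block: bytes) -> str:
        """Ingest a block; store it only if its content is new."""
        self.raw_bytes += len(block)
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = zlib.compress(block)
        return digest

    def reduction_ratio(self) -> float:
        """Logical bytes ingested divided by physical bytes stored."""
        stored = sum(len(b) for b in self.blocks.values())
        return self.raw_bytes / stored if stored else 0.0
```

Backing up many servers that share an operating system image and common application binaries is what pushes the ratio toward figures like 20x: every shared block lands in the store exactly once.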
The last best practice I will talk about is the cloud. You want a solution that can help you take advantage of the cloud. Whether that’s a private cloud or a public cloud, you want to figure out a way to use it.
Our managed service provider (MSP) customers want a way to set up their customers to utilize their back-end IT infrastructure, perhaps with a local caching capability in each customer’s environment. If you are an end-user customer, then you want to figure out how to use the cloud in a hybrid fashion.
In other words, how do I keep some of my data on site, but the rest in the cloud? Maybe that’s the bulk of my data, or maybe it’s data that I want to retain in another location without having to pay for colocation. If I don’t want to pay for a secondary site, how do I utilize the cloud to either replicate or move data off site over time, as a tertiary storage tier that’s very cost effective? Once again, it goes back to that OPEX versus CAPEX equation.
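One way to picture that hybrid approach is a simple age-based tiering policy, sketched below in Python; the two-week local retention window and tier names are assumptions for the example:

```python
from datetime import datetime, timedelta

LOCAL_RETENTION = timedelta(days=14)  # assumed window for the example

def tier_recovery_points(points: dict[str, datetime], now: datetime) -> dict[str, str]:
    """Keep recent recovery points on local disk for fast restores;
    age older ones out to cheaper cloud storage."""
    return {
        point_id: "local" if now - created <= LOCAL_RETENTION else "cloud"
        for point_id, created in points.items()
    }
```

Here the cloud tier plays the role of the secondary or tertiary site without the CAPEX of owning one.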
Those are the four areas that customers should really look at in terms of a next generation backup solution, and the best practices around how to implement them.
In Part I of this interview series, Brett and I discussed the biggest backup and recovery challenges that organizations face today.
In Part II of this interview series, Brett and I discussed the imperative to move ahead with next gen backup and recovery tools.
In Part IV of this interview series, Brett and I will discuss the main technologies in which customers are currently expressing the most interest.
In Part V of this interview series, Brett and I examine whether or not one backup software product can “do it all” from a backup and recovery perspective.
In Part VI of this interview series, Brett and I discuss Dell’s growing role as a software provider.
In Part VII of this interview series, Brett provides an in-depth explanation of Dell’s data protection portfolio.
In Part VIII of this interview series, Brett and I discuss the trend of vendors bundling different but complementary data protection products together in a single product suite.