
A New Benchmark for Backup Performance: Total Time to DR

Determining backup performance has always been difficult for customers to quantify, because there is no real meter or benchmark to consult. Just take a second and think of all the moving parts inside your backup and recovery environment (media servers, clients, databases, email, network, SAN, disk, tape, offsite vaults) – you name it, there is a laundry list of things to examine when trying to determine accurate performance metrics.

In most cases vendors take the absolute best-case scenario when publishing performance metrics and, as anyone who has spent any time managing a backup infrastructure knows, those numbers sound impressive but are rarely realized in production environments because there are too many variables one has to optimize to achieve them.

However, FalconStor has announced something truly distinctive. It has taken a scenario that is more likely to be encountered in most production environments, tested it, and come up with some very attractive performance numbers for its VTL product when used in a 4 Gb FC storage environment in conjunction with Veritas NetBackup's OST API. It is interesting and refreshing to see a vendor publish performance metrics based on common denominators more likely to be found in customer environments, ensuring that if a customer does decide to deploy its VTL, the bar for performance is more likely to go up than down.

The overall metric FalconStor was after is Total Time to DR (Disaster Recovery): the time it takes to back up the data, deduplicate it, replicate it to a remote DR site, and then finally recover the data so it is in an operational state. This metric is all-encompassing, as it takes every aspect of the backup and recovery environment into account when performing a true DR.

As most of you know, most backup metrics are based on just one variable: backup ingest performance. However, this is only one small piece of the overall puzzle; recovery and replication matter far more. It's great to back up data quickly, but if it takes three times as long to recover it, try explaining that to your CIO when a major application goes down and he is standing over your shoulder waiting for the data to be recovered.

One other key metric FalconStor tracked was the Total Time to Protect Data, which includes:

  • Backing up the data
  • Deduplicating the data, and
  • Replicating the data to the remote DR site
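As a rough sketch of how these two metrics relate, they reduce to simple sums of phase durations: Total Time to Protect Data covers backup, deduplication, and replication, while Total Time to DR adds recovery on top. The phase times below are purely illustrative placeholders, not FalconStor's published figures:

```python
# Hypothetical phase durations in hours for a single DR exercise.
# These values are illustrative only, not FalconStor's test results.
phases = {
    "backup": 10.0,       # time to back up the data
    "deduplicate": 4.0,   # time to deduplicate the data
    "replicate": 6.0,     # time to replicate to the remote DR site
    "recover": 8.0,       # time to restore data to an operational state
}

# Total Time to Protect Data: backup + deduplication + replication.
time_to_protect = sum(phases[p] for p in ("backup", "deduplicate", "replicate"))

# Total Time to DR adds the recovery phase on top.
total_time_to_dr = time_to_protect + phases["recover"]

print(f"Total Time to Protect Data: {time_to_protect:.1f} h")
print(f"Total Time to DR:           {total_time_to_dr:.1f} h")
```

Note that in practice phases can overlap (FalconStor's test ran backup and deduplication concurrently), so a straight sum is an upper bound rather than an exact figure.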

The configuration FalconStor chose was standard Dell servers with standard storage (SATA drives) in a Fibre Channel (FC) environment. Using the latest FalconStor VTL software in combination with the OST API from Symantec, it garnered some very good results, detailed below:

The test bed consisted of:

  • 100 TB of production data
  • Single cluster of two FalconStor VTL nodes
  • Four deduplication nodes
  • Backup and deduplication processes were run concurrently

In this configuration, the total time to back up and deduplicate the 100 TB of data was under 14 hours, an average of about 2 GB/sec. When reconfigured to minimize total backup time alone, the two-node VTL cluster achieved 2.8 GB/sec of backup throughput, reducing the backup window to under 10 hours for the same amount of data.
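The published numbers are easy to sanity-check with back-of-the-envelope arithmetic, assuming decimal units (1 TB = 1000 GB), which is what the article's figures imply:

```python
# Sanity-check the published throughput numbers for 100 TB of data.
# Assumes decimal units (1 TB = 1000 GB), matching the article's math.
data_gb = 100 * 1000  # 100 TB expressed in GB

# Combined backup + deduplication run: under 14 hours.
combined_hours = 14
combined_rate = data_gb / (combined_hours * 3600)  # GB/sec
print(f"Combined rate: ~{combined_rate:.2f} GB/sec")  # ~1.98 GB/sec

# Backup-only configuration: 2.8 GB/sec sustained.
backup_rate = 2.8  # GB/sec
backup_window_hours = data_gb / backup_rate / 3600
print(f"Backup window: ~{backup_window_hours:.1f} hours")  # ~9.9 hours
```

Both results line up with the claims: roughly 2 GB/sec for the 14-hour combined run, and just under 10 hours at 2.8 GB/sec.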

Above and beyond the standard configuration of this test, there are more options that not only enhance these performance metrics but also provide more comprehensive protection of your data, including:

  • High-speed tape duplication. This leverages the integration of the NDMP and OST protocols to initiate high-performance tape exports to a physical tape library directly from the VTL while maintaining full consistency of the backup catalog.
  • Host backup software on the VTL. This provides the ability to install third-party backup software directly onto the FalconStor VTL system so organizations can move backup traffic off the SAN and onto the VTL's server bus to increase backup performance.
  • Backup catalog replication. Using FalconStor's NSS functionality, which is built directly into all the FalconStor software packages, a customer can replicate the backup catalog itself and then recover it at the DR site. Organizations can then recover all components of the backup and recovery environment with minimal effort.
  • Platform flexibility. FalconStor VTL software is available as a custom-built appliance, on your own servers, or even as a virtual machine. This offers numerous options when working with any size of location or backup data set.

This set of performance numbers from FalconStor is based on a more relevant, real-world scenario than many I have seen. They help to ensure that when you decide to deploy a VTL solution, there are actual benchmarks that are both meaningful and achievable. FalconStor offers a number of flexible options for its VTL so it can be architected to meet the needs of your environment rather than requiring a complete re-architecture of your backup environment. What was missing until now – and what these performance numbers provide – is a great starting point from which customers can work, regardless of which FalconStor VTL configuration they select, so they can understand the real impact a FalconStor VTL will have on their environment.
