Triggering Event Exposes Painful Realities (and True Costs) of Using Legacy Solutions for Business Continuity

Organizations do not like to think about business continuity for some very simple reasons: it’s costly, it’s complex, and it exposes just how vulnerable companies really are should a disaster occur. So companies tend to live in denial about implementing a business continuity solution until some triggering event forces them to confront the problem head-on.

That’s the situation that recently confronted one financial institution as it faced fairly heavy audit requirements. It needed to prove that it could recover its applications and application data in a time frame that satisfied the business. Though it had a production data center and a DR site in different cities and states, it had no firm idea of how long recovering the production site at its DR site would take, or whether the recovery would even work.

Needless to say, the business found this unacceptable, so the financial institution tasked its senior business continuity architect with creating and delivering an enterprise business continuity solution that met the business’s recovery point objective (RPO) and recovery time objective (RTO) requirements. To establish the scope of the DR project, he started by:

  • Documenting the applications that the business would need to recover at the DR site
  • Documenting the hardware that these applications were using
  • Working with the business units to develop a strategy that would meet the recovery point objectives (RPOs) and recovery time objectives (RTOs) for each application with minimal IT staff
  • Procuring the hardware needed at the DR site to recover these applications

From this study, the architect discovered that the business faced some challenges with its previous DR strategy. Reviewing the results, he established that the business had many applications classified as “Critical,” with RPOs of under four hours. Of those, a large portion were further classified as “Mission Critical,” with more stringent RPOs of less than one hour.

Of these, at least one application had an RPO of five minutes and needed to recover to the last transaction completed just prior to the service disruption. What he found was that even though the financial institution already had a DR site, the best-case scenario for recovering all enterprise applications and bringing them online at the DR site significantly exceeded these recovery objectives.
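The criticality tiers described above amount to a simple classification rule on each application’s RPO. The following sketch illustrates that rule; the thresholds match the ones cited in this case, but the application names and RPO values are hypothetical examples, not drawn from the institution’s actual inventory:

```python
from datetime import timedelta

def classify_by_rpo(rpo: timedelta) -> str:
    """Map an application's RPO to a criticality tier (illustrative thresholds)."""
    if rpo < timedelta(hours=1):
        return "Mission Critical"   # RPO under 1 hour
    if rpo < timedelta(hours=4):
        return "Critical"           # RPO under 4 hours
    return "Standard"               # everything else

# Hypothetical application inventory for illustration only.
apps = {
    "core-banking": timedelta(minutes=5),    # must recover to the last transaction
    "loan-origination": timedelta(hours=2),
    "reporting": timedelta(hours=24),
}

for name, rpo in apps.items():
    print(f"{name}: {classify_by_rpo(rpo)}")
```

In practice the tier then drives the recovery strategy: a five-minute RPO effectively requires continuous replication, while a multi-hour RPO can often be met with periodic snapshots.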

However, the time frame to recover current applications was only the tip of the iceberg in terms of the challenges he faced. Other issues he uncovered included:

  • Insufficient storage capacity at the DR site. The financial institution had adopted a tiered storage strategy, and the legacy storage systems at its primary site were core to day-to-day production. Unfortunately, the DR site lacked sufficient storage capacity, so even if the applications could be recovered from an application server standpoint, they could not run under production workloads.
  • The inability to easily replicate data between different tiers of storage. The business could not use the replication software found on one tier of its legacy storage solution to replicate data to a secondary tier.
  • DR options that were neither easy nor cost-effective to implement. When the architect asked the vendor of its legacy storage systems to present a solution, the vendor gave him two choices:
      1. Buy a lot more of its software and then its professional services to make everything work together.
      2. Reprogram all of the financial institution’s applications to work with its storage in a mainframe environment.

From the architect’s perspective, neither option was viable, especially since the vendor indicated it could take months, if not years, to put either solution into place. He rejected both options outright and began to search for an alternative that the business could implement much more quickly, that would provide more flexibility in its choice of storage solutions, and that would meet the business’s RPOs and RTOs.

It was at this point that the architect turned to HP, the financial institution’s provider of servers and server blades, to see what options it could present. HP responded by introducing him, through its VAR program, to one of its business partners, InMage Systems, and its Scout business continuity software. The architect’s experiences using Scout will be chronicled in an upcoming blog entry.
