We can all get caught up in the hoopla of new and slick storage technology features and lose sight of some of the most important and basic details that keep our storage fabrics up and humming. Among these are the Fibre Channel cabling infrastructure and the distance limitations imposed by each successive increase in FC speed.
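To make the speed/distance trade-off concrete, here is a minimal sketch of a lookup for approximate multimode link limits. The figures are the commonly published FC-PI numbers for OM3/OM4 fiber and are illustrative only; always verify against your optics vendor and cable-plant documentation.

```python
# Approximate maximum multimode link distances (meters) per FC speed,
# based on commonly published FC-PI figures. Illustrative only --
# verify against your transceiver and fiber documentation.
FC_DISTANCE_M = {
    # speed (Gb/s): {fiber grade: approximate max run in meters}
    8:  {"OM3": 150, "OM4": 190},
    16: {"OM3": 100, "OM4": 125},
    32: {"OM3": 70,  "OM4": 100},
}

def max_link_distance(speed_gb, fiber_grade):
    """Return the approximate maximum cable run for an FC speed/fiber pair."""
    return FC_DISTANCE_M[speed_gb][fiber_grade]

# Note how each speed bump shortens the supported run over the same fiber:
for speed in (8, 16, 32):
    print(f"{speed} Gb/s over OM3: up to ~{max_link_distance(speed, 'OM3')} m")
```

The pattern to notice: a fabric wired comfortably within limits at 8 Gb/s may be out of spec after a simple switch upgrade to 16 or 32 Gb/s, with no cable having moved.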
Author: Tim Anderson
Attempting to make your entire SAP environment highly available can be a gargantuan challenge, especially considering the number of moving parts in an SAP landscape. When looking to protect an application and make it highly available, it's common practice to ask the application vendor for a set of best practices and guidelines. SAP's typical response, however, is, "Work with our partners and/or third-party consultants to help you achieve the level of availability you are looking for."
Determining backup performance has consistently been difficult for customers, since there is no single meter or benchmark to consult. Just take a second and think of all the moving parts inside your backup and recovery environment (media servers, clients, databases, email, network, SAN, disk, tape, offsite vaults) – you name it, there is a laundry list of things to examine when trying to derive accurate performance metrics.
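Even without a universal benchmark, a useful starting point is to reduce each job to a single effective-throughput number. A minimal sketch, using a hypothetical job's size and elapsed time (the function name and figures below are illustrative, not from any product):

```python
def backup_throughput_mb_s(bytes_moved, elapsed_seconds):
    """Effective throughput in MB/s for a single backup job."""
    return (bytes_moved / (1024 * 1024)) / elapsed_seconds

# Hypothetical job: 500 GB moved in a 2-hour backup window.
job_bytes = 500 * 1024 ** 3
rate = backup_throughput_mb_s(job_bytes, 2 * 60 * 60)
print(f"Effective rate: {rate:.1f} MB/s")  # ~71.1 MB/s
```

Tracking this one number per client over time won't tell you *which* component is the bottleneck, but a sudden drop tells you *when* to start walking the laundry list above.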
Establishing a solid foundation for Disaster Recovery and Business Continuity plans is a significant challenge for enterprises of all sizes and shapes. Understanding the business value of your organization's data is the first step toward that foundation, as it provides a framework for what is critical, and what is less critical, to the operation of your organization.
Whether creating a new plan or retrofitting an existing Disaster Recovery or Business Continuity plan, it's extremely helpful to have a strict set of goals: not only for executing the DR plan when an actual incident occurs, but also for ensuring that an appropriate test matrix is in place and actually used. Surprisingly, the organizations I visit all seem very dedicated to DR and Business Continuity.
A clustered server environment is only as reliable as the system administrators who maintain it. The challenge they face after configuring and deploying the hardware and software that make up a clustered environment is, "How do we maintain it?" Once a mission-critical application is deployed, most system administrators leave the configuration alone for fear of disrupting it, so crucial details such as patches and configuration changes go uncompleted simply because of the nature of the system. What catches organizations off-guard is that when an event eventually does prompt a failover from one server to another, the failover fails, because small changes have crept into the environment that now prevent it from taking place.
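The "small changes that crept in" are classic configuration drift, and catching drift before a failover exposes it can be as simple as regularly comparing a settings inventory across nodes. A minimal sketch, assuming you can collect key/value inventories (kernel, driver, patch levels) from each node; the node names and settings below are hypothetical:

```python
def find_drift(node_a, node_b):
    """Compare per-node settings dicts; report keys that differ or are missing."""
    drift = {}
    for key in set(node_a) | set(node_b):
        a, b = node_a.get(key), node_b.get(key)
        if a != b:
            drift[key] = (a, b)  # (value on node A, value on node B)
    return drift

# Hypothetical inventories pulled from the two cluster nodes.
primary = {"kernel": "5.14.0-362", "multipathd": "0.8.7", "app_patch": "SP12"}
standby = {"kernel": "5.14.0-284", "multipathd": "0.8.7"}

for setting, (a, b) in sorted(find_drift(primary, standby).items()):
    print(f"{setting}: primary={a!r} standby={b!r}")
```

Run as a scheduled check, anything this reports is exactly the class of quiet divergence that turns a routine failover into an outage.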