Virtualization, consolidation and servers are becoming inextricably linked in the minds of mid-sized organizations as they look to reduce data center footprints and energy consumption while increasing server hardware utilization. Yet what can get overlooked during the consolidation and virtualization of their Windows applications is the development of a corresponding storage strategy. This makes it essential to spell out exactly what an appropriate storage solution for this environment needs to deliver.
Executing on a successful application and server consolidation and virtualization strategy for mid-sized organizations now calls for networked storage. As far back as September 2006, one storage provider found that SAN attach rates for server virtualization environments ran at approximately 70% in enterprise environments. At that time, that provider forecast that as commodity Ethernet infrastructure became more commonplace, network storage attach rates would climb even higher.
Fast forward to today and 1 Gb Ethernet networks are almost ubiquitous among mid-sized organizations. Add to that the forecast from analyst groups such as Forrester Research that mid-sized organizations will virtualize another 25% of their server instances by 2010, and it becomes a near certainty that their use of external storage will increase.
However, it is also a distinct possibility that these same organizations have not quantified the key attributes that their network storage solutions need to possess in order to support their newly virtualized application servers.
Consolidated networked storage solutions must account for the new availability, performance and scalability requirements that server consolidation and virtualization create. Microsoft Exchange, SharePoint, SQL Server and backup servers are just some of the applications that will be virtualized, while organizations will also look to consolidate file and print servers onto a centralized, network-attached storage (NAS) device.
Meeting these various needs calls for storage systems that possess the following three characteristics:
High Availability

The consolidation and virtualization of Windows applications onto just a few physical servers – or even onto one server – means mid-sized organizations are putting all of their “eggs in one basket,” so they have new needs for high availability for both their servers and their storage.
To compensate for this heightened risk on the physical server, server hardware often includes dual power supplies and network cards. In addition, new software features such as Live Migration in the Microsoft Windows Server 2008 R2 Hyper-V release, or VMware vMotion™, enable high availability by dynamically failing applications over from one physical server to another.
These same principles of high availability need to carry over into the storage hardware that is used by these virtualized applications. For example, a storage system such as the NetApp FAS3100 Series offers all of the redundancy found in server hardware, plus dual active controllers and hot-swappable components (controllers, fans and power supplies) that can be replaced without requiring system downtime.
The FAS3100 also includes data replication software as part of its solution. Using it, organizations can configure the FAS3100 to replicate data locally with its Snapshot feature or remotely with its SnapMirror feature. Mid-sized organizations can even take advantage of the FAS3100’s MetroCluster feature, which continuously replicates data to a remote FAS3100 for continuous offsite application availability.
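The idea behind snapshot-based asynchronous replication can be sketched in a few lines: after an initial baseline transfer, only the blocks that changed between two snapshots are shipped to the remote copy. The sketch below is purely conceptual and is not NetApp's actual SnapMirror implementation; the block maps and data values are hypothetical.

```python
# Conceptual sketch of snapshot-based asynchronous replication:
# ship only the delta between two point-in-time snapshots.
# Illustrative model only; not how SnapMirror is actually implemented.

def changed_blocks(prev_snap, curr_snap):
    """Return the blocks that differ between two snapshot block maps."""
    return {
        blk: data
        for blk, data in curr_snap.items()
        if prev_snap.get(blk) != data
    }

def replicate(remote, prev_snap, curr_snap):
    """Apply only the delta to the remote replica; return blocks sent."""
    delta = changed_blocks(prev_snap, curr_snap)
    remote.update(delta)
    return len(delta)

# The baseline transfer has already seeded the remote copy:
snap_t0 = {0: "A", 1: "B", 2: "C"}
remote = dict(snap_t0)

# By the next replication interval, one block has changed:
snap_t1 = {0: "A", 1: "B2", 2: "C"}
sent = replicate(remote, snap_t0, snap_t1)
print(sent)    # -> 1 (only the changed block crossed the wire)
print(remote)  # -> {0: 'A', 1: 'B2', 2: 'C'}
```

Because each pass transfers only what changed since the last snapshot, replication traffic scales with the rate of change rather than with the total size of the volume.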
Data Protection

A growing concern with the virtualization of applications is how to best protect the data of these applications. Virtualized applications must now share their underlying physical server’s hardware resources with other virtualized applications.
The sharing of hardware resources is often taken into account before applications are consolidated, but the network, memory and processing resources that each application’s backup software requires can be overlooked. This can result in server bottlenecks during off-peak hours as the backup software on each virtual machine (VM) contends for these limited server hardware resources.
Off-host backups in the form of snapshots taken on the storage system are now seen as preferable to running traditional backups on each application’s VM. By using features such as the FAS3100’s Snapshot, organizations can create near-instantaneous backups of individual VMs and their associated data without incurring a performance penalty on the host physical server.
Once created, these snapshots can be used in a couple of ways. They can serve as a primary source for recovering application data, since administrators can access and recover data directly from them. Alternatively, they can act as a source from which backup software running on another server reads the application data and copies it off to disk or tape.
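What makes these snapshots near-instantaneous is copy-on-write: creating a snapshot copies only the block pointers, not the data, and new writes leave the snapshot's view of the old blocks intact. The following is a minimal conceptual model of that mechanism, not NetApp's actual Snapshot implementation; the class and data names are invented for illustration.

```python
# Conceptual sketch of copy-on-write snapshots, the mechanism behind
# near-instant, low-overhead storage-side backups. Illustrative only.

class CowVolume:
    """A volume whose snapshots share unmodified blocks with live data."""

    def __init__(self, blocks):
        # Live block map: block number -> data.
        self.blocks = dict(enumerate(blocks))
        self.snapshots = {}

    def snapshot(self, name):
        # Creating a snapshot copies only the block *map* (pointers),
        # not the data itself, which is why it is near-instantaneous.
        self.snapshots[name] = dict(self.blocks)

    def write(self, block_no, data):
        # New writes update the live map; snapshot maps still point
        # at the old data, so the snapshot view is preserved.
        self.blocks[block_no] = data

    def read_snapshot(self, name, block_no):
        return self.snapshots[name][block_no]


vol = CowVolume(["mail-db-v1", "logs-v1"])
vol.snapshot("nightly")
vol.write(0, "mail-db-v2")                # live data changes...
print(vol.blocks[0])                      # -> mail-db-v2
print(vol.read_snapshot("nightly", 0))    # -> mail-db-v1 (preserved)
```

An administrator recovering a file would read it back from the snapshot map, while backup software mounting the snapshot sees a stable, point-in-time image even as the live volume keeps changing.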
Scalability and Flexibility
It is no secret that data growth continues even in today’s tough economic environment. Server virtualization makes it even easier for organizations to create new VMs that require more data storage. This can aggravate problems on the storage side, since organizations may buy a system that cannot scale, cannot support multiple tiers of storage, or both.
The dynamics of this environment make it difficult to plan and account for every detail going forward, which makes it a necessity to identify a storage system that:
- Provides flexible options for growth. Organizations should have options to add more capacity, new software features or even upgrade the entire storage system as the availability, capacity and performance demands of the virtualized server environment change.
NetApp FAS storage systems are notable in that they all use the same underlying operating system (Data ONTAP), so its advanced software features, such as FlexVol (thin provisioning) and deduplication, are available on any NetApp model from the entry-level FAS2000 Series to the high-end FAS6000 Series. NetApp also gives organizations the flexibility to scale performance or storage capacity, so organizations can theoretically start out with a FAS2000 and grow it to a FAS6000 without ever needing to do a data migration.
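The two software features just mentioned are easier to see in miniature. With thin provisioning, a volume's logical size costs nothing physically until data is written; with deduplication, identical blocks are stored once and merely referenced by each volume. The sketch below is a conceptual model under those assumptions, not NetApp's FlexVol or dedupe implementation; the class, pool sizes and data values are invented for illustration.

```python
# Conceptual sketch of thin provisioning plus block deduplication.
# Illustrative model only; not NetApp's actual implementation.
import hashlib

class ThinPool:
    """Physical blocks are allocated on write, not at volume creation,
    and identical blocks are stored once behind content-hash pointers."""

    def __init__(self, physical_blocks):
        self.capacity = physical_blocks
        self.store = {}      # content hash -> data (one physical copy)
        self.volumes = {}    # volume name -> logical block -> content hash

    def create_volume(self, name, logical_size):
        # Thin provisioning: a volume of any logical size consumes
        # zero physical blocks until data is actually written.
        self.volumes[name] = {}

    def write(self, volume, block_no, data):
        digest = hashlib.sha256(data.encode()).hexdigest()
        if digest not in self.store:
            if len(self.store) >= self.capacity:
                raise IOError("physical pool exhausted")
            self.store[digest] = data          # first physical copy
        self.volumes[volume][block_no] = digest  # dedupe: just a pointer

    def physical_used(self):
        return len(self.store)


pool = ThinPool(physical_blocks=100)
pool.create_volume("vm1", logical_size=1000)  # consumes 0 physical blocks
pool.create_volume("vm2", logical_size=1000)
pool.write("vm1", 0, "windows-base-image")
pool.write("vm2", 0, "windows-base-image")    # duplicate block: deduped
print(pool.physical_used())  # -> 1
```

This is also why the two features pay off so well in virtualized environments: dozens of VMs cloned from the same Windows image share most of their blocks, so the physical footprint grows far more slowly than the logical one.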
- Minimizes impact to the environment during upgrades. Virtualized applications decrease the tolerance for downtime for any reason, including storage system upgrades, and NetApp provides this type of availability on higher-end FAS models. For instance, organizations that start out with a FAS2020 and need to upgrade to a FAS2040 can do an in-place upgrade of the existing system without incurring any significant application downtime.
- Minimizes learning curve. IT staffing levels are remaining flat or even declining, so the interface and commands that organizations use to manage the storage system should ideally remain the same as it scales to larger, more robust systems.
In this respect, NetApp is unparalleled among storage system providers: it uses a common management interface across all of its platforms, so once users learn its commands, they can use the same commands on any NetApp platform without needing to relearn them.
- Block and file storage interfaces. Organizations are consolidating multiple types of applications – database servers, Exchange servers as well as file and print servers. As they do so, some applications belong on server virtualization platforms such as VMware vSphere or Microsoft Hyper-V, while others more appropriately belong on storage systems that support NAS. In this regard, NetApp offers both block (FC, iSCSI and FCoE SAN) and file (CIFS and NFS NAS) interfaces, so organizations can consolidate onto a single unified storage platform for simplified administration.
The adoption of server virtualization is well under way in mid-sized organizations and forecast to gain momentum in 2010 for a host of reasons. But as its adoption accelerates, mid-sized organizations must consider the entire scope of their virtualized environment which must include storage.
Selecting an appropriate networked storage system that will host the data of virtualized applications is now critical to the overall success of implementing virtualization. It is for these reasons that storage system characteristics such as high availability, data protection and scalability and flexibility play such an important role in determining how successfully applications will perform and are managed after they are virtualized.
So when one finds features like deduplication, high availability, replication, snapshots and thin provisioning on a storage platform, one can have a high degree of confidence that it offers the three characteristics that mid-sized organizations now need for their virtualized Windows environments. This is why platforms such as the NetApp Unified Storage Architecture are so well suited for virtualized environments, delivering on the requirements that mid-sized organizations are sure to encounter both initially and in the future as they start down the virtualization path.