
HP Storage Solutions for Virtualized Exchange 2010 Deployments

Virtualizing business critical applications such as Microsoft Exchange 2010 is the next frontier in server virtualization. But as organizations move down this path, sizing the underlying hardware that will host these applications becomes much more complex. This explains why we are seeing the emergence of reference configurations such as the one HP has introduced for Microsoft Exchange 2010.

Virtualizing file, print and web servers as well as test and dev application servers using server virtualization platforms has become pretty straightforward. A determination is made as to what applications are going to be virtualized; the server, network and storage capacities are sized and configured; the hardware and software are purchased; and, then, applications are virtualized and consolidated into the new environment.

This is not meant to minimize or dismiss the work associated with virtualizing these applications. But virtualizing business critical applications requires much more planning and forethought than these other applications.

Among business critical applications, Exchange 2010 calls for special attention because of the highly visible and critical role it plays in organizations of all sizes. Enterprises therefore need to pay close attention to the factors that influence a successful Exchange 2010 deployment when it is virtualized. A recent HP technical white paper highlights some of the variables that enterprises need to consider as they configure their servers and storage in anticipation of virtualizing Microsoft Exchange 2010.

  • Microsoft changed its support policies regarding combining hypervisor-based server virtualization HA with native Exchange 2010 data protection features. At Tech-Ed 2011, Microsoft announced that it had updated its Exchange 2010 virtualization support policy so that the high availability (HA) options in Windows Server 2008 Hyper-V SP1 could be used with Microsoft Exchange 2010 Database Availability Groups (DAGs).

One ramification of this change in policy is that an individual Exchange virtual machine (VM) may be decoupled from its underlying hardware. A failover of an Exchange VM can now occur from one physical Hyper-V host to another, and the Exchange VM, even though it is running on a new physical host, can still access the DAG on the storage where the VM resided prior to failing over.

This change in Microsoft’s support policy creates a number of new possibilities in terms of what server and storage configurations are available to support a virtualized Exchange deployment. But it also changes how enterprises need to configure the hardware to support Exchange.

  • The underlying physical server must be appropriately sized to handle both its regular Exchange workload and the workload of an Exchange VM that fails over to it. Microsoft provides guidelines for how much processing power an Exchange 2010 server needs per active and passive mailbox in terms of “megacycles.” The trick is that calculating how many megacycles each server in a specific Exchange deployment needs, and then translating those megacycles into specifications for a server with the appropriate amounts of memory and processing power, is a rather complicated engineering exercise.

Referencing page 8 of the aforementioned HP technical white paper, here is the formula HP provides to calculate what size processor each server in the virtualized Exchange cluster should have in order to support an Exchange mail server with 6,000 mailboxes (3,000 active, 3,000 passive):

Adjusted megacycles per core =
((new platform per core value) x (clock speed of the new processor))/(baseline per core value)
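As a sketch of how that formula might be turned into something actionable, the calculation below implements it directly in Python. The input values in the example are illustrative assumptions, not figures from the HP paper; in practice the per-core values come from published SPECint_rate2006 benchmark results for the baseline and new platforms.

```python
def adjusted_megacycles_per_core(new_per_core_value, new_clock_mhz,
                                 baseline_per_core_value):
    """Adjusted megacycles per core, per the formula quoted above.

    new_per_core_value      -- benchmark result per core for the new platform
    new_clock_mhz           -- clock speed of the new processor, in MHz
    baseline_per_core_value -- per-core value of the baseline platform
    """
    return (new_per_core_value * new_clock_mhz) / baseline_per_core_value


# Illustrative (assumed) values: a hypothetical new platform scoring
# 22.5 per core at 2,930 MHz against a baseline per-core value of 18.75.
print(adjusted_megacycles_per_core(22.5, 2930, 18.75))  # 3516.0
```

Multiply the result by the number of cores per server, compare it against the total megacycles the active and passive mailboxes demand, and you have the basis for deciding whether a given server can absorb a failed-over Exchange VM.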

Now my guess is that there are enterprise shops out there that can make sense of that equation and confidently turn it into an actionable piece of information. More power to them. But for the rest of us, gathering the information needed to complete that equation and then confidently calculating the result is probably a pay grade or two above where most of us are.

  • Storage configurations are now based on Exchange 2010 workload profiles. As intimidating as the equation to size server CPU and memory is, the guidelines for sizing storage get even hazier due to the number of storage arrays on the market and the features available on each. The HP technical white paper notes on page 10, “Sizing storage is a rather complex task having to account for many variables such as whitespace, deleted items dumpster and mailbox size to name a few.”
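To make the white paper's point concrete, here is a deliberately simplified sketch of the kind of per-mailbox arithmetic storage sizing involves. This is not HP's or Microsoft's sizing method; the whitespace and dumpster estimates below are stand-in assumptions meant only to show how the variables the paper names compound.

```python
def mailbox_size_on_disk_mb(quota_mb, daily_mail_mb, retention_days):
    """Rough per-mailbox disk footprint (illustrative, not HP's method).

    quota_mb       -- mailbox storage quota
    daily_mail_mb  -- average mail volume sent/received per day
    retention_days -- deleted item retention window, in days
    """
    whitespace = daily_mail_mb                 # assume ~one day of mail as database whitespace
    dumpster = daily_mail_mb * retention_days  # deleted items retained on disk
    return quota_mb + whitespace + dumpster


# Illustrative (assumed) profile: a 2 GB quota, 25 MB of mail a day,
# and a 14-day deleted item retention window.
print(mailbox_size_on_disk_mb(2048, 25, 14))  # 2423
```

Even this toy version shows a mailbox consuming noticeably more disk than its quota; a real sizing exercise layers on database overhead, log volumes, and the characteristics of the specific storage array, which is exactly why the white paper calls the task complex.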

So my point here is not to say that enterprises should not virtualize Exchange 2010. Rather it is to highlight why vendors like HP and Microsoft are partnering to deliver Exchange 2010 solutions where the customer can purchase individual pieces from a single source (in this case HP) and put them together.

Configuring Exchange 2010 is already sufficiently complex, as HP’s own technical white paper highlights throughout. The upside is that HP and Microsoft both recognize how complex virtualizing Exchange 2010 in enterprise environments is, and since each understands the features of its own products, together they can build out a solution that is appropriately sized and configured to host Exchange 2010 in a virtualized enterprise deployment.

Contrast this with an enterprise attempting to successfully configure Exchange 2010 on its own with servers and storage from different providers. It is not that it could not be done. But the risks, considering all of the variables involved and the importance of Exchange in enterprises, are decidedly higher with an application that enterprises cannot afford to have offline.

Virtualizing business critical applications has gone mainstream, as evidenced by recent changes to Microsoft’s own support policies that facilitate the virtualization of Microsoft Exchange 2010 on Windows Server 2008 Hyper-V. But this also highlights that the days of “simple” virtualization deployments are over and that we are entering a new phase in server virtualization. As more business critical applications are virtualized, much more sophisticated, holistic solutions are needed to accelerate their deployments and ensure their short- and long-term success.

HP providing reference configurations for its servers and storage with Microsoft Windows 2008 Hyper-V and Exchange 2010 does not eliminate the complexity associated with configuring Exchange 2010 deployments. But it does remove the burden of these tasks from enterprises and puts it on HP. This should result in more predictable outcomes and stable solutions which, in the end, is what enterprises ultimately want when they virtualize Microsoft Exchange or any mission critical application in their environment.
