Performance Testing as a Technology Validation Method Is as Complex as It Sounds

One of the main objectives of every DCIG Buyer’s Guide is to help enterprises and/or their buyers create a short list of technology products that align with their specific business or technical needs. But once they have a short list of products that meet those needs, they still need some criteria to help them make the right choice from among those products. While there is no silver bullet that guarantees they make the “best” selection, performance testing is an option to which organizations often turn to validate a technology choice, though using this method is as complex as it sounds.

Every Buyer’s Guide that DCIG produces has a number of objectives that DCIG’s analysts endeavor to achieve to the best of their abilities. These include:

  • Identify products in a particular market segment
  • Gather the most up-to-date information about each product
  • Present product information accurately
  • Score and rank product features in a way that generally represents end user priorities

However, perhaps chief among these objectives, though not listed above, is to help organizations arrive at a short list of 3-5 products. From that list, they may then choose one that is appropriate to acquire and implement in their environment. The issue then becomes, “How does an organization go from this short list of 3-5 products to selecting the most appropriate one for its environment?”

Organizations employ a number of tactics to do so, including:

  • Conducting further research into the features of each product
  • Functional testing
  • Price comparisons
  • Prioritizing supported product features according to their needs
  • Performance testing

Of the five tactics listed above (and there are others), performance testing is often the most logical means to arrive at the best solution. This is especially true in enterprise environments. Selecting a solution that is eventually determined to be undersized could prompt an organization to buy more products for its environment. This could in turn make the solution more complex and costly to implement and manage, both short and long term. An undersized solution may even fail to deliver the anticipated value, forcing the organization to bring in an entirely new solution to replace it.

Alternatively, an organization may choose to over-engineer the proposed solution to account for potential spikes in application performance that never materialize. This approach can again prompt organizations to invest too much in a solution as a means to insure themselves against acquiring an undersized solution.

Performance testing theoretically eliminates both of these possibilities, as organizations may first test the solution to make sure it will work as intended in their environment. On the surface this approach sounds great, but performance testing as a means of technology validation is as complex as it sounds, for at least three reasons.

  1. It is often difficult if not impossible to simulate real-world workloads in testing and development environments. Real-world workloads are called that for a reason. It is difficult to simulate in lab environments intended for testing and development the peaks and valleys of performance that these workloads generate. While technologies such as virtualization, snapshots, and instant recoveries certainly make it easier and more practical for organizations to run production applications in testing and development environments, they rarely can perfectly reproduce the exact conditions to which production workloads are subjected.
  2. Setting up performance tests and interpreting performance results is a specialized skill. Assuming an organization can create an environment that closely mimics its production environment, it still needs to make sure it sets up the test correctly so that it gets meaningful results. Setting up the performance test, capturing the output, and then interpreting the results is a very specialized skill that organizations do not necessarily possess. The sketch following this list gives a sense of how many decisions even a trivial test involves.
  3. It is time consuming. Organizations today are often extremely pressed for time, and sometimes it is easier and potentially even less expensive to buy an overbuilt, over-engineered solution than to invest time in testing a variety of solutions to identify the perfect fit.
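
To give a sense of what reason 2 entails, below is a minimal sketch in Python of a synthetic random-write micro-benchmark. It is an illustration only, not DCIG’s methodology or a substitute for purpose-built test tools: the file name, block size, working-set size, and run time are assumptions chosen for the example (and it assumes a POSIX host with a recent Python 3), while a real validation would also have to model the production read/write mix, queue depths, concurrency, and the peak periods described above.

```python
#!/usr/bin/env python3
"""Minimal synthetic I/O sketch: random 4 KiB writes against a scratch file.

Illustrative only -- every parameter below (file name, sizes, run time) is an
assumption made for the sake of the example, not a recommendation.
"""
import os
import random
import statistics
import time

SCRATCH_FILE = "perftest.dat"      # hypothetical scratch file, not a real device
FILE_SIZE = 256 * 1024 * 1024      # 256 MiB working set
BLOCK_SIZE = 4096                  # 4 KiB blocks, a common small-block size
RUN_SECONDS = 10                   # short run; real tests run far longer


def prepare_file(path: str, size: int) -> None:
    """Pre-allocate the scratch file so writes land on existing blocks."""
    with open(path, "wb") as f:
        f.truncate(size)


def run_random_writes(path: str) -> list:
    """Issue random-offset writes for RUN_SECONDS and return per-op latencies."""
    latencies = []
    block = os.urandom(BLOCK_SIZE)
    max_offset = (FILE_SIZE // BLOCK_SIZE) - 1
    # Direct I/O is skipped here for portability; page-cache effects are one
    # more reason lab numbers diverge from production numbers.
    fd = os.open(path, os.O_WRONLY)
    try:
        deadline = time.monotonic() + RUN_SECONDS
        while time.monotonic() < deadline:
            offset = random.randint(0, max_offset) * BLOCK_SIZE
            start = time.perf_counter()
            os.pwrite(fd, block, offset)
            os.fsync(fd)               # force the write to media
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return latencies


def main() -> None:
    prepare_file(SCRATCH_FILE, FILE_SIZE)
    latencies = sorted(run_random_writes(SCRATCH_FILE))
    p99 = latencies[int(len(latencies) * 0.99) - 1]
    print(f"ops: {len(latencies)}")
    print(f"avg latency: {statistics.mean(latencies) * 1000:.2f} ms")
    print(f"p99 latency: {p99 * 1000:.2f} ms")
    os.remove(SCRATCH_FILE)


if __name__ == "__main__":
    main()
```

Even this toy example forces decisions about block size, sync behavior, run length, and which percentiles to report, each of which materially changes the resulting numbers. That is precisely why setting up such tests and interpreting their output is a specialized skill.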

Performance testing sounds great in theory and can be an excellent means to identify the best product for your environment. Yet even when it produces the desired results, performance testing is as complex as it sounds, due to the time and effort required to execute it. In a forthcoming blog entry I will examine an appliance targeted at enterprises that helps them overcome these historical obstacles associated with performance testing of storage solutions being considered for acquisition.

About Jerome M. Wendt

President & Founder of DCIG, LLC Jerome Wendt is the President and Founder of DCIG, LLC., an independent storage analyst and consulting firm. Mr. Wendt founded the company in November 2007.
