Organizations typically leverage DCIG Buyer’s Guides in the following three ways and in this order:
1. They look to them as a resource to get a listing of enterprise products.
2. They look to the rankings to help identify the best products for a specific use case.
3. They evaluate the product features to perform product comparisons.
It is when organizations get down to this final step of product comparisons and selections that confusion sometimes arises about how DCIG evaluates products, as some assume that DCIG tests each product feature. This is not the case and has never been the case.
Granted, it is possible that the DCIG analyst evaluating a product has worked on or tested one of its features at some point in the past. However, DCIG neither guarantees nor implies that any feature of any product it evaluates has ever been tested or that the feature even works.
Rather, the analyst relies upon publicly available information and feedback from the provider to guide his or her conclusions about product support. In cases where the analyst cannot confirm feature support, they merely denote that they could not confirm support for the feature.
Even in cases where an analyst can find evidence that a product supports a specific feature, organizations must exercise some discretion in how much weight they ascribe to that support.
This topic came up in a conversation I had earlier this week. A storage vendor with which I spoke expressed some concerns about one of its competitors. He admitted that the arrays his company put out did not have the product bells and whistles that his competitor had.
That said, of the features that his company's array did support, he expressed a great deal of confidence that they would work as advertised. Conversely, he felt that his competitor only implemented product features at a minimal level to permit it to truthfully mark the product feature check box. The questions then became how well those features were tested and whether they would work when deployed in production environments. His experience was that vendors that implement features this way generally do not see those features perform well in production.
I share this story to provide everyone some guidance in how to best leverage the DCIG Buyer’s Guides and to understand the information contained in them.
On one hand, the DCIG Buyer's Guides do what they do better than any other analyst publication in the industry, bar none. They aggregate publicly available information about enterprise products, synthesize it, and present it in a manner that buyers can quickly digest to make a relatively informed product buying decision.
That said, these Buyer's Guides do not absolve you of the responsibility for knowing your own environment or for verifying that a product's features work sufficiently well to meet its needs. While DCIG generally finds that vendors are truthful about the capabilities of their products, we also find there are differences in how vendors define and implement support for their product features.
These subtleties in the robustness of product feature implementation are unfortunately very difficult to detect and communicate, especially at the scale at which DCIG produces its Buyer’s Guides. Further, distinguishing between these nuances in feature implementation goes beyond what DCIG seeks to accomplish in its Buyer’s Guides.
Rather, DCIG encourages you to do a deeper dive into any product features that are of concern to your organization, or to engage DCIG for its analyst services to do a deep dive into these features, so you can verify the product provides the functionality that your company needs, will use, and can rely upon.