NetApp Breaks Its Storage Infrastructure Offerings in Two

NetApp’s entrance into the storage industry 20 years ago could hardly have been more humble: it started by offering a fast, reliable and simple-to-manage purpose-built appliance targeted at engineering workgroups. Since then it has continually innovated and evolved with the objective of becoming the predominant storage player, with offerings for organizations of all sizes – small or large. This week at its annual Analyst Days event, NetApp laid out for all to see that, in order to achieve that objective, it has effectively broken its storage infrastructure offerings in two.

To understand why NetApp is taking this path (and why it makes sense and is prudent for it to do so), one must first grasp what is going on inside most organizations today. At the infrastructure level, these companies are breaking into two logical halves – physical and virtual – with the majority of their applications and application servers destined for the virtual realm (or the shared storage infrastructure, as NetApp refers to it below).

Figure: Shared Storage Infrastructure (Source: NetApp)

The physical servers that remain typically host applications that, for whatever reason, do not lend themselves well to virtualization. Maybe they are Big Data applications that require lots of storage capacity and would see only marginal or no benefit from being virtualized. Maybe they are high-performance applications that would consume too many physical resources in a virtualized environment. Maybe it is some combination of the two.

Regardless, this division of the corporate infrastructure into two halves is necessitating the introduction of two physical infrastructures to support them: one that is highly tuned for virtualized deployments and, by necessity, feature rich; and another that is adept at delivering high levels of performance, capacity or both at a more economical price point.

This, in essence, is what NetApp delivers with its FAS and E-Series lines of storage. It has tuned and built into its FAS line of storage the features, functionality and, with this week’s Data ONTAP 8.1.1 release, the additional performance and scalability that organizations need to meet the demands of their virtualized environments. Conversely, NetApp has built into its E-Series line of storage the flexibility for organizations to mix and match storage capacity and performance as specific application demands dictate.

So in this respect NetApp has done more than simply take the time to understand the needs of its current and prospective customer base. It has assembled a set of storage offerings that are well aligned to meet them.

NetApp summed up the features and functionality found in its arrays (and more specifically in its FAS line of arrays) in nine points that it shared both in its opening keynote at the conference and in a press release from earlier this week announcing its new release of Data ONTAP. These were:

  1. Unified Architecture
  2. Seamless Scalability
  3. Non-Disruptive Operations
  4. Secure Multi-Tenancy
  5. Storage Efficiency
  6. Virtual Storage Tiering
  7. Embedded Data Security
  8. Integrated Data Protection
  9. Service Automation and Analytics

According to NetApp, this list of feature/functional requirements was based upon feedback from its current and prospective customer base as to what they wanted to see in a next-generation storage platform. So the questions become:

  • How accurate is this list?
  • How well is NetApp doing in delivering on them?

Overall I agree with most of this list, though notably absent from it is any mention of performance (which, to be quite honest, was a little surprising). While NetApp’s E-Series line of storage is arguably tuned to perform well in physical environments, anecdotal evidence that I have recently run across in a few blogs and in talking to a couple of MSPs at trade shows suggests that some are calling the performance of its FAS series in virtualized environments into question.

The details I had around the FAS’s performance challenges in virtualized environments were sketchy at best. As near as I could tell, they had to do with organizations virtualizing more of their performance-intensive, business-critical applications. So part of what I wanted to find out from NetApp while at its Analyst Days event was whether these concerns had been brought to its attention and, if they had, what steps it was taking to address them.

To its credit, NetApp did admit that it had heard about these performance challenges from some of its customers with larger virtualized environments. Addressing these concerns was part of its motivation for announcing support for Flash Pools in its latest Data ONTAP 8.1.1 release, which keeps the most performance-sensitive data on SSDs and moves less frequently accessed data off to HDDs. How well this works over time remains to be seen, but off the cuff I would suggest that NetApp has responded properly to these performance concerns.
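To make the general idea concrete, below is a minimal sketch, in Python, of the kind of hot/cold placement policy that the Flash Pool description above implies. It is purely illustrative: the access-count threshold, the window-based rebalancing and the promote/demote helpers are all assumptions on my part, not NetApp’s actual implementation.

```python
from collections import Counter

HOT_THRESHOLD = 100  # accesses per window that mark a block "hot" (assumed value)

class TwoTierPolicy:
    """Illustrative SSD/HDD placement policy: promote frequently accessed
    blocks to flash, demote blocks that have gone cold back to disk."""

    def __init__(self):
        self.access_counts = Counter()  # block_id -> accesses in current window
        self.on_ssd = set()             # block_ids currently on the SSD tier

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        # At the end of each measurement window, decide which blocks
        # belong on flash and which should fall back to spinning disk.
        hot = {b for b, n in self.access_counts.items() if n >= HOT_THRESHOLD}
        for block in hot - self.on_ssd:
            self.promote_to_ssd(block)   # hypothetical data-movement helper
        for block in self.on_ssd - hot:
            self.demote_to_hdd(block)    # hypothetical data-movement helper
        self.on_ssd = hot
        self.access_counts.clear()       # start a fresh window

    def promote_to_ssd(self, block_id):
        print(f"promote block {block_id} to SSD")

    def demote_to_hdd(self, block_id):
        print(f"demote block {block_id} to HDD")
```

The point of the sketch is the feedback loop: placement decisions are re-evaluated continuously from observed access patterns rather than set once by an administrator, which is what makes a feature like this attractive in busy virtualized environments.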

Then, in terms of the nine items that are actually on the list, my sense is that NetApp is doing a pretty good job of delivering on most of those attributes in virtualized environments, with the possible exception of Service Automation and Analytics. While its OnCommand System Manager software probably does as good a job as most products now available on the market, NetApp needs to continue to invest in and mature this product to make it a market-leading product – not just a “me-too” offering.

This assessment is driven by the consistent feedback that I get from managed service providers and large enterprise organizations that one of their biggest pain points is doing root cause analysis in virtualized environments. While many organizations have performance monitoring tools, very few have staff dedicated to monitoring or interpreting their results. As such, when performance problems do emerge (and they will), it becomes an “all hands on deck” event that can take hours, days or even weeks to pinpoint the source of the problem.
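As a concrete illustration of the kind of unattended monitoring that could narrow this gap, here is a minimal sketch of a rolling-baseline latency check. Everything in it, the window size, the three-sigma rule and the alert action, is an assumption made for illustration; it is not how OnCommand or any other specific product works.

```python
import statistics
from collections import deque

WINDOW = 288        # e.g., one day of 5-minute latency samples (assumed)
SIGMA_LIMIT = 3.0   # flag samples more than 3 standard deviations above normal
MIN_SAMPLES = 30    # wait for a stable baseline before alerting

history = deque(maxlen=WINDOW)  # rolling baseline of recent samples

def check_latency(sample_ms):
    """Compare a new latency sample against the rolling baseline and raise
    an alert automatically, with no staff watching a dashboard."""
    if len(history) >= MIN_SAMPLES:
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and sample_ms > mean + SIGMA_LIMIT * stdev:
            print(f"ALERT: latency {sample_ms:.1f} ms is {SIGMA_LIMIT:g} sigma "
                  f"above the {mean:.1f} ms baseline; investigate now")
    history.append(sample_ms)
```

Even something this simple shifts the detection burden from people to software, which is precisely the gap described above: the tools collect the data, but no one is watching for the deviation.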

In fact, in speaking with NetApp’s OnCommand team about this, the team shared a recent episode in which it had to help a customer troubleshoot just such a performance challenge in a virtualized environment. To NetApp’s credit, using its OnCommand software and its Workflow Automation feature, it was able to recreate the customer’s problem in its own environment and identify a firmware release as the source of the performance problem.

The problem? It took NetApp three days to identify the issue, and that was only after the customer brought it to NetApp’s attention. So it is hard to say how long the customer had been living with the issue internally before it raised a red flag and asked NetApp for help.

So when NetApp says that organizations will find “Service Automation and Analytics” among the features on its arrays, I would say NetApp can deliver on that feature up to a point. But NetApp still needs to do a MUCH better job of helping customers, especially those with virtualized environments, identify looming performance issues before they turn into an “all hands on deck” support call such as the one that occurred here.

To NetApp’s credit, it has recognized that customer infrastructures are becoming two logical halves – a physical half and a virtual half – and, in response, it has delivered two separate product lines to meet those demands. Further, it has for the most part added in much of the functionality that organizations currently need to support and manage them.

The areas where NetApp (and frankly every storage vendor) must continue to innovate going forward are two-fold. First, it must continue to enhance its products to deliver the performance that virtualized environments need, as it did with the Flash Pools announced this week. Second, it must deliver a much better storage management product that proactively identifies performance hot spots in virtualized environments and corrects them before they become “all hands on deck” events.
