To say or imply that NetApp was in any near-term danger of falling from its position as a storage leader would be a gross mischaracterization of its current condition. However, it would be accurate to say that the industry lacked clarity as to how NetApp would respond to the encroachment of flash memory storage arrays on the high-performance end of storage. After attending the NetApp Industry Analyst event this week, it is now clear that to address this challenge NetApp plans to go back to its roots to lay the foundation for its future.
During the presentations at the NetApp Analyst Event and during the many 1:1 meetings that I had, NetApp wanted to make sure everyone knew that it had a flash strategy and that it had already shipped somewhere in the neighborhood of 40+ PBs of flash. Configured as Flash Cache, Flash Accel (server-based flash) and its latest EF540 series array, NetApp has flash firmly in hand. However, its current approaches to delivering flash have two issues:
- The speed of flash quickly overwhelms the ability of its dual-controller storage arrays to effectively deliver performance
- Scaling flash by adding more individual NetApp units would quickly become cost-prohibitive and impractical.
While these shortcomings are not near-term threats (0-24 months) to NetApp’s business, there are deficiencies in its current underlying architecture that will require NetApp, over the longer term (24+ months), to adopt a new storage strategy to handle flash. This necessitates that in the years to come it do one of the following:
(a) Buy an outside flash memory array provider
(b) Re-architect its current NetApp ONTAP platform for flash
(c) Organically build a new platform tailor made for flash
None of these approaches is without risk. Each one, to one degree or another, fails to fully address reliability, availability and support (RAS) – a theme that NetApp heavily stressed during this conference and an aspect that NetApp encourages all of its enterprise customers to consider before buying any flash array.
Going into this conference, NetApp had already tipped its hand to some degree. It had previously revealed its MARS Project in late 2012/early 2013. However, the details around it were sketchy at best and certainly insufficient to give anyone a high degree of confidence that what NetApp was doing internally was really any better than its simply going out and buying another company.
Now, having seen some preliminary information about its forthcoming FlashRay flash storage, it is clearer that with FlashRay NetApp is well positioned for the transformation in storage that is about to occur at the high end of the market. I say this for the following three reasons:
1. NetApp is bringing back the original guys who developed Data ONTAP to develop a “new” OS for flash. NetApp recognizes that flash is going to be a bet-the-farm type of decision when it comes to storage for most enterprises. Data ONTAP has had a good run, but NetApp also recognizes that Data ONTAP was optimized to manage disk, not flash. By bringing back the original guys who wrote Data ONTAP, NetApp can preserve the best of what Data ONTAP offers for disk today and make the changes necessary at the core of its code to manage flash. While I am not at liberty to say exactly who is working on this project, Dave Hitz was noticeably absent at this year’s NetApp Analyst Event. It makes me wonder why.
2. NetApp is not throwing out the baby with the bathwater. Data ONTAP is used in literally tens of thousands of customer deployments and has rich, mature CIFS/NFS, snapshot, replication, volume management and now clustering functionality. NetApp would be foolish to throw all of that innovation away and start from scratch on all of those features just to get access to a start-up that has a product with great flash management functionality. It has made the decision – wisely, if I might add – to solve this flash problem itself and then integrate its existing feature functionality along with its new flash management capabilities in a new OS for its FlashRay array. Is there still some risk with this approach? Absolutely! But the risk to itself and its customers is certainly much less than if NetApp had to rebuild all of this other functionality just to gain access to someone else’s flash management capabilities.
3. NetApp still has time. Most end-user companies are still kicking flash’s tires. Sure, some of the start-up flash providers have had success with their flash memory arrays, and I expect that will continue into the foreseeable future. But most of those arrays are being deployed in support of dedicated, application-specific workloads. No company (or at best a very small minority) is deploying flash on a wide scale with a “let’s sweep the floor” type of mentality. NetApp recognizes that day is coming – probably sooner rather than later. But that day is still far enough off that it has time to act, assuming it starts to act now.
I was actually a bit concerned about NetApp going into this analyst conference. NetApp is a great company but a lot of great technology companies have failed over the years because they could not evolve to meet a sudden shift in the market.
Flash memory storage arrays represent just such a market shift. As such, it was incumbent upon NetApp to innovate in a way that it really has not been challenged to do since its inception. While the jury is still out as to whether its FlashRay architecture ultimately represents the “right” approach for flash management, the case NetApp made at its analyst days certainly suggests that it is on the right track.
On the surface, NetApp appears to understand the idiosyncrasies of flash and that flash must be managed differently than disk. NetApp also understands that it too must evolve if it is to survive. But maybe most importantly, NetApp is not forgetting that as it evolves and changes, it must do so in a way that takes into consideration customer concerns around reliability, availability and support if it is to eventually earn their flash business.