Nanoseconds, Stubborn SAS, and Other Takeaways from the Flash Memory Summit 2019

Every year at the Flash Memory Summit held in Santa Clara, CA, attendees get a firsthand look at the technologies that will impact the next generation of storage. This year many of the innovations centered on forthcoming interconnects that will better deliver on the performance that flash offers today. Here are DCIG’s main takeaways from this year’s event.

Takeaway #1 – Nanosecond Response Times Demonstrated

PCI Express (PCIe) fabrics can deliver nanosecond response times using resources (CPU, memory, storage) situated in different physical enclosures. PCIe provider Dolphin Interconnect Solutions demonstrated to DCIG how an application could access resources (CPU, flash storage, and memory) on different devices across a PCIe fabric in nanoseconds. Separately, GigaIO announced 500-nanosecond end-to-end latency using its PCIe FabreX switches. While everyone else at the show was boasting about microsecond response times, Dolphin and GigaIO introduced nanoseconds into the conversation. Both companies ship their solutions now.

Takeaway #2 – Impact of NVMe/TCP Standard Confirmed

Ever since we heard the industry planned to port NVMe-oF to TCP, DCIG expected this would accelerate the overall adoption of NVMe-oF. Toshiba confirmed our suspicions. In discussing its KumoScale product with DCIG, it shared that it has seen a 10x jump in sales since the industry ratified the NVMe/TCP standard. This stems from the reasons DCIG stated in a previous blog entry: TCP is well understood, Ethernet is widely deployed, its cost is low, and it uses the infrastructure organizations already have in place.

Takeaway #3 – Fibre Channel Market Healthy, Driven by Enterprise All-flash Arrays

According to FCIA leaders, the Fibre Channel (FC) market is healthy. FC vendors are selling 8 million ports per year. The enterprise all-flash array market is driving FC infrastructure sales, and 32 Gb FC is shipping in volume. Indeed, DCIG’s research revealed 37 all-flash arrays that support 32 Gb FC connectivity.

Front-end connectivity is often the bottleneck in all-flash array performance, so doubling the speed of those connections can double the performance of the array. Beyond 32 Gb FC, the FCIA has already ratified the 64 Gb standard and is working on a 128 Gb FC standard. Consequently, FC has a long future in enterprise data centers.

FC-NVMe brings the benefits of NVMe-oF to Fibre Channel networks. FC-NVMe reduces protocol overhead, enabling GEN 5 (16 Gb FC) infrastructure to accomplish the same amount of work while consuming about half the CPU of standard FC.

Takeaway #4 – PCIe Will Not be Denied

All resources (CPU, memory, and flash storage) can connect with one another and communicate over PCIe. Further, using PCIe eliminates the overhead associated with introducing storage protocols (FC, InfiniBand, iSCSI, SCSI), since all these resources natively speak the PCIe protocol. With the PCIe 5.0 standard formally ratified in May 2019 and discussions about PCIe 6.0 underway, the future seems bright for the growing adoption of this protocol. Further, AMD and Intel have both thrown their support behind it.

Takeaway #5 – SAS Will Stubbornly Hang On

DCIG’s research finds that over 75% of AFAs support 12 Gb/s SAS today. This predominance makes the introduction of 24G a logical next step for these arrays. SAS is a proven, mature, and economical interconnect, and few applications can yet drive the performance limits of 12 Gb/s, much less the forthcoming 24G standard. Adding to the likelihood that 24G moves forward, the SCSI Trade Association (STA) reported that the recent 24G plug fest went well.

Editor’s Note: This blog entry was updated on August 9, 2019, to correct grammatical mistakes and add some links.

NVMe: Four Key Trends Set to Drive Its Adoption in 2019 and Beyond

Storage vendors hype NVMe for good reason. It enables all-flash arrays (AFAs) to fully deliver on flash’s performance characteristics. Already NVMe serves as an interconnect between AFA controllers and their back-end solid-state drives (SSDs), helping these AFAs unlock more of the performance that flash offers. However, the real performance benefits that NVMe can deliver will be unlocked by four key trends set to converge in the 2019/2020 time frame. Combined, these will open the door for many more companies to experience the full breadth of performance benefits that NVMe provides, across a much wider swath of the applications running in their environments.

Many individuals have heard about the performance benefits of NVMe. Using it, companies can reduce latency, with response times measured in a few hundred microseconds or less. Further, applications can leverage the many more channels that NVMe offers to drive throughput to hundreds of GBs per second and achieve millions of IOPS. These performance characteristics have many companies eagerly anticipating NVMe’s widespread availability.
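A back-of-the-envelope Little's Law calculation illustrates how deep command queues and microsecond latencies combine to produce those IOPS figures; the queue depths and latencies below are illustrative, not measurements of any particular array:

```python
# Little's Law: throughput = outstanding I/Os / per-I/O latency.
# Deep queues plus microsecond latencies are what yield millions of IOPS.

def iops(outstanding_ios: int, latency_seconds: float) -> float:
    """Steady-state IOPS for a device that keeps `outstanding_ios`
    requests in flight, each completing in `latency_seconds`."""
    return outstanding_ios / latency_seconds

# One queue of depth 128 at a 100-microsecond response time:
single_queue = iops(128, 100e-6)

# The same depth across 8 queues (e.g., one queue per CPU core):
eight_queues = iops(8 * 128, 100e-6)

print(f"One queue:    {single_queue:,.0f} IOPS")
print(f"Eight queues: {eight_queues:,.0f} IOPS")
```

Even a single moderately deep queue at 100-microsecond latency reaches seven-figure IOPS, which is why flash-era protocols emphasize queue count and depth.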

To date, however, few companies have experienced the full breadth of performance characteristics that NVMe offers. This stems from:

  • The lack of AFAs on the market that fully support NVMe (about 20%)
  • The relatively small performance improvement that NVMe offers over existing SAS-attached solid-state drives (SSDs)
  • The high level of difficulty and cost associated with deploying NVMe in existing data centers

This is set to change in the next 12-24 months as four key trends converge to open up NVMe to a much wider audience.

  1. Large storage vendors getting ready to enter the NVMe market. AFA providers such as Tegile (Western Digital), iXsystems, Huawei, Lenovo, and others ship products that support NVMe. These vendors represent the leading edge of where NVMe innovation has occurred. However, their share of the storage market remains relatively small compared to providers such as Dell EMC, HPE, IBM, and NetApp. As these large storage providers enter the market with AFAs that support NVMe, expect market acceptance and adoption of NVMe to take off.
  2. The availability of native NVMe drivers on all major operating systems. The only two major enterprise operating systems that currently have native NVMe drivers are Linux and VMware. However, until Microsoft and, to a lesser degree, Solaris offer native NVMe drivers, many companies will have to hold off on deploying NVMe in their environments. The good news is that all these major OS providers are actively working on NVMe drivers. Further, expect the availability of these drivers to closely coincide with the availability of NVMe AFAs from the major storage providers and the release of the NVMe-oF TCP standard.
  3. The NVMe-oF TCP protocol standard set to be finalized in 2018. Connecting the AFA controller to its backend SSDs via NVMe is only one half – and the much easier part – of solving the performance problem. The much larger and more difficult problem is connecting hosts to AFAs over existing storage networks, as NVMe-oF is currently difficult to set up and scale. The establishment of the NVMe-oF TCP standard will remedy this and facilitate the introduction and use of NVMe-oF between hosts and AFAs using TCP/IP over existing Ethernet storage networks.
  4. The general availability of NVMe-oF TCP offload cards. To realize the full performance benefits of NVMe-oF over TCP, companies are advised to use NVMe-oF TCP offload cards. Using standard Ethernet cards with no offload engine, companies will still see high throughput but very high CPU utilization (up to 50 percent). Using the forthcoming NVMe-oF TCP offload cards, performance increases by anywhere from 33 to 150 percent versus native TCP cards while introducing only nominal amounts of latency (single- to double-digit microseconds).

The business need for NVMe technology is real. While today’s all-flash arrays have tremendously accelerated application performance, NVMe stands poised to unleash another round of up to 10x or more performance improvements. But to do that, a mix of technologies, standards, and programming changes to existing operating systems must converge for mass adoption in enterprises to occur. This combination of events seems poised to happen in the next 12-24 months.

Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays

Both HPE and NetApp have multiple enterprise storage product lines. Each company also has a flagship product. For HPE it is the 3PAR StoreServ line. For NetApp it is the AFF (all flash FAS) A-Series. DCIG’s latest Pocket Analyst Report examines these flagship all-flash arrays. The report identifies many similarities between the products, including the ability to deliver low latency storage with high levels of availability, and a relatively full set of data management features.

DCIG’s Pocket Analyst Report also identifies six significant differences between the products. These differences include how each product provides deduplication and other data services, hybrid cloud integration, host-to-storage connectivity, scalability, and simplified management through predictive analytics and bundled or all-inclusive software licensing.

DCIG recently updated its research on the dynamic and growing all-flash array marketplace. In so doing, DCIG identified many similarities between the HPE 3PAR StoreServ and NetApp AFF A-Series products including:

  • Unified SAN and NAS protocol support
  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

DCIG also identified significant differences between the HPE and NetApp products including:

  • Hardware-accelerated Inline Data Services
  • Predictive analytics
  • Hybrid Cloud Support
  • Host-to-Storage Connectivity
  • Scalability
  • Licensing simplicity


DCIG’s 4-page Pocket Analyst Report on the Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays analyzes and compares the flagship all-flash arrays from HPE and NetApp. To see which product has the edge in each of the above categories and why, you can purchase the report on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available for no charge at some future time.

Data Center Challenges and Technology Advances Revealed at Flash Memory Summit 2018

If you want to get waist-deep in the technologies that will impact the data centers of tomorrow, the Flash Memory Summit 2018 (FMS) held this week in Santa Clara is the place to do it. This is where the flash industry gets its geek on and everyone on the exhibit floor speaks bits and bytes. However, there is no better place to learn about advances in flash memory that are sure to show up in products in the very near future and drive further advances in data center infrastructure.

Key themes at the conference include:

  • Processing and storing ever growing amounts of data is becoming more and more challenging. Faster connections and higher capacity drives are coming but are not the whole answer. We need to completely rethink data center architecture to meet these challenges.
  • Artificial intelligence and machine learning are expanding beyond their traditional high-performance computing research environments and into the enterprise.
  • Processing must be moved closer to—and perhaps even into—storage.

Multiple approaches to addressing these challenges were championed at the conference that range from composable infrastructure to computational storage. Some of these solutions will complement one another. Others will compete with one another for mindshare.

NVMe and NVMe-oF Get Real

For the near term, NVMe and NVMe over Fabrics (NVMe-oF) are clear mindshare winners. NVMe is rapidly becoming the primary protocol for connecting controllers to storage devices. A clear majority of product announcements involved NVMe.

WD Brings HDDs into the NVMe World

Western Digital announced the OpenFlex™ architecture and storage products. OpenFlex speaks NVMe-oF across an Ethernet network. In concert with OpenFlex, WD announced Kingfish™, an open API for orchestrating and managing OpenFlex™ infrastructures.

Western Digital is the “anchor tenant” for OpenFlex with a total of seventeen (17) data center hardware and software vendors listed as launch partners in the press release. Notably, WD’s NVMe-oF attached 1U storage device provides up to 168TB of HDD capacity. That’s right – the OpenFlex D3000 is filled with hard disk drives.

While NVMe’s primary use case is to connect to flash memory and emerging ultra-fast memory, companies still want their HDDs. Using Western Digital, organizations can have their NVMe and still get the lower-cost HDDs they want.

Gen-Z Composable Infrastructure Standard Gains Momentum

Gen-Z is an emerging memory-centric architecture designed for nanosecond latencies. Since last year’s FMS, Gen-Z has made significant progress toward this objective. Consider:

  • The consortium publicly released the Gen-Z Core Specification 1.0 on February 13, 2018. Agreement upon a set of 1.0 standards is a critical milestone in the adoption of any new standard. The fact that the consortium’s 54 members agreed to it suggests broad industry support.
  • Intel’s adoption of the SFF-TA-1002 “Gen-Z” universal connector for its “Ruler” SSDs reflects increased adoption of the Gen-Z standards. Making this announcement notable is that Intel is NOT currently a member of the Gen-Z consortium, which indicates that Gen-Z standards are gaining momentum even outside the consortium.
  • The Gen-Z booth included a working Gen-Z connection between a server and a removable DRAM module in another enclosure. This is the first example of a processor being able to use DRAM that is not local to the processor but is instead coming out of a composable pool. This is a concept similar to how companies access shared storage in today’s NAS and SAN environments.

Other Notable FMS Announcements

Many other innovative solutions to data center challenges were also announced at FMS 2018, including:

  • Solarflare NVMe over TCP enables rapid low-latency data movement over standard Ethernet networks.
  • ScaleFlux computational storage avoids the need to move the data by integrating FPGA-based computation into storage devices.
  • Intel announced its 660P Series of SSDs that employ quad-level cell (QLC) technology. QLC stores more data in less space and at a lower cost.


Based on the impressive progress we observed at Flash Memory Summit 2018, we can reaffirm the recommendations we made coming out of last year’s summit:

  • Enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.
  • Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.

Seven Key Differentiators between Dell EMC VMAX and HPE 3PAR StoreServ Systems

Dell EMC and Hewlett Packard Enterprise are enterprise technology stalwarts. Thousands of enterprises rely on VMAX or 3PAR StoreServ arrays to support their most critical applications and to store their most valuable data. Although VMAX and 3PAR predated the rise of the all-flash array (AFA), both vendors have adapted and optimized these products for flash memory. They deliver all-flash array performance without sacrificing the data services enterprises rely on to integrate with a wide variety of enterprise applications and operating environments.

While VMAX and 3PAR StoreServ can both support hybrid flash plus hard disk configurations, the focus is now on all-flash. Nevertheless, the ability of these products to support multiple tiers of storage will be advantageous in a future that includes multiple classes of flash memory along with other non-volatile storage class memory technologies.

Both the VMAX and 3PAR StoreServ can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements.

DCIG updated its research on all-flash arrays during the first half of 2018. In so doing, DCIG identified many similarities between the VMAX and 3PAR StoreServ products including:

  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with multiple virtualization management consoles
  • Rich data replication and data protection offerings
  • Support for a wide variety of client operating systems
  • Unified SAN and NAS

DCIG also identified significant differences between the VMAX and 3PAR StoreServ products including:

  • Data center footprint
  • High-end history
  • Licensing simplicity
  • Mainframe connectivity
  • Performance resources
  • Predictive analytics
  • Raw and effective storage density


To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares all-flash arrays from Dell EMC and HPE. This report is currently available for purchase on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.

Five Key Differentiators between the Latest NetApp AFF A-Series and Hitachi Vantara VSP F-Series Platforms

Every enterprise-class all-flash array (AFA) delivers sub-1-millisecond response times under standard 4K and 8K performance benchmarks, high levels of availability, and a relatively full set of core data management features. As such, enterprises must examine AFA products closely to determine what differentiates them. When DCIG compared the newest AFAs from leading providers Hitachi Vantara and NetApp in its latest DCIG Pocket Analyst Report, differences between them quickly emerged.

Both Hitachi Vantara and NetApp refreshed their respective F-Series and A-Series lines of all-flash arrays (AFAs) in the first half of 2018. In so doing, many of the similarities between the products from these providers persisted in that they both continue to natively offer:

  • Unified SAN and NAS interfaces
  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

However, the latest AFA product refreshes from each of these two vendors also introduced some key areas where they diverge. While some of these changes reinforced the strengths of their respective product lines, other changes provided key insights into how these two vendors see the AFA market shaping up. The result is some key differences in product functionality that will impact these products in the years to come.


To help enterprises select the solution that best fits their needs, there are five key ways that the latest AFA products from Hitachi Vantara and NetApp differentiate themselves from one another. These five key differentiators include:

  1. Data protection and data reduction
  2. Flash performance optimization
  3. Predictive analytics
  4. Public cloud support
  5. Storage networking protocols

To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares these newest all-flash arrays from Hitachi Vantara and NetApp. This report is currently available for sale on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.

Two Most Disruptive Storage Technologies at the NAB 2018 Show

The exhibit halls at the annual National Association of Broadcasters (NAB) show in Las Vegas always contain eye-popping displays highlighting recent technological advances as well as what is coming down the path in the world of media and entertainment. But behind NAB’s glitz and glamour lurks a hard, cold reality: every word recorded, every picture taken, and every scene filmed must be stored somewhere, usually multiple times, and be available at a moment’s notice. It was in these halls that DCIG identified two start-ups with storage technologies poised to disrupt business as usual.

Storbyte. While I was walking the floor at NAB, a tall, blond individual literally yanked me by the arm and asked if I had ever heard of Storbyte. Truthfully, the answer was no. This person turned out to be Steve Groenke, Storbyte’s CEO, and what ensued was a great series of conversations while at NAB.

Storbyte has come to market with an all-flash array. However, it took a very different approach to solving the problems of longevity, availability, and sustainable high write performance in SSDs and the storage systems built with them. What makes Storbyte so disruptive is that it meets the demand for extreme sustained write performance by slowing down flash, and it does so at a fraction of what other all-flash arrays cost.

In today’s all-flash designs, every flash vendor is actively pursuing high-performance storage. The approach they take is to maximize the bandwidth to each SSD, which means their systems must use PCIe-attached SSDs addressed via the new NVMe protocol.

Storbyte chose to tackle the problem differently. Its initial target customers had continuous, real-time capture and analysis requirements and routinely burned through the most highly regarded enterprise-class SSDs in about seven months. Two things killed NAND flash in these environments: heat and writes.

To address this problem, Storbyte reduces the heat and the number of writes that each flash module experiences by incorporating sixteen mSATA SSDs into each of its Eco*Flash SSDs. Further, Storbyte slows down the CPUs in each of the mSATA modules in its system and then wide-stripes writes across all of them. According to Storbyte, this requires only about 25% of the available CPU on each mSATA module, so the modules use less power. By also managing the writes, Storbyte simultaneously extends the life of each mSATA module in its Eco*Flash drives.
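Storbyte has not published its striping algorithm, but the wide-striping idea can be sketched with a minimal round-robin model; the module count matches the sixteen mSATA SSDs described above, while the chunk size and the class design are purely illustrative:

```python
from collections import Counter

class WideStripeWriter:
    """Deal write chunks round-robin across many flash modules so no
    single module absorbs a disproportionate share of writes or heat."""

    def __init__(self, module_count: int = 16, chunk_size: int = 4096):
        self.module_count = module_count
        self.chunk_size = chunk_size
        self.writes_per_module = Counter()
        self._next = 0  # index of the module that takes the next chunk

    def write(self, data: bytes) -> None:
        for offset in range(0, len(data), self.chunk_size):
            module = self._next % self.module_count
            # A real array would issue data[offset:offset+chunk_size]
            # to `module` here; this sketch only tallies the writes.
            self.writes_per_module[module] += 1
            self._next += 1

writer = WideStripeWriter()
writer.write(b"x" * (4096 * 160))  # a 640 KB write: 160 chunks

print(dict(writer.writes_per_module))
```

Because every module receives an equal share of the chunks, no single module absorbs the full write (and heat) load, which is the property that extends module life.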

The end result is a low-cost, high-performance, very dense, power-efficient all-flash array built from flash cards that rely on “older,” “slower,” consumer-grade mSATA flash memory modules, yet can drive 1.6 million IOPS from a 4U system. More notably, its systems cost about a quarter of competing “high performance” all-flash arrays while packing more than a petabyte of raw flash capacity into 4U of rack space and using less power than almost any other all-flash array.

Wasabi. Greybeards in the storage world may recognize the Wasabi name as a provider of iSCSI SANs. Right name, but different company. The new Wasabi recently came out of stealth mode as a low-cost, high-performance cloud storage provider. By low cost, we mean 1/5 the cost of Amazon’s slowest offering (Glacier), at 6x the speed of Amazon’s highest-performing S3 offering. In other words, you can have your low-cost cloud storage and eat it too.

What makes its offering so compelling is that it offers storage capacity at $4.99/TB per month. That’s it. No egress charges every time you download files. No complicated monthly statements to decipher to figure out how much you are spending and where. No costly storage architects to hire to figure out how to tier data to optimize performance and costs. This translates into one fast cloud storage tier at a much lower cost than the Big 3 (Amazon AWS, Google Cloud, and Microsoft Azure).

Granted, Wasabi is a cloud storage start-up, so there is an element of buyer beware. However, it is privately owned and well funded. It is experiencing explosive growth, with over 1,600 customers in just its first few months of operation. It anticipates raising another round of funding, and it already has data centers scattered across the United States and around the world, with more scheduled to open.

Even so, past horror stories about cloud providers shutting their doors give every company pause about using a relatively unknown quantity to store its data. For these cases, Wasabi recommends that companies use its solution as their secondary cloud.

Its cloud offering is fully S3-compatible, and most companies want a cloud alternative anyway. In this instance, store copies of your data with both Amazon and Wasabi. Once stored, run any queries, production workloads, and the like against the Wasabi cloud. The Amazon egress charges that your company avoids by accessing its data on the Wasabi cloud will more than justify the risk of storing the data you routinely access on Wasabi. Then, in the unlikely event Wasabi does go out of business (not that it has any plans to do so), companies still have a copy of their data with Amazon that they can fail back to.
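To see why the avoided egress charges dominate the math, consider a rough cost sketch. The only figure taken from this article is Wasabi's $4.99/TB-month rate; the S3-style storage and egress rates, and the capacity figures, are hypothetical placeholders, not quoted AWS prices:

```python
def monthly_cost(stored_tb, egressed_tb, storage_rate, egress_rate):
    """Monthly bill: capacity charge plus data-transfer-out charge,
    both expressed in dollars per TB."""
    return stored_tb * storage_rate + egressed_tb * egress_rate

STORED_TB = 500    # data parked in the cloud (illustrative)
EGRESSED_TB = 200  # data read back out each month (illustrative)

# Wasabi: $4.99/TB-month, no egress fees (per the article).
wasabi = monthly_cost(STORED_TB, EGRESSED_TB, 4.99, 0.0)

# Hypothetical S3-style tier: ~$23/TB-month storage plus ~$90/TB
# egress. These are illustrative placeholders, not quoted prices.
s3_style = monthly_cost(STORED_TB, EGRESSED_TB, 23.0, 90.0)

print(f"Wasabi:   ${wasabi:,.2f}/month")
print(f"S3-style: ${s3_style:,.2f}/month")
```

For read-heavy workloads the egress line item dwarfs the storage line item, which is the heart of Wasabi's pitch: route routine reads to the no-egress tier.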

This argument seems to resonate well with prospects. While I could not substantiate these claims, Wasabi said it saw multi-petabyte deals coming its way on the NAB show floor. By using Wasabi instead of Amazon in the use case just described, these companies can save hundreds of thousands of dollars per month in avoided Amazon egress charges while mitigating the risk of using a start-up cloud provider such as Wasabi.

Editor’s Note: The spelling of Storbyte was corrected on 4/24.

NVMe: Setting Realistic Expectations for 2018

Non-volatile Memory Express (NVMe) has captured the fancy of the enterprise storage world. Implementing NVMe on all-flash arrays or hyper-converged infrastructure appliances carries with it the promise that companies can leverage these solutions to achieve sub-millisecond response times, drive millions of IOPS, and deliver real-time application analytics and transaction processing. But differences persist between what NVMe promises for these solutions and what they can deliver. Here is a practical look at what NVMe delivers on these solutions in 2018.

First and foremost, NVMe is an exciting and needed breakthrough for delivering on the performance characteristics of flash. Unlike the SCSI protocol it replaces, which was designed and implemented with mechanical hard disk drives (HDDs) in mind, NVMe comes to market intended for use with today’s flash-based systems. In fact, as evidence of the biggest difference between SCSI and NVMe, NVMe cannot even interface with HDDs. NVMe is intended to speak flash.

As part of speaking flash, NVMe no longer concerns itself with the limitations of mechanical HDDs. By way of example, an HDD can only handle one command at a time. Whether it is a read or a write, the entire HDD is committed to completing that one command before it can start processing the next one, and it has only one channel delivering commands to it.

The limits of flash, and by extension NVMe, are exponentially higher. NVMe can support 65,535 queues into the flash media and stack up to 64,000 commands per queue. In other words, over 4 billion commands can theoretically be issued to a single flash device at any time.
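The queue arithmetic behind that 4 billion figure is easy to verify:

```python
# NVMe queue limits as cited above: 65,535 I/O queues,
# each holding up to 64,000 outstanding commands.
QUEUES = 65_535
COMMANDS_PER_QUEUE = 64_000

total_outstanding = QUEUES * COMMANDS_PER_QUEUE
print(f"{total_outstanding:,} commands in flight at once")

# Versus the SCSI-era HDD model of a single outstanding command:
assert total_outstanding > 4_000_000_000
```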

Of course, just because NVMe can support over 4 billion commands does not mean that any product or application currently comes close to doing so. Should they ever do so (and they probably will at some point), it is plausible that published IOPS numbers might reach the tens or hundreds of millions. But as of early 2018, everyone must still develop and mature their infrastructure and applications to support that type of throughput. Further, NVMe as a protocol must continue to mature its interface to support those kinds of workloads.

So as of early 2018, here is what enterprises can realistically expect from NVMe:

1. If you want NVMe on your all-flash array, you have a short list from which to choose. NVMe-capable all-flash arrays that have NVMe interfaces to all back-end SSDs are primarily available from Dell EMC, Huawei, Pivot3, Pure Storage, and Tegile. All-flash arrays that support NVMe remain in the minority: only 18% of the 100+ all-flash arrays that DCIG evaluated support NVMe connectivity to all back-end SSDs.


The majority of AFAs currently shipping support a 3, 6, or 12 Gb SAS interface to their backend flash media for good reason: few applications can take full advantage of NVMe’s capabilities. As both applications and NVMe mature, expect the number of AFAs that support NVMe to increase.

2. The connectivity between your servers and shared storage arrays will likely remain the same in 2018. Enterprises using NAS protocols such as CIFS or NFS, or SAN protocols such as FC or iSCSI, should expect to continue doing so through 2018 and probably for the next few years. While new standards such as NVMe-oF are emerging and deliver millions of IOPS when implemented, as evidenced by early solutions from providers such as E8 Storage, NVMe is not yet well suited to act as a shared storage protocol between servers and AFAs. For now, NVMe remains best suited for communication between storage array controllers and their backend flash media, or on servers that have internal flash drives. Using NVMe for other use cases in enterprise environments is, at this point, premature.

3. NVMe is a better fit for hyper-converged infrastructure solutions than AFAs for now. Enterprises expecting a performance boost from NVMe will likely see one whether they deploy it in hyper-converged infrastructure or AFA solutions. However, enterprises must connect to AFAs using the existing storage protocols listed above. Conversely, applications running on hyper-converged infrastructure solutions that support NVMe may see better performance than those running on AFAs. With AFAs, protocol translation to NAS or SAN must still occur over the storage network to reach the NVMe-enabled AFA. Hyper-converged infrastructure solutions negate the need for this additional protocol conversion.

4. NVMe will improve performance, but verify your applications are ready. Stories about the performance improvements that NVMe offers are real and validated in the real world. However, these same users also find that some of their applications using these NVMe-based all-flash arrays are not getting the full benefit they expected because, in part, the applications cannot handle the performance. Some users report discovering that their applications have wait times built into them because they were designed to work with slower HDDs. Until the applications themselves are updated to account for AFAs by removing or minimizing those preconfigured wait times, the applications may become the new choke point that prevents enterprises from reaping the full performance benefits NVMe has to offer.
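A hypothetical sketch shows how a wait loop tuned for HDD-era latencies becomes the choke point; the specific wait and latency values below are illustrative, not drawn from any particular application:

```python
# Hypothetical illustration: an application whose I/O loop sleeps a
# fixed 5 ms between operations (a wait sized for HDD seek times)
# caps its own throughput no matter how fast the array underneath is.

def effective_iops(device_latency_s: float, built_in_wait_s: float) -> float:
    """IOPS of a serial loop: each I/O costs the device latency plus
    the application's hard-coded wait before the next request."""
    return 1.0 / (device_latency_s + built_in_wait_s)

HDD_LATENCY = 5e-3     # ~5 ms for a mechanical seek plus rotation
NVME_LATENCY = 100e-6  # ~100 microseconds on an NVMe-based AFA
LEGACY_WAIT = 5e-3     # the wait the application shipped with

print(f"HDD  + legacy wait: {effective_iops(HDD_LATENCY, LEGACY_WAIT):,.0f} IOPS")
print(f"NVMe + legacy wait: {effective_iops(NVME_LATENCY, LEGACY_WAIT):,.0f} IOPS")
print(f"NVMe, wait removed: {effective_iops(NVME_LATENCY, 0.0):,.0f} IOPS")
```

With the legacy wait in place, swapping in a far faster array barely moves the needle; removing the wait is what unlocks the order-of-magnitude gain.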

NVMe is almost without doubt the future for communicating with flash media. But in early 2018, enterprises need to set realistic expectations as to how much of a performance boost NVMe will provide when deployed. Sub-millisecond response times are certainly a realistic expectation, but since many SAS-based arrays already achieve that metric, they are hardly enough by themselves to justify the added expense of an NVMe array. Further, once an enterprise commits to using NVMe, it also commits to using only flash media, since NVMe provides no option to interface with HDDs.

All-inclusive Licensing is All the Rage in All-flash Arrays

Early in my IT career, a friend who owns a software company told me he had been informed by a peer that he wasn’t charging enough for his software. This peer advised him to adopt a “flinch-based” approach to pricing. He said my friend should start with a base licensing cost that meets margin requirements, and then keep adding on other costs until the prospective customer flinches. My friend found that approach offensive, and so do I. I don’t know how common the “flinch-based” approach is, but as a purchaser of technology goods and services I learned to flinch early and often. I was reminded of this “flinch-based” approach when evaluating some traditional enterprise storage products. Every capability was an extra-cost “option”: each protocol, each client connection, each snapshot feature, each integration point. Happily, this a-la-carte approach to licensing is becoming a thing of the past as vendors embrace all-inclusive licensing for their all-flash array products.

The Trend Toward All-inclusive Licensing in All-Flash Arrays

In the process of updating DCIG’s research on all-flash arrays, we discovered a clear trend toward all-inclusive software feature licensing. This trend was initiated by all-flash array startups. Now even the largest traditional vendors are moving toward all-inclusive licensing. HPE made this change in 2017 for its 3PAR StoreServ products. Now Dell EMC is moving this direction with its all-flash Unity products.

Drivers of All-inclusive Licensing in All-Flash Arrays

Competition from storage startups has played an important role in moving the storage industry toward all-inclusive software feature licensing. Some startups embraced all-inclusive licensing because they knew prospective customers were frustrated by the a-la-carte approach. Others, such as Tegile, embraced all-inclusive licensing from the beginning because many of the software features were inherent to the design of their storage systems. Whatever the motivation, the availability of all-inclusive software feature licensing from these startups put pressure on other vendors to adopt the approach.

Technology advances are also driving the movement toward all-inclusive licensing. Advances in multi-core, multi-gigahertz CPUs from Intel make it practical to incorporate features such as in-line compression and in-line deduplication into storage systems. These in-line data efficiency features are a good fit with the wear and performance characteristics of NAND-flash, and help to reduce the overall cost and data center footprint of an all-flash array.

The Value of All-inclusive Licensing for All-Flash Array Adopters

All-inclusive licensing is one of the five features that contribute to delivering simplicity on all-flash arrays. Vendors that include all software features fully licensed as part of the standard array package create extra value for purchasers by reducing the number of decision points in the purchasing process and smoothing the path to full utilization of the array’s capabilities.

All-inclusive licensing enables agility. Separate license fees for software features reduce the agility of the IT department in responding to changing business requirements because the ordering and purchasing processes add weeks or even months to the implementation process. All-inclusive licensing eliminates that purchasing delay.

The Value of All-inclusive Licensing for All-flash Array Vendors

All-inclusive licensing translates to more sales. Each decision point during the purchase process slows down the process and creates another opportunity for a customer to say, “No.” All-inclusive licensing smooths the path to purchase. Since all-inclusive licensing also fosters full use of the product’s features and the value customers derive from the product, it should also smooth the path to follow-on sales.

Happier engineers. This benefit may be more abstract, but the best engineers want what they create to actually get used and make a difference. All-inclusive licensing makes it more likely that the features engineers create actually get used.

Bundles May Make Sense for Legacy Solutions

Based on the rationale described above, all-inclusive software feature licensing provides a superior approach to creating value in all-flash arrays. But for vendors seeking to transition from an a-la-carte model, bundles may be a more palatable approach. Bundles enable the vendor to offer some of the benefits of true all-inclusive licensing to new customers without offending existing customers. In cases where a feature depends on technology licensed from another vendor, bundling also offers a way to pass third-party licensing costs through to the customer.

Vendors that offer all-inclusive software feature licenses or comprehensive bundles add real value to their all-flash array products, and deserve priority consideration from organizations seeking maximum value, simplicity and agility from their all-flash array purchase.


Data Center Efficiency, Performance, Scalability: How Dell EMC XtremIO, Pure Storage Flash Arrays Differ

Latest DCIG Pocket Analyst Report Compares Dell EMC XtremIO and Pure Storage All-flash Product Families

Hybrid and all-disk arrays still have their place in enterprise data centers but all-flash arrays are “where it’s at” when it comes to hosting and accelerating the performance of production applications. Once reserved only for applications that could cost-justify these arrays, continuing price erosion in the underlying flash media coupled with technologies such as compression and deduplication have put these arrays at a price point within reach of almost any size enterprise. As that occurs, flash arrays from Dell EMC XtremIO and Pure Storage are often on the buying short lists for many companies.

When looking at all-flash arrays, it is easy to fall into the trap of assuming they are all created equal. While it can truthfully be said that every all-flash array will outperform any of its all-disk or hybrid storage array predecessors, there can be significant differences in how effectively and efficiently each one delivers that performance.

Consider product families from leaders in the all-flash array market: Dell EMC XtremIO and Pure Storage. When you look at their published performance specifications, they both scale to offer hundreds of thousands of IOPS, achieve sub one millisecond response times, and offer capacity optimization features such as compression and deduplication.

It is only when you start to pull back the covers on these two respective product lines that substantial differences between them start to emerge such as:

  • Their data center efficiency in areas such as power consumption and data center footprint
  • How much flash capacity they can ultimately hold
  • What storage protocols they support

This recently published 4-page DCIG Pocket Analyst Report analyzes these attributes and others on all-flash arrays from these two providers. It examines how well their features support these key data center considerations and includes analyst commentary on which product has the edge in these specific areas. This report also contains a feature comparison matrix to support this analysis.

This report provides the key insight in a concise manner that enterprises need to make the right choice in an all-flash array solution for the rapidly emerging all-flash array data center. This report may be purchased for $19.95 at TechTrove, a new third-party site that hosts and makes independently developed analyst content available for sale.

All-flash data centers are coming, and with every all-flash array providing higher levels of performance than previous generations of storage arrays, enterprises need to examine key underlying features that go deeper than simply how fast the arrays perform. Their underlying architecture, the storage protocols they support, and the software they use to deliver these features all impact how effective and efficient the array will be in your environment. This DCIG Pocket Analyst Report makes plain some of the key ways that the all-flash arrays from Dell EMC and Pure Storage differentiate themselves from one another. Follow this link to purchase this report.

Author’s Note: The link to the DCIG Pocket Analyst Report comparing the Dell EMC XtremIO and Pure Storage FlashArrays was updated and correct at 12:40 pm CT on 10/18/2017 to point to the correct page on the TechTrove website. Sorry for any confusion!

A Business Case for ‘Doing Something’ about File Data Management

The business case for organizations with petabytes of file data under management to classify and then place it across multiple tiers of storage has never been greater. By distributing this data across disk, flash, tape and the cloud, they stand to realize significant cost savings. The catch is finding a cost-effective solution that makes it easier to administer and manage file data than simply storing it all on flash storage. This is where a solution such as what Quantum now offers comes into play.

Organizations love the idea of spending less money on primary storage – especially when they have multiple petabytes of file data residing on flash storage. Further, most organizations readily acknowledge that much of the file data residing on flash storage could reside on lower-cost, lower-performing media such as disk, the cloud, or even tape with minimal to no impact on business operations, provided infrequently or never accessed files can still be retrieved relatively quickly and easily when required.

The problem they encounter is that the “cure” of file data management is worse than the “disease” of inaction. Their concerns focus on the file data management solution itself. Specifically, can they easily implement and then effectively use it in such a way that they derive value from it in both the short and long term? This uncertainty over whether implementing a file data management solution will actually prove easier than the status quo of “doing nothing” prompts organizations to do exactly that: nothing.

Quantum, in partnership with DataFrameworks and its ClarityNow! software, gives companies new motivation to act. Other data management and archival solutions give companies the various parts and pieces that they need to manage their file data. However, they leave it up to the customer and their integrators and/or consultants to implement it.

Quantum and DataFrameworks differ in that they offer the integrated, turnkey, end-to-end solution that organizations need to have the confidence to proceed. Quantum has integrated the DataFrameworks ClarityNow! software with its Xcellis scale-out storage and Artico archive gateway products to put companies on a fast track to effective file data management.

Source: Quantum

The Xcellis scale-out storage product was added to the Quantum product portfolio in 2015. Yet while the product is relatively new, the technology it uses is not – it bundles a server and storage with Quantum’s StorNext advanced data management software which has existed for years. Quantum packages it with its existing storage products to create an appliance-based solution for faster, more seamless deployments in organizations. Then, by giving organizations the option to include the DataFrameworks ClarityNow! software as part of the appliance, organizations get, in one fell swoop, the file data classification and management features they need in an appliance-based offering.

To give organizations a full range of cost-effective storage options, Quantum enables them to store data to the cloud, other disk storage arrays, and/or tape. As individuals store file data on the Xcellis scale-out storage and files age and/or become inactive, the ClarityNow! software recognizes these traits and others to proactively copy and/or move files to another storage tier. Alternatively, the Artico archive gateway can be used in a NAS environment to move files onto the appropriate tier or tiers of storage based on preset policies.

It should be noted that this solution particularly makes sense in environments that have at least a few petabytes of data, and potentially tens or even hundreds of petabytes of file data, under management. Only when an organization has this amount of file data under management does it make sense to proceed with a robust file data management solution backed by enterprise IT infrastructure such as what Quantum offers.

It is time for organizations that have seen their file data stores swell to petabyte levels, yet are still doing nothing, to re-examine that position. Quantum, with its Xcellis scale-out storage solution and its integration with DataFrameworks ClarityNow!, has taken significant strides to make it easier than ever for organizations to deploy the type of file data management solution they need and derive the value they expect. In so doing, organizations can finally see the benefits of “doing something” to bring the costs and headaches associated with file data management under control as opposed to simply “doing nothing.”

To subscribe and receive regular updates like this from DCIG, follow this link to subscribe to DCIG’s newsletter.

Note: This blog entry was originally published on June 28, 2017.

Four Flash Memory Trends Influencing the Development of Tomorrow’s All-flash Arrays

The annual Flash Memory Summit is where vendors reveal to the world the future of storage technology. Many companies announced innovative products and technical advances at last week’s 2017 Flash Memory Summit that give enterprises a good understanding of what to expect from today’s all-flash products as well as a glimpse into tomorrow’s. These previews into the next generation of flash products revealed four flash memory trends sure to influence the development of the next generation of all-flash arrays.

Flash Memory Trend #1: Storage class memory is real, and it is really impressive. Storage class memory (SCM) is a term applied to several different technologies that share two important characteristics. Like flash memory, storage class memory is non-volatile. It retains data after the power is shut off. Like DRAM, storage class memory offers very low latency and is byte-addressable, meaning programs can read and write it directly, as they do DRAM, rather than in blocks. Together, these characteristics enable greater-than-10x improvements in system and application performance.
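As a rough illustration of what byte-addressability means in practice, the sketch below simulates a persistent byte-addressable region with a memory-mapped file. This is an assumption-laden stand-in: real SCM (for example, exposed to applications via DAX) maps the media itself into the address space, but the programming model of byte-granular loads and stores is similar:

```python
import mmap
import os
import tempfile

# Simulate a 4 KB region of persistent, byte-addressable media with a file.
# (Illustrative only; a real SCM device would be mapped directly, not copied
# through the page cache like an ordinary file.)
path = os.path.join(tempfile.mkdtemp(), "scm_region")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[100] = 0x42        # byte-granular store: no 4 KB block read-modify-write
    value = region[100]       # byte-granular load, as if reading DRAM
    region.flush()            # analogous to a persistence barrier
    region.close()

# After reopening (standing in for a power cycle), the single byte persists.
with open(path, "rb") as f:
    persisted = f.read()[100]
```

The key contrast with flash behind a block interface is that a one-byte update does not require rewriting an entire block, which is what makes SCM usable both as storage and as an extension of memory.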

Two years ago, Intel and Micron rocked the conference with the announcement of 3D XPoint storage class memory. In the run up to this year’s Flash Memory Summit, Intel announced both consumer and enterprise SSDs based on 3D XPoint technology under the Optane brand. These products are shipping now for $2.50 to $5.00 per GB. Initial capacities are reminiscent of 10K and 15K enterprise hard drives. SCM-based SSDs outperform flash memory SSDs in terms of consistent low latency and high bandwidth.

Screen shot of Everspin nvNITRO bandwidth

Other storage class memory technologies also moved out of the lab and into products. Everspin announced 1 Gb MRAM chips, quadrupling the density of last year’s 256 Mb chip. Everspin demonstrated the performance of a single ST-MRAM SSD in a standard desktop PC. The nvNITRO PCIe card achieved a sustained write bandwidth of 5.8 GB/second and nearly 1.5 million IOPS. Everspin nvNITRO cards are available in 1 GB and 2 GB capacities today, with 16 GB PCIe cards expected by the end of the year.

CROSSBAR announced that it has licensed its ReRAM technology to multiple memory manufacturers. CROSSBAR displayed sample wafers that were produced by two different licensees. Products based on the technology are in development.

DRAM and flash memory will continue to play important roles for the foreseeable future. Nevertheless, each type of SCM enables the greater-than-10x improvements in performance that inspire new system designs. In the near term, storage class memory will be used as a cache, a write buffer, or as a small pool of high performance storage for database transaction logs. In some cases it will also be used as an expanded pool of system memory. SCM may also replace DRAM in many SSDs.

NAND Roadmap

Flash Memory Trend #2: There is still a lot of room for innovation in flash memory. Every flash memory manufacturer announced advances in flash memory technology. Manufacturers provided roadmaps showing that flash memory will be the predominant storage technology for years to come.

Samsung’s keynote presenter brandished the 32 TB 2.5” SSD it announced at the conference. This doubled the 16 TB capacity Samsung announced on the same stage just one year ago. Although the presenter was rightly proud of the achievement, the response of the audience was muted, even mild. I hope our response wasn’t discouraging; but frankly, we expected Samsung to pull this off. The presenter reaffirmed our expectations by telling us that Samsung will continue this pace of advancement in NAND flash for at least the next five years.

Flash Memory Trend #3: NVMe and NVMe-oF are important steps on the path to the future. NVMe is the new standard protocol for talking to flash memory and SCM-based storage. It appears that every enterprise vendor is incorporating NVMe into its products. The availability of dual-ported NVMe SSDs from multiple suppliers helped to hasten the transition to NVMe in enterprise storage systems, as will the hot-swap capability for NVMe SSDs announced at the event.

NVMe-over-Fabrics (NVMe-oF) is the new standard for accessing storage across a network. Pure Storage recently announced the all-NVMe FlashArray//X. At FMS, AccelStor announced its second-generation all-NVMe AccelStor NeoSapphire H810 array. E8 Storage and Kaminario also announced NVMe-based arrays.

Micron discussed its Solid Scale scale-out all-flash array with us. Solid Scale is based on Micron’s new NVMe 9200 SSDs and Excelero’s NVMesh software. NVMesh creates a server SAN using the same underlying technology as NVMe-oF. In the case of Solid Scale, the servers are dedicated storage nodes.

Other vendors told us about their forthcoming NVMe and NVMe-oF arrays. In every case, these products will deliver substantial improvements in latency and throughput compared to existing all-flash arrays, and should deliver millions of IOPS.

Gen-Z Concept Chassis

Flash Memory Trend #4: The future is data centric, not processor centric. Ongoing advances in flash memory and storage class memory are vitally important, yet they introduce new challenges for storage system designers and data center architects. Although NVMe over PCIe can deliver 10x improvements in some storage metrics, PCIe is already a bottleneck that limits overall system performance.

We ultimately need a new data access technology, one that will enable much higher performance. Gen-Z promises to be exactly that. Gen-Z is “an open systems interconnect that enables memory access to data and devices via direct-attached, switched, or fabric topologies. This means Gen-Z will allow any device to communicate with any other device as if it were communicating with its local memory.”

Barry McAuliffe (HPE) and Kurtis Bowman (Dell EMC)

I spent a couple hours with the Gen-Z Consortium folks and came away impressed. The consortium is working to enable a composable infrastructure in which every type of performance resource becomes a virtualized pool that can be allocated to tasks as needed. The technology was ready to be demonstrated in an FPGA-based implementation, but a fire in the exhibit hall prevented access. Instead, we saw a conceptual representation of a Gen-Z based system.

The Gen-Z Consortium is creating an open interconnect technology on top of which participating organizations can innovate. There are already more than 40 participating organizations including Dell EMC, HPE, Huawei, IBM, Broadcom and Mellanox. I found it refreshing to observe staff from HPE (Barry McAuliffe, VP and Secretary of Gen-Z) and Dell EMC (Kurtis Bowman, President of Gen-Z) working together to advance this data centric architecture.

Implications of These Flash Memory Trends for Enterprise IT

Vendors are shipping storage class memory products today, with more to come by the end of the year. Flash memory manufacturers continue to innovate, and will extend the viability of flash memory as a core data center technology for at least another five years. NVMe and NVMe-oF are real today, and are key technologies for the next generation of storage systems.

Enterprise technologists should plan 2017 through 2020 technology refreshes around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.

Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.

Software-defined Data Centers Have Arrived – Sort of

Today, more than ever, organizations are looking to move to software-defined data centers. Whether they adopt software-defined storage, networking, computing, servers, security, or all of them as part of this initiative, they are starting to conclude that a software-defined world trumps the existing hardware-defined one. While I agree with this philosophy in principle, organizations need to carefully dip a toe into the software-defined waters rather than dive in head-first.

The concept of software-defined data centers is really nothing new. This topic has been discussed for decades and was the subject of one of the first articles I ever published 15 years ago (though the technology was more commonly called virtualization at that time). What is new, however, is the fact that the complementary, supporting set of hardware technologies needed to enable the software-defined data center now exists.

More powerful processors, higher capacity memory, higher bandwidth networks, scale-out architectures, and other technologies have each contributed, in part, to making software-defined data centers a reality. The recent availability of solid state drives (SSDs) was perhaps the technology that ultimately enabled this concept to move from the drawing board into production. SSDs reduce data access times from milliseconds to microseconds, helping to remove one of the last remaining performance bottlenecks to making software-defined data centers a reality.

Yet as organizations look to replace their hardware defined infrastructure with a software-defined data center, they must still proceed carefully. Hardware defined infrastructures may currently cost a lot more than software-defined data centers but they do offer distinct benefits that software-defined solutions currently are still hard-pressed to match.

For instance, the vendors who offer the purpose-built appliances for applications, backup, networking, security, or storage used in hardware defined infrastructures typically provide hardware compatibility lists (HCLs). Each HCL names the applications, operating systems, firmware, etc., for which the appliance is certified to interact with and which the vendor will provide support. Deviate from that HCL and your ability to get support suddenly gets sketchy.

Even HCLs are problematic due to the impossibly large number of possible configurations that exist in enterprise environments which vendors can never thoroughly vet and test.

This has led to the emergence of converged infrastructures. Using these, vendors guarantee that all components in the stack (applications, servers, network, and storage along with their firmware and software) are tested and certified to work together. So long as organizations use the vendor approved and tested hardware and software component in this stack and keep them in sync with the vendor specifications, they should have a reliable solution.

Granted, obtaining solutions that satisfy these converged infrastructure requirements costs more. But for many enterprises, paying the premium is worth it. This testing helps to eliminate situations such as one I experienced many years ago.

We discovered in the middle of a system wide SAN upgrade that a FC firmware driver on all the UNIX systems could not detect the LUNs on the new storage systems. Upgrading this driver required us to spend nearly two months with individuals coming in every weekend to apply this fix across all these servers before we could implement and use the new storage systems.

Software-defined data centers may still encounter these types of problems. Even though the software itself may work fine, it cannot account for all the hardware in the environment or guarantee interoperability with it. Further, since software-defined solutions tend to go into low-cost and/or rapidly changing environments, there is a good possibility that the HCLs and/or converged solutions they do offer are limited in scope and may not have been subjected to the extensive testing that production environments require.

The good news is that software-defined data centers are highly virtualized environments. As such, copies of production environments can be made and tested very quickly. This flexibility mitigates the dangers of creating unsupported, untested production environments. It also provides organizations an easier, faster means to fail back to the original configuration should the new configuration not work as expected.

But here’s the catch. While software-defined data centers provide flexibility, someone must still possess the skills and knowledge to make the copies, perform the tests, and do the failbacks and recoveries if necessary. Further, software-defined data centers eliminate neither their reliance on underlying hardware components nor the individuals who create and manage them.

Because interoperability with the hardware is not a given, and people are known to be unpredictable and/or unreliable from time to time, the whole system could go down or function unpredictably without a clear path to resolution. Further, if one encounters interoperability issues initially or at some point in the future, the situation may get thornier. Organizations may have to ask and answer questions such as:

  1. When the vendors start finger pointing, who owns the problem and who will fix it?
  2. What is the path to resolution?
  3. Who has tested the proposed solution?
  4. How do you back out if the proposed solution goes awry?

Software-defined data centers are rightfully creating a lot of buzz, but they are still not the be-all and end-all. While the technology now exists at all levels of the data center to make this architecture practical to deploy and to let companies realize significant hardware savings in their data center budgets, the underlying best practices and support needed to successfully implement software-defined data centers are still playing catch-up. Until those are fully in place, or you have full assurances of support from a third party, organizations are advised to proceed with caution on any software-defined initiative, data center or otherwise.

Get Ready for More Features and Still Lower All-flash Array Storage Prices

While the overall economy and even the broader technology sector largely boom, the enterprise storage space is feeling the pinch. As storage revenues level off and even drop, many people with whom I spoke at this past week’s HPE Discover 2017 event shared their thoughts as to what is causing this situation. The short answer: there does not appear to be a single reason for the pullback in storage revenue but rather a perfect storm of events that is contributing to this situation. The good news is that this retrenching should ultimately benefit end-users.

I had a chance to stop by the HPE Discover 2017 event this past week in Las Vegas to catch up with many of the individuals in the industry that I know to get their “boots-on-the-ground” perspective on what they are hearing and seeing. Here are some of the thoughts they had to share:

  1. Too many all-flash storage players. The number of companies selling enterprise flash storage products is staggering. Aside from the “big” names in the technology space such as Dell EMC, HDS, HPE, IBM, and NetApp offering flash storage solutions, there are many others including Tegile, Kaminario, Pure Storage, iXsystems, StorTrends, Fujitsu, NEC, Nimbus Data, Tintri, and Nexsan, just to name a few. Further, that list does not include the ones that were recently acquired (Nimble Storage and Solidfire) nor does it fully take into account the multiple lines of all-flash arrays from the big technology players. For example, HPE has all-flash arrays in its XP, 3PAR StoreServ, StoreVirtual, Nimble Storage and MSA product lines.
  2. A race to the bottom. This much competition across so many product lines inevitably leads to price erosion. Cloud storage providers (Amazon S3, Google Drive, Microsoft Azure) are not the only ones experiencing a race to the bottom in per-GB pricing. The number of all-flash array competitors is causing similar downward pricing pressure on all-flash arrays.
  3. Inability to differentiate. Keeping track of the large number of vendors coupled with the large number of all-flash array models can challenge even the most astute technologist. Now try to explain how they differ and what advantages one offers over another. For instance, I was uncertain as to exactly why HPE was so interested in acquiring Nimble Storage when it already had multiple storage lines. Turns out, it was Nimble’s Infosight technology, its advanced integration with Docker, and its Cloud Volumes feature that piqued HPE’s interest in acquiring Nimble Storage. Now HPE just needs to communicate those differentiators to the marketplace and adopt that technology across its other product lines.
  4. Growth of hyperconverged infrastructure and software-defined storage technologies. One of the more difficult factors to measure is to what degree hyperconverged infrastructure and software-defined storage solutions are impacting traditional storage sales. Even now, storage vendors tell me that they rarely encounter vendors of these products in their sales process. However, as these technologies get a foothold in organizations and take root, are they robbing storage vendors of future storage sales? My gut tells me yes but this is largely anecdotal evidence.

The primary factor working against lower all-flash array prices is a tight supply of flash. While not cited as a major contributor, individuals did share with me that some storage shipments have slipped due to tight supplies of flash. As such, product sales that vendors expected to make in a specific quarter get pushed out due to the unavailability or late delivery of flash components. These same individuals stress that they are working with their suppliers to correct this situation, and no one cited any specific supplier as the cause of the problem (possibly because they want to stay on their good side). However, the fact that I heard this cited as a contributing factor to the storage sales slowdown from multiple sources suggests that until suppliers increase their flash production levels, the shortage may negatively impact enterprise storage sales and cause higher prices for end-users.

This combination of factors, among others, is having the cumulative effect of slowing storage sales and eroding prices. However, end-users may, and probably should, view these factors as working hugely in their favor. While they may not relish the confusion and time it takes to sort through products to find the right all-flash array, nor delays in receiving the products they order, those that take the time to compare products and get competitive bids will likely obtain a model that very closely matches their needs at a price that meets or comes in below budget.

Deduplicate Differently with Leading Enterprise Midrange All-flash Arrays

If you assume that leading enterprise midrange all-flash arrays (AFAs) support deduplication, your assumption would be correct. But if you assume that these arrays implement and deliver deduplication’s features in the same way, you would be mistaken. These differences in deduplication should influence any all-flash array buying decision as deduplication’s implementation affects the array’s total effective capacity, performance, usability, and, ultimately, your bottom line.

The availability of deduplication technology on all leading enterprise midrange AFAs comes as a relief to many organizations. The raw price per GB of AFAs often precludes organizations from deploying them in their environment. However, deduplication enables organizations to deploy AFAs more widely, since it may increase an AFA’s total effective capacity by 2-3x over its total usable capacity.
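
To see how a deduplication ratio translates into effective capacity and effective price per GB, here is a minimal arithmetic sketch. The capacity and pricing figures are hypothetical, chosen only to illustrate the 2-3x effect described above; they are not quotes for any vendor’s array.

```python
# Illustrative only: how a deduplication ratio translates into effective
# capacity and effective price per GB. All figures are hypothetical.

def effective_capacity_gb(usable_gb, dedupe_ratio):
    """Effective capacity = usable capacity x deduplication ratio."""
    return usable_gb * dedupe_ratio

def effective_price_per_gb(array_cost, usable_gb, dedupe_ratio):
    """Cost spread across the deduplicated (effective) capacity."""
    return array_cost / effective_capacity_gb(usable_gb, dedupe_ratio)

# A hypothetical 50 TB (usable) AFA priced at $100,000:
usable, cost = 50_000, 100_000   # GB, USD
for ratio in (1.0, 2.0, 3.0):
    print(f"{ratio:.0f}x dedupe: "
          f"{effective_capacity_gb(usable, ratio):,.0f} GB effective, "
          f"${effective_price_per_gb(cost, usable, ratio):.2f}/GB")
```

At a 2x ratio the hypothetical array’s effective price per GB is halved, which is why deduplication figures so prominently in AFA buying decisions.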

The operative word in that previous sentence is “may.” Selecting an enterprise midrange all-flash array from Dell EMC, HDS, HPE, Huawei, or NetApp only guarantees that you will get an array that supports deduplication. One should not automatically assume that any of these vendors delivers it in a way your organization can best capitalize on.

For instance, if you only want to do post-process deduplication, a model from only one of those five vendors listed above supports that option. If you want deduplication included when you buy the array and not have to license it separately, only three of the vendors support that option. If you want to do inline deduplication of production data, then only two of those vendors support that option.

Deduplication on all-flash arrays is highly desirable, as it helps drive the price of flash down to the point where organizations can cost-effectively use it more widely in production. However, deduplication only makes sense if the vendor delivers it in a manner that matches the needs of your organization.

To get a glimpse into how these five vendors deduplicate data differently, check out this short, two-page report* from DCIG that examines 14 deduplication features on five different products. This concise, easy-to-understand report provides you with an at-a-glance snapshot of which products support the key deduplication features that organizations need to make the right all-flash array buying decision.

Access to this report* is available through the DCIG Competitive Intelligence Portal and is limited to its subscribers. However, Unitrends is currently providing complimentary access to the DCIG Competitive Intelligence Portal for end-users. Once registered, individuals may download this report as well as the latest DCIG All-flash Array Buyer’s Guide.

If you are not already a subscriber, register now to get this report for free and learn more than whether these arrays deduplicate data. Learn how they do it differently!

* This report is only available for a limited time to subscribers of the DCIG Competitive Intelligence (CI) Portal. Individuals who work for manufacturers, resellers, or vendors must pay to subscribe to the DCIG CI Portal. All information accessed and reports downloaded from the DCIG CI Portal are for individual, confidential use and may not be publicly disseminated.

Nimbus Data Reset Puts its ExaFlash D-Series at Forefront of All-flash Array Cost/Performance Curve

A few years ago, when all-flash arrays (AFAs) were still gaining momentum, newcomers like Nimbus Data appeared poised to take the storage world by storm. But as the big boys of storage (Dell, HDS, and HPE, among others) entered the AFA market, Nimbus opted to retrench and rethink the value proposition of its all-flash arrays. Its latest AFA line, the ExaFlash D-Series, is one of the outcomes of that repositioning, as these arrays answer the call of today’s hosting providers. They deliver the high levels of availability, flexibility, performance, and storage density that these providers seek, backed by one of the lowest price-per-GB points in the market.

To get a better handle on the changes that have occurred at Nimbus Data over the past few years, and on the AFA market in general, I spoke with its CEO and founder, Thomas Isakovich. As the predominant enterprise storage players entered the AFA market, Nimbus had to first quantify the ways in which its models differentiated themselves from the pack and then communicate that message to the marketplace.

In comparing its features to its competitors’, it identified areas where its products outshone the competition. Specifically, its models support multiple high-performance storage network protocols, carry a much lower price point on a per-TB basis, and its all-flash D-Series (one of its four platforms) packs much more flash into a 4U enclosure than models from the largest AFA providers. Notably, the research in the DCIG Competitive Intelligence Portal backs up these claims, as the chart below reveals.

DCIG Comparison of Key Nimbus ExaFlash D-Series to Large AFA Providers

Source: DCIG Competitive Intelligence Portal; Names of Competitive Models Available with Paid Subscription*

This analysis of its product features helped Nimbus refine and better articulate its go-to-market strategy. For instance, thanks to its scale-out design coupled with its very high performance and low price point, Nimbus now primarily focuses its sales efforts on hyper-scalers that need AFAs with these specific attributes.

Nimbus finds most of its success with cloud infrastructure companies as well as organizations in the life sciences, post-production, and financial services markets. Further, due to the size and specific needs of customers in these markets, it has suspended sales through the channel and now relies primarily on direct sales.

The conversation I had with its CEO revealed that Nimbus is alive and well in the AFA market and still innovating, much as it did when it first arrived on the scene years ago. However, it is also clear that Nimbus has a much better grasp of its competitive advantages in the marketplace and has adapted its go-to-market plan accordingly to ensure its near- and long-term success.

Server-based Storage Makes Accelerating Application Performance Insanely Easy

In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to believe that all-flash arrays are the only option they have for delivering high levels of performance to their applications. That thinking is now outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance just became insanely easy: click a button instead of upgrading hardware in the environment.

As flash transforms the demands of application owners, organizations want more options to cost-effectively deploy and manage it. These include:

  • Server-side flash: putting lower-cost flash on servers, as it performs better there than when accessed across a SAN.
  • Hyper-converged solutions: an interesting approach to server-based storage, although concerns remain about fixed compute/capacity scaling ratios and server hardware lock-in.
  • All-flash arrays: these have taken off in large part because they provide a pool of shared flash storage accessible to multiple servers.

Now a fourth, viable flash option has appeared on the market. While I have always had some doubts about server-based storage solutions that employ server-side software, today I changed my viewpoint after reviewing Datrium’s DVX Server-powered Storage System.

Datrium has obvious advantages over arrays, as it leverages vast, affordable, and often under-utilized server resources. But unlike hyper-converged systems, it scales flexibly and does not require a material change in server sourcing.

To achieve this end, Datrium has taken a very different approach with its “server-powered” storage system design. In effect, Datrium splits speed from durable capacity within a single end-to-end system. Storage performance and data services tap host compute and flash cache, driven by Datrium software uploaded to the virtualization host. The DVX appliance, an integrated external storage appliance, then permanently holds the data and orchestrates how the DVX system protects application data in the event of a server or flash failure.

This approach has a couple meaningful takeaways versus traditional arrays:

  • Faster flash-based performance given it is local to the server versus accessed across a SAN
  • Lower cost since server flash drives cost far less than flash drives found on an all-flash array.

But it also addresses some concerns that have been raised about hyper-converged systems:

  • Organizations may independently scale compute and capacity
  • It plugs into an organization’s existing infrastructure

Datrium Offers a New Server-based Storage Paradigm


Source: Datrium

Datrium DVX provides the different approach needed to create a new storage paradigm. It opens new doors for organizations to:

  1. Leverage excess CPU cycles and flash capacity on ESX servers. ESX servers now exhibit the same characteristics that the physical servers they replaced once did: they have excess, idle CPU. By deploying server-based storage software at the hypervisor level, organizations can harness this excess, idle CPU to improve application performance.
  2. Capitalize on lower-cost server-based flash drives. Regardless of where flash drives reside (server-based or array-based,) they deliver high levels of performance. However, server-based flash costs much less than array-based flash while providing greater flexibility to add more capacity going forward.

Accelerating Application Performance Just Became Insanely Easy

Access to excess server-based memory, CPU, and flash combines to offer a feature that array-based flash can never deliver: push-button application performance. By default, when the Datrium storage software installs on the ESX hypervisor, it limits itself to 20 percent of the vCPU available to each VM. However, not every VM uses all of its available vCPU; many VMs use only 10-40 percent of their available resources.

Using Datrium’s DIESL Hyperdriver software, VM administrators can non-disruptively tap into these latent vCPU cycles. With Datrium’s new Insane Mode, they may increase the share of vCPU cycles the storage software can access from 20 to 40 percent with the click of a button. While the host VM must have latent vCPU cycles available to accomplish this, it is a feature that array-based flash would be hard-pressed to offer at all, let alone with the click of a button.
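
The arithmetic behind this is straightforward. Here is a rough sketch of the headroom calculation: the 20 and 40 percent caps mirror the defaults described above, while the per-VM utilization figures are hypothetical and the function name is my own, not Datrium’s.

```python
# Sketch of the vCPU headroom math behind a feature like Insane Mode.
# The 20%/40% caps mirror the defaults described in the text; the
# utilization figures and function name are hypothetical.

def storage_share_available(vm_utilization, cap):
    """vCPU share the storage software can claim without starving the VM:
    whichever is smaller, the configured cap or the VM's idle fraction."""
    idle = 1.0 - vm_utilization
    return min(cap, idle)

vm_utilization = 0.30   # a VM using 30% of its vCPU
default_cap = 0.20      # default: storage software capped at 20%
insane_cap = 0.40       # "Insane Mode": cap raised to 40%

print(storage_share_available(vm_utilization, default_cap))  # 0.2
print(storage_share_available(vm_utilization, insane_cap))   # 0.4
```

The `min()` captures why latent cycles must exist: a VM already using 90 percent of its vCPU would yield only the remaining 10 percent, regardless of the cap.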

Server-based storage designs have shown a lot of promise over the years but have not really had the infrastructure available to build a runway to success. That has now changed, and Datrium is one of the first vendors to recognize this fundamental shift in data center infrastructure and bring a product to market that capitalizes on it. As evidenced by the Insane Mode in its latest software release, organizations may now harness next-generation server-based storage designs to accelerate application performance while dramatically lowering complexity and costs in their environment.

Hyper-converged Infrastructure Adoption is a Journey, not a Destination

Few data center technologies currently generate more buzz than hyper-converged infrastructure solutions. By combining compute, data protection, flash, scale-out, and virtualization into a single self-contained unit, organizations get the best of what each of these individual technologies has to offer with the flexibility to implement each one in such a way that it matches their specific business needs. Yet organizations must exercise restraint in how many attributes they ascribe to hyper-converged infrastructure solutions as their adoption is a journey, not a destination.

In the last few years momentum around hyper-converged infrastructure solutions has been steadily building and for good reason. Organizations want:

  • The flexibility and power of server virtualization
  • To cost-effectively implement the performance of flash in their environment
  • To grow compute or storage with fewer constraints
  • To know their data is protected and easily recoverable
  • To spend less time managing their infrastructure and more time managing their business

Hyper-converged infrastructure solutions more or less check all of these boxes. In so doing, organizations are shifting how they approach everything from how they manage their data centers to their buying habits. For instance, rather than making independent server, networking and storage buying decisions, organizations are making a single purchase of a hyper-converged solution that addresses all of these specific needs.

But here is the trap that organizations should avoid. Some providers promote the idea that hyper-converged infrastructures can replace all of these individual components in any size data center. While that idea may someday come to pass, that day is not today and, in all likelihood, will never be fully realized.

Hyper-converged infrastructure solutions as they stand today are primarily well suited to the needs of small and perhaps mid-sized data centers. That said, their architecture lends itself very well to moving further up the stack into ever larger data centers in the not-too-distant future as their technologies and feature sets mature.

But even as their capabilities and features mature, hyper-converged infrastructure solutions will not become plug-and-play systems that organizations can set ’em and forget ’em. While an element of those concepts may always exist in hyper-converged solutions, more importantly they lay the groundwork for a necessary evolution in how organizations manage their data centers.

Currently, organizations still spend far too much time managing their IT infrastructure at the component level. As such, they do not get full value out of their IT investment: many of their IT resources are utilized at less-than-optimal levels even as they remain too difficult to manage efficiently and effectively.

By way of example, what should be relatively routine tasks, such as data migrations during server or storage upgrades or replacements, typically remain fraught with risk and exceedingly difficult to accomplish. While providers have certainly made strides in recent years to eliminate some of the difficulty and risk associated with this task, it is still not the predictable, repeatable process that organizations want it to be and that it realistically should be.

This is really where hyper-converged infrastructure solutions come into play. They put a foundation into place that organizations can use to help transform the world of IT from the Wild West that it too often is today back into a discipline that offers the more predictable and understandable outcomes that organizations expect and which IT should rightfully provide.

Organizations of all sizes that look at hyper-converged infrastructure solutions today already find a lot to like. The breadth of their features, coupled with their ease of installation and ongoing management, certainly helps make the case for adoption. However, smart organizations should look at hyper-converged infrastructure solutions more broadly: as a platform they can use to start the journey toward a more stable and predictable IT environment that they can leverage in the years to come.

Real-World Performance Testing Can Help Savvy Organizations Future Proof Their Emerging Flash Infrastructure

Organizations of almost all sizes now view flash as a means to accelerate application performance in their infrastructure, and for good reason: organizations that deploy flash typically see performance increase by a factor of up to 10. But while many all-flash storage arrays can deliver these increases, savvy organizations must prepare to do more than simply speed up workloads. They need to identify solutions that help them troubleshoot their emerging flash infrastructure and future-proof their investment in flash by modeling anticipated application workloads on the all-flash arrays under evaluation before they are acquired.

One of the big advantages of all-flash arrays is that they make it much easier for organizations to improve the performance of almost any application, regardless of type. However, the ease with which these arrays accelerate performance may also prompt organizations to lower their guard and fail to consider the potential pitfalls that accompany such a deployment. One can over-provision an all-flash array just as easily as a disk-based array. Given the price-per-GB difference between the two, the cost penalty for over-provisioning an all-flash array can be very significant.
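
The over-provisioning penalty is easy to quantify. A back-of-the-envelope sketch follows; all prices and capacities are hypothetical, chosen only to show why the same sizing mistake costs far more on flash than on disk.

```python
# Back-of-the-envelope cost of over-provisioning flash vs. disk.
# All prices and capacities are hypothetical, for illustration only.

def overprovisioning_penalty(excess_gb, price_per_gb):
    """Dollars spent on capacity that sits unused."""
    return excess_gb * price_per_gb

excess = 20_000                        # 20 TB more than the workload needs
flash_price, disk_price = 2.50, 0.10   # hypothetical $/GB

print(f"Disk penalty:  ${overprovisioning_penalty(excess, disk_price):,.0f}")
print(f"Flash penalty: ${overprovisioning_penalty(excess, flash_price):,.0f}")
```

With these hypothetical prices the identical 20 TB sizing error costs 25 times more on the all-flash array, which is the point of modeling workloads before buying.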

Common pitfalls that DCIG hears about include:

  • The all-flash array works fine at first but performance unexpectedly drops. This leaves everyone wondering, “What is the root cause of the problem?” The all-flash array? The storage network? The server? The application? Or some other component?
  • An organization starts by putting one or a few high-performance applications on the all-flash array. It works so well that suddenly everyone in the organization wants to put their applications on the array, and performance on the all-flash array begins to suffer.

Performance analytics software can help in both of these cases, as the recently released Load DynamiX 5.0 Storage Performance Analytics solution illustrates. In the first scenario above, Load DynamiX provides a workload analyzer that examines performance in existing networked storage environments (FC/iSCSI now, with CIFS/NFS coming in 1H2016). This analyzer pulls performance data from the production storage arrays as well as from the Ethernet or FC switches so organizations can visualize existing storage workloads.

More importantly, the Load DynamiX software equips organizations to analyze these workloads, automating the task with a combination of real-time and historical views of the data. By comparing IOPS, throughput, latency, read/write mix, and random/sequential mix, among many other metrics, it can begin to paint a picture of what is actually going on in the environment and identify the root cause of a performance bottleneck. This type of automation and insight becomes especially important when bottlenecks occur intermittently, at seemingly random and unpredictable intervals.
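
To make the kind of analysis described above concrete, here is a minimal sketch of the sort of summary a workload analyzer computes from captured I/O records. The record format and sample numbers are invented for illustration; they do not reflect Load DynamiX’s actual data model.

```python
# Minimal sketch of summarizing captured I/O records the way a
# workload analyzer might. Record format and data are invented.
from statistics import mean

# Each record: (operation, size_bytes, latency_ms)
trace = [
    ("read", 4096, 0.4), ("write", 8192, 1.1),
    ("read", 4096, 0.5), ("read", 65536, 2.3),
    ("write", 8192, 0.9),
]

reads = [r for r in trace if r[0] == "read"]

read_pct = 100 * len(reads) / len(trace)
avg_latency = mean(r[2] for r in trace)
throughput_bytes = sum(r[1] for r in trace)

print(f"read/write mix: {read_pct:.0f}/{100 - read_pct:.0f}")
print(f"avg latency:    {avg_latency:.2f} ms")
print(f"bytes moved:    {throughput_bytes}")
```

A production analyzer correlates thousands of such windows over time, which is what makes intermittent bottlenecks visible; the value is in the comparison across windows, not any single snapshot.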

Yet what makes the Load DynamiX solution particularly impressive is that after it captures these pools of performance data, organizations can use it to recreate the same behavior in their labs. In this way, they can experiment with and trial possible fixes in a lab environment without tampering with the production environment and potentially making the situation worse. This gives IT organizations the opportunity to identify a viable solution and verify that it works in their lab, so they have a higher degree of confidence it will work in production before they begin implementing the proposed fix.

This ability to capture and model workloads also becomes a very handy feature when trialing new all-flash arrays, as one organization recently discovered. It used Load DynamiX to first capture performance data on its existing environment and then ran that workload against six all-flash arrays under consideration.

As it turns out, all six arrays achieved the desired sub-2ms response times the organization was hoping for (as opposed to the 10ms response times it was seeing with its existing disk-based array) when tested against the company’s existing Oracle-based application workloads, as Chart 1 illustrates.

Chart 1


However, the organization then did something very clever. It fully expected that, over time, the workloads on the all-flash array would increase for the reasons cited above, perhaps by as much as 10x in the years to come. To model those anticipated increases, it again used Load DynamiX, this time to simulate a 10x increase in application workload. When measured against this 10x increase, substantial performance differences emerged between the all-flash arrays, as Chart 2 illustrates.

Chart 2


Under this 10x increase in workload, all of the all-flash arrays still outperformed the disk-based array. However, only one was able to deliver the sustained sub-2ms response times that this organization wanted over time. While a variety of factors account for the lower performance numbers, it is noteworthy that all but one of these all-flash arrays had compression and deduplication turned on. As application workloads increased, it is logical to conclude that these data reduction technologies began to exact a heavier performance toll.
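
The evaluation logic this organization applied can be sketched as a simple filter over measured latencies. The array names and latency figures below are hypothetical placeholders, not the actual test results behind Charts 1 and 2.

```python
# Sketch of the pass/fail evaluation described above: which arrays
# still meet a sub-2ms target once the workload is scaled to 10x?
# Array names and latencies are hypothetical, not actual results.

TARGET_MS = 2.0

# Measured average latency (ms) at 1x and simulated 10x workload
results = {
    "array-a": {"1x": 0.8, "10x": 1.7},
    "array-b": {"1x": 0.9, "10x": 3.4},
    "array-c": {"1x": 1.1, "10x": 5.0},
}

def meets_target(latencies, load, target=TARGET_MS):
    return latencies[load] < target

passed_1x = [a for a, r in results.items() if meets_target(r, "1x")]
passed_10x = [a for a, r in results.items() if meets_target(r, "10x")]

print("meet sub-2ms at 1x: ", passed_1x)
print("meet sub-2ms at 10x:", passed_10x)
```

The point the sketch captures is that a 1x test alone would have ranked every array a winner; only the scaled test separates them.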

All-flash arrays have been a boon for organizations, as they eliminate many of the complex, mind-numbing tasks that highly skilled individuals previously had to perform to coax maximum performance out of disk-based arrays. However, that does not mean performance issues disappear once flash is deployed. Using performance analytics software like the Load DynamiX 5.0 Storage Performance Analytics solution, organizations can better troubleshoot both their legacy and new all-flash environments, as well as make better-informed choices about all-flash arrays so they can scale them to match anticipated increases in workload demand.

DCIG Does NOT “Rig” its Buyer’s Guide Research or Data to Arrive at Predetermined Conclusions or Rankings

DCIG appreciates the attention given to its recently released DCIG 2015-16 All-Flash Array Buyer’s Guide. This type of dialog and feedback is absolutely critical in helping DCIG, the industry as a whole, and most importantly, the buyers and the organizations for which they work to make informed buying decisions about all-flash arrays.

However, in reviewing some of the recent commentary, DCIG thought it prudent to weigh in on a couple of fronts. First, to let the industry know that DCIG does plan to update some information in the published Guide. Second, DCIG clearly needs to take some time to educate the individuals who cover DCIG’s Buyer’s Guides about DCIG’s process and why some of the accusations made are unfounded and potentially even libelous.

In reviewing some of the criticism about the DCIG 2015-16 All-flash Array Buyer’s Guide, it appears to break down into two major areas of focus:

  1. To come out on top in a Buyer’s Guide a vendor has to pay DCIG up-front (prior to research being done) to get the desired results
  2. The product data in the Buyer’s Guide was inaccurate

Competitive Pay-for-Say Research with Predetermined Outcomes Based on Rigged Data Violates US Civil Law

The allegations that DCIG Buyer’s Guides are “pay-for-say” and that vendors pay DCIG ahead of time to get the desired results are nothing new; they have been leveled at DCIG multiple times in the past. DCIG has responded to these allegations in prior blog entries and does its best to disclose its methodologies and practices in the “Disclosures” section of every Buyer’s Guide as well as in previously posted blog entries. I will not rehash these points other than to ask individuals to read the Buyer’s Guide itself or refer to the three blog entries DCIG has previously posted that provide clarifying comments:

But let me be direct: to do competitive, pay-for-say research that has predetermined outcomes and relies upon rigging data, as some allege that DCIG does, is more than unethical; it violates United States civil law. As such, DCIG cannot and does not have any part in performing this type of research when preparing its Buyer’s Guide nor does it condone this type of activity.

DCIG is quite confident that the individuals making these irresponsible claims have no defensible grounds on which to base them. Anyone accusing DCIG of conducting its research for its Buyer’s Guide in this manner may be guilty of breaking these same U.S. civil laws and committing the same acts that they allege DCIG violated.

DCIG created and subsequently refined its internal Buyer’s Guide processes over the last few years to ensure each product it covers is fairly reviewed and represented within the constraints of creating a timely snapshot of a given market place. Further, DCIG has worked and continues to work with its attorneys to ensure all of DCIG’s research and publications follow US Civil Law so that the data contained in each and every one of its Buyer’s Guides is appropriate for publication.

In regards to allegations that DCIG knowingly predetermines Buyer’s Guide winners to get a certain result, one only needs to look at two Buyer’s Guides that DCIG has produced in the last year.

One is the DCIG 2014-15 Under $100K Deduplicating Backup Appliance Buyer’s Guide, in which EMC came out on top but which EMC did not license. Another is the recent DCIG 2015-16 Overall Hybrid Storage Array Buyer’s Guide, which came out just last month; again, the winning company did not license that Guide. Both Guides are available at no charge to subscribers of the DCIG Analysis Portal (end-users and buyers may receive complimentary access). The publication and availability of these two Guides illustrate DCIG’s position and refute the irresponsible nature of some of the claims made.

That said, vendors do pay to license Guides that they have already won or that have previously been published.

Part two of this blog series will delve into DCIG’s research as included in the DCIG 2015-16 All-flash Array Buyer’s Guide.

Note: this blog entry was updated at 9:40 am CT on 10/12/2015; 7:20 am CT on 10/13/2015 to clarify some language; and again on 10/29/2015 at noon to clarify some language and correct some grammar.