Nanoseconds, Stubborn SAS, and Other Takeaways from the Flash Memory Summit 2019

Every year at the Flash Memory Summit held in Santa Clara, CA, attendees get a firsthand look at the technologies that will impact the next generation of storage. This year many of the innovations centered on forthcoming interconnects that will better deliver on the performance that flash offers today. Here are DCIG’s main takeaways from this year’s event.

Takeaway #1 – Nanosecond Response Times Demonstrated

PCI Express (PCIe) fabrics can deliver nanosecond response times using resources (CPU, memory, storage) situated in different physical enclosures. In a meeting with DCIG, PCIe provider Dolphin Interconnect Solutions demonstrated how an application could access resources (CPU, flash storage, and memory) on different devices across a PCIe fabric in nanoseconds. Separately, GigaIO announced 500 nanosecond end-to-end latency using its PCIe FabreX switches. While everyone else at the show was boasting about microsecond response times, Dolphin and GigaIO introduced nanoseconds into the conversation. Both companies ship their solutions now.

Takeaway #2 – Impact of NVMe/TCP Standard Confirmed

Ever since we heard the industry planned to port NVMe-oF to TCP, DCIG expected this would accelerate the overall adoption of NVMe-oF. Toshiba confirmed our suspicions. In discussing its KumoScale product with DCIG, Toshiba shared that it has seen a 10x jump in sales since the industry ratified the NVMe/TCP standard. This growth stems from the reasons DCIG stated in a previous blog entry: TCP is well understood, Ethernet is widely deployed, it is low cost, and it uses the infrastructure organizations already have in place.

Takeaway #3 – Fibre Channel Market Healthy, Driven by Enterprise All-flash Array

According to FCIA leaders, the Fibre Channel (FC) market is healthy. FC vendors are selling 8 million ports per year. The enterprise all-flash array market is driving FC infrastructure sales, and 32 Gb FC is shipping in volume. Indeed, DCIG’s research revealed 37 all-flash arrays that support 32 Gb FC connectivity.

Front-end connectivity is often the bottleneck in all-flash array performance, so doubling the speed of those connections can double the performance of the array. Beyond 32 Gb FC, the FCIA has already ratified the 64 Gb standard and is working on a 128 Gb FC standard. Consequently, FC has a long future in enterprise data centers.
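To illustrate the arithmetic, here is a quick back-of-envelope sketch in Python. The port count and back-end figure are hypothetical; the per-port throughput values are the commonly published FCIA figures for 16 Gb and 32 Gb FC.

```python
# Back-of-envelope: front-end FC links as the array bottleneck.
# Approximate usable throughput per port, per direction:
#   16GFC ~ 1,600 MB/s, 32GFC ~ 3,200 MB/s (published FCIA figures)
PORT_MBPS = {"16GFC": 1600, "32GFC": 3200}

FRONT_END_PORTS = 8        # hypothetical number of host-facing ports
BACK_END_MBPS = 40_000     # hypothetical back-end flash capability (40 GB/s)

for gen, per_port in PORT_MBPS.items():
    front_end = FRONT_END_PORTS * per_port
    delivered = min(front_end, BACK_END_MBPS)
    print(f"{gen}: front end {front_end / 1000:.1f} GB/s -> "
          f"array delivers {delivered / 1000:.1f} GB/s")
```

With eight ports, the front end caps the array at roughly 12.8 GB/s on 16 Gb FC and 25.6 GB/s on 32 Gb FC, so delivered throughput doubles with the link speed until the back end becomes the limit.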

FC-NVMe brings the benefits of NVMe-oF to Fibre Channel networks. FC-NVMe reduces protocol overhead, enabling GEN 5 (16 Gb FC) infrastructure to accomplish the same amount of work while consuming about half the CPU of standard FC.

Takeaway #4 – PCIe Will Not be Denied

All resources (CPU, memory, and flash storage) can connect with one another and communicate over PCIe. Further, using PCIe eliminates the overhead associated with introducing storage protocols (FC, InfiniBand, iSCSI, SCSI), because all these resources already speak the PCIe protocol. With the PCIe 5.0 standard formally ratified in May 2019 and discussions about PCIe 6.0 underway, the future seems bright for the growing adoption of this protocol. Further, AMD and Intel have both thrown their support behind it.

Takeaway #5 – SAS Will Stubbornly Hang On

DCIG’s research finds that over 75% of AFAs support 12 Gb/s SAS today. This predominance makes the introduction of 24G a logical next step for these arrays. SAS is a proven, mature, and economical interconnect, and few applications can yet drive the performance limits of 12 Gb SAS, much less the forthcoming 24G standard. Adding to the likelihood that 24G moves forward, the SCSI Trade Association (STA) reported that the recent 24G plug fest went well.
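A rough Python back-of-envelope (the workload figures are hypothetical) shows why 12 Gb SAS still has headroom for most applications:

```python
# Usable bandwidth of 12Gb SAS: 8b/10b encoding leaves ~80% for payload.
lane_gbs = 12 * 0.8 / 8            # ~1.2 GB/s per SAS lane
wide_port_gbs = 4 * lane_gbs       # typical 4-lane wide port ~4.8 GB/s

# Hypothetical heavy OLTP workload: 200,000 IOPS at 8 KB per I/O.
workload_gbs = 200_000 * 8 * 1024 / 1e9

print(f"12Gb SAS wide port: {wide_port_gbs:.1f} GB/s")
print(f"Example workload:   {workload_gbs:.1f} GB/s "
      f"({workload_gbs / wide_port_gbs:.0%} of one wide port)")
```

Even this demanding example consumes only about a third of a single 12 Gb SAS wide port, which is why 24G looks like an evolution rather than an urgent necessity.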

Editor’s Note: This blog entry was updated on August 9, 2019, to correct grammatical mistakes and add some links.



DCIG Will Provide Update on All-flash Array Advances at Flash Memory Summit 2019

Flash Memory Summit is the world’s largest storage industry event featuring the trends, innovations, and influencers driving the adoption of flash memory. DCIG will again present at the Summit this year. DCIG’s presentation will draw from its independent research into all-flash arrays and the Competitive Intelligence that DCIG performs on behalf of its clients.

The session will highlight recent developments in all-flash arrays and the rapidly changing competitive landscape for these products. Ken Clipperton, DCIG’s Lead Analyst for Storage, will speak on Tuesday, August 6th, from 9:45-10:50 AM. The session is called BMKT-101B-1: Annual Update on Flash Arrays.

Just as DCIG does in its reports, Mr. Clipperton will discuss both the “What” and the “So what?” of these advances in all-flash arrays. The presentation will cover the changes occurring in all-flash arrays, the value they create for organizations implementing them, and the key topic areas that DCIG focuses on in its competitive intelligence reports.

Mr. Clipperton will cover the following topics:

  • Advances in front-end connectivity to the storage network/application servers
  • Advances in back-end connectivity to storage media
  • Integration of storage-class memory
  • Integrations with other elements in the data center
  • Cloud connectivity
  • Delivery models
  • Predictive analytics
  • Proactive support
  • Licensing
  • Storage-as-a-Service (OpEx model)
  • Guarantee programs
  • Expectations about developments in the near-term future

If you will be at FMS, we hope that you will be able to attend this session and then stick around to introduce yourself and share your perspectives on where the AFA marketplace is heading.

Whether or not you are able to attend FMS or DCIG’s session at the Summit, we invite you to sign up for our newsletter. To request more information about DCIG’s Competitive Intelligence services, click on this link.

Be sure to check back on the DCIG website after the event to get our take on the Summit and the products we believe deserve “Best in Show” honors.




Fast Network Connectivity Key to Unlocking All-flash Array Performance

The current generation of all-flash arrays offers enough performance to saturate the network connections between the arrays and application servers in the data center. In many scenarios, the key limiter to all-flash array performance is storage network bandwidth. Therefore, all-flash array vendors have been quick to adopt the latest advances in storage network connectivity.

Fast Networks are Here, and Faster Networks are Coming

Chart showing current and future Ethernet speeds

Ethernet is now available with connection speeds up to 400 Gb per second. Fibre Channel now reaches speeds up to 128 Gb per second. As discussed during a recent SNIA presentation, the roadmaps for both technologies forecast another 2x to 4x increase in performance.

While the fastest connections are generally used to create a storage network fabric among data center switches, many all-flash arrays support fast storage network connectivity.

All-flash Arrays Embrace Fast Network Connectivity

DCIG’s research into all-flash arrays identified thirty-seven (37) models that support 32 Gb FC, seventeen (17) that support 100 Gb Ethernet, and ten (10) that support 100 Gb InfiniBand connectivity. These include products from Dell EMC, FUJITSU Storage, Hitachi Vantara, Huawei, Kaminario, NEC Storage, NetApp, Nimbus Data, Pure Storage and Storbyte.

Summary chart of AFA connectivity support

Source: DCIG

Other Drivers of Fast Network Connectivity

Although all-flash storage is a key driver behind fast network connectivity, there are also several other significant drivers. Each of these has implications for the optimal balance between compute, storage, network bandwidth, and the cost of creating and managing the infrastructure.

These other drivers of fast networking include:

  • Faster servers that offer more capacity and performance density per rack unit
  • Increasing data volumes that require increasing bandwidth
  • Increasing east-west traffic between servers in the data center due to scale-out infrastructure and distributed cloud-native applications
  • The growth of GPU-enabled AI and data mining
  • Larger data centers, especially cloud and co-location facilities that may house tens of thousands of servers
  • Fatter pipes that yield more efficient fabrics with fewer switches and cables

Predominant All-Flash Array Connectivity Use Cases

How an all-flash array connects to the network is frequently based on the type of organization deploying the array. While there are certainly exceptions to the rule, the predominant connection methods and use cases can be summarized as follows:

  • Ethernet = Cloud and Service Provider data centers
  • Fibre Channel = Enterprise data centers
  • InfiniBand = HPC environments

Recent advances in network connectivity, and the adoption of these advances by all-flash array providers, create new opportunities to increase the amount of work that can be accomplished by an all-flash array. Therefore, organizations intending to acquire all-flash storage should consider each product’s embrace of fast network connectivity as an important part of the evaluation process.




The Early Implications of NVMe/TCP on Ethernet Network Designs

The ratification in November 2018 of the NVMe/TCP standard officially opened the doors for NVMe/TCP to begin to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a SNIA-hosted webinar that provided an update on NVMe/TCP’s latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.

First, NVMe/TCP will accelerate the deployment of NVMe in enterprises.

NVMe is already available in networked storage environments using competing protocols such as RDMA, which ships as RoCE (RDMA over Converged Ethernet). The challenge is that almost no one uses RDMA in any meaningful way in their environment, so running NVMe over RoCE never gained, and will likely never gain, much momentum.

The availability of NVMe over TCP changes that. Companies already understand TCP, deploy it everywhere, and know how to scale and run it over their existing Ethernet networks. NVMe/TCP will build on this legacy infrastructure and knowledge.
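As a concrete illustration of how little new plumbing NVMe/TCP asks for, here is a minimal host-side sketch for a Linux server. It assumes the nvme-cli package is installed and that a target already exposes a subsystem at the hypothetical address, port, and NQN used below.

```python
"""Minimal NVMe/TCP host-side sketch (Linux, nvme-cli assumed installed)."""
import subprocess

TARGET_ADDR = "192.0.2.10"                    # hypothetical array address
TARGET_PORT = "4420"                          # commonly used NVMe-oF port
SUBSYS_NQN = "nqn.2019-08.com.example:afa1"   # hypothetical subsystem NQN

# 1. Ask the target's discovery controller which subsystems it exposes.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True)

# 2. Connect to a subsystem; its namespaces then show up as /dev/nvmeXnY
#    block devices, reached over ordinary TCP/IP on the existing Ethernet.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT,
     "-n", SUBSYS_NQN],
    check=True)

# 3. Confirm the new devices are visible.
subprocess.run(["nvme", "list"], check=True)
```

No RDMA-capable NICs, lossless Ethernet configuration, or new fabric expertise is required; the transport is the same TCP/IP that the organization already runs.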

Second, any latency that NVMe/TCP introduces still pales in comparison to existing storage networking protocols.

Running NVMe over TCP does introduce latency versus using RoCE. However, the latency that TCP introduces is nominal and will likely be measured in microseconds in most circumstances. Most applications will not even detect this level of latency because of the substantial jump in performance that running NVMe natively over TCP provides versus existing storage protocols such as iSCSI and FC.

Third, the introduction of NVMe/TCP will require companies to implement Ethernet network designs that minimize latency.

Ethernet networks may implement buffering in Ethernet switches to handle periods of peak workloads. Companies will need to revisit that network design technique when deploying NVMe/TCP, because buffering introduces latency into the network and NVMe is highly latency sensitive. Companies will need to balance more carefully how much buffering they introduce on their Ethernet switches.

Fourth, get familiar with the term “incast collapse” on Ethernet networks and how to mitigate it.

NVMe can support up to 64,000 queues, and every queue that NVMe opens initiates a TCP session. Here is where challenges may eventually surface. Simultaneously opening multiple queues results in multiple TCP sessions initiating at the same time, which can cause all of these sessions to arrive at a common congestion point in the Ethernet network at once. The network remedies the congestion by having all of the TCP sessions back off at the same time, an event known as incast collapse, which creates latency in the network.

Source: University of California-Berkeley

Historically, incast collapse has been a rare occurrence in networking because of the low probability that such an event would take place. But the introduction of NVMe/TCP into the network makes such an event much more likely to occur, especially as more companies deploy NVMe/TCP into their environments.
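To make the incast mechanic concrete, here is a toy Python model of the classic scenario: a host issues a striped read, all the targets respond at once through a single switch port, and dropped packets wait out a TCP retransmission timeout. Every number in it is illustrative rather than drawn from the SNIA presentation.

```python
# Toy incast model: N storage targets answer one request simultaneously
# through a single switch egress port and shared buffer. If the combined
# burst overflows the buffer, the dropped packets are retransmitted only
# after a TCP retransmission timeout (RTO), which is thousands of times
# longer than a data-center round trip, so effective throughput collapses.
LINK_GBPS = 25             # egress link toward the requesting host
BUFFER_BYTES = 512 * 1024  # shared switch output buffer
RESPONSE = 64 * 1024       # bytes each target returns per request
RTT_S = 100e-6             # 100 microsecond round trip
RTO_S = 200e-3             # typical minimum TCP retransmission timeout

def goodput_gbps(num_targets):
    burst = num_targets * RESPONSE
    serialize_s = burst * 8 / (LINK_GBPS * 1e9)
    drained_per_rtt = LINK_GBPS * 1e9 / 8 * RTT_S
    if burst <= BUFFER_BYTES + drained_per_rtt:
        elapsed = RTT_S + serialize_s              # burst fits, no loss
    else:
        elapsed = RTT_S + serialize_s + RTO_S      # overflow: wait out an RTO
    return burst * 8 / elapsed / 1e9

for targets in (4, 8, 16, 32, 64):
    print(f"{targets:3d} targets -> {goodput_gbps(targets):6.2f} Gb/s")
```

In this toy model, goodput climbs toward the link rate until the combined burst exceeds the buffer, then falls by roughly two orders of magnitude because the transfer waits on the 200 ms timeout. Deeper buffers, smaller bursts, fewer synchronized sessions, or a lower minimum RTO all push that cliff further out, which is exactly the kind of tuning NVMe/TCP deployments will force network architects to consider.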

The Ratification of the NVMe/TCP Standard

Ratification of the NVMe/TCP standard potentially makes every enterprise data center a candidate for storage systems that can deliver dramatically better performance to their workloads. Until the performance demands of every workload in a data center are met instantaneously, some workload requests will queue up behind a bottleneck in the data center infrastructure.

Just as introducing flash memory into enterprise storage systems revealed bottlenecks in storage operating system software and storage protocols, NVMe/TCP-based storage systems will reveal bottlenecks in data center networks. Enterprises seeking to accelerate their applications by implementing NVMe/TCP-based storage systems may discover bottlenecks in their networks that need to be addressed in order to see the full benefits that NVMe/TCP-based storage can deliver.

To view this presentation in its entirety, follow this link.




TrueNAS M-Series Turns Tech Buzz into Music

NVMe and other advances in non-volatile memory technology are generating a lot of buzz in the enterprise technology industry, and rightly so. As providers integrate these technologies into storage systems, they are closing the gap between the dramatic advances in processing power and the performance of the storage systems that support them. The TrueNAS M-Series from iXsystems provides an excellent example of what can be achieved when these technologies are thoughtfully integrated into a storage system.

DCIG Quick Look

In the process of refreshing its research on enterprise midrange arrays, DCIG discovered that the iXsystems TrueNAS M-Series all-flash and hybrid storage arrays leverage many of the latest technologies, including:

  • Intel® Xeon® Scalable Family Processors
  • Large DRAM caches
  • NVDIMMs
  • NVMe SSDs
  • Flash memory
  • High-capacity hard disk drives

The TrueNAS M-Series lineup comprises two models: the M40 and the M50. The M40 is the lower-cost entry point; it scales to 2 PB and includes 40 GbE connectivity with SAS SSD caching. The M50 scales to 10 PB and adds 100 GbE connectivity with NVMe-based caching.

Both models come standard with redundant storage controllers for high availability and 24×7 service, though single-controller configurations are available for less critical applications.

Advanced Technologies in Perfect Harmony

DCIG analysts are impressed with the way iXsystems engineers have orchestrated the latest technologies in the M50 storage array to achieve cost-efficient, end-to-end performance.

The M50 marries 40 Intel® Xeon® Scalable Family Processor cores with up to 3 TB of DRAM, a 32 GB NVDIMM write cache and 15.2 TB of NVMe SSD read-cache in front of up to 10 PB of hard disk storage. (The M-Series can also be configured as an all-flash array.) Moreover, iXsystems attaches each storage expansion shelf directly to each controller via 12 Gb SAS ports. This approach adds back end throughput to the storage system as each shelf is added.

image of TrueNAS M50 array rear view
iXsystems TrueNAS M50

This well-balanced approach carries through to front-end connectivity. The M50 supports the latest advances in high-speed networking, including up to 4 ports of 40/100 Gb Ethernet and 16/32 Gb Fibre Channel connectivity per controller.

TrueNAS is Enterprise Open Source

TrueNAS is built on BSD and ZFS Open Source technology. iXsystems is uniquely positioned to support the full Open Source stack behind TrueNAS. It has developers and expertise in the operating system, file systems and NAS software.

iXsystems also stewards the popular (>10 million downloads) FreeNAS software-defined storage platform. Among other things, FreeNAS functions as the experimental feature and QA testbed for TrueNAS. TrueNAS can even replicate data to and from FreeNAS. Thus, TrueNAS owners benefit from the huge ZFS and FreeNAS Open Source ecosystems.

NVM Advances are in Tune with the TrueNAS Architecture

The recent advances in non-volatile memory are a perfect fit with the TrueNAS architecture.

Geeking out just a bit…

diagram of TrueNAS M50 cache

ZFS uses DRAM as a read cache to accelerate read operations. This primary read cache is called the ARC. ZFS also supports a secondary read cache called L2ARC. The M50 can use much of the 1.5 TB of DRAM in each storage controller for the ARC and combine it with up to 15.2 TB of NVMe-based L2ARC to provide a huge low-latency read cache that offers up to 8 GB/s throughput.

The ZFS Intent Log (ZIL) is where all data to be written is initially stored. These writes are later flushed to disk. The M50 uses NVDIMMs for the ZIL write cache. The NVDIMMs safely provide near-DRAM-speed write caching. This enables the array to quickly acknowledge writes on the front end while efficiently coalescing many random writes into sequential disk operations on the back end.
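For readers who want to see what these cache roles look like outside of the TrueNAS management layer, here is a minimal sketch of how they are attached on a generic ZFS system. The pool name and device paths are hypothetical, and on a TrueNAS appliance these steps are handled for you.

```python
"""Sketch: adding an L2ARC read cache and a separate ZIL (SLOG) device
to a generic ZFS pool. Pool name and device paths are hypothetical."""
import subprocess

POOL = "tank"

# L2ARC: an NVMe SSD becomes the secondary read cache behind the DRAM ARC.
subprocess.run(["zpool", "add", POOL, "cache", "/dev/nvme0n1"], check=True)

# SLOG: the ZFS Intent Log moves to a fast persistent device (an NVDIMM-backed
# pmem device here) so synchronous writes are acknowledged before the data
# is flushed to the main pool.
subprocess.run(["zpool", "add", POOL, "log", "/dev/pmem0"], check=True)

# Verify that the pool now lists the cache and log vdevs.
subprocess.run(["zpool", "status", POOL], check=True)
```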

Broad Protocol Support Enables Many Uses

TrueNAS supports AFP, SMB, NFS, iSCSI and FC protocols plus S3-compliant object storage. It also offers Asigra backup as an integrated service that runs natively on the array. This broad protocol support enables the M50 to cost-effectively provide high performance storage for:

  • File sharing
  • Virtual machine storage
  • Cloud-native apps
  • Backup target

 

All-inclusive Licensing Adds Value

TrueNAS software licensing is all-inclusive, with unlimited snapshots, clones, and replication. Thus, there are no add-on license fees to negotiate and no additional POs to wait for. This reduces costs, promotes full utilization of the extensive capabilities of the TrueNAS M-Series, and increases business agility.

TrueNAS M50 Turns Tech Buzz into Music

The TrueNAS M50 integrates multiple buzz-worthy technologies to deliver large amounts of low-latency storage. The M50 accelerates a broad range of workloads–safely and economically. Speaking of economics, according to the iXsystems web site, TrueNAS storage can be expanded for less than $100/TB. That should be music to the ears of business people everywhere.




NVMe: Four Key Trends Set to Drive Its Adoption in 2019 and Beyond

Storage vendors hype NVMe for good reason. It enables all-flash arrays (AFAs) to fully deliver on flash’s performance characteristics. Already NVMe serves as an interconnect between AFA controllers and their back end solid state drives (SSDs) to help these AFAs unlock more of the performance that flash offers. However, the real performance benefits that NVMe can deliver will be unlocked as a result of four key trends set to converge in the 2019/2020 time period. Combined, these will open the doors for many more companies to experience the full breadth of performance benefits that NVMe provides for a much wider swath of applications running in their environment.

Many individuals have heard about the performance benefits of NVMe. Using it, companies can reduce latency, with response times measured in a few hundred microseconds or less. Further, applications can leverage the many more channels that NVMe offers to drive throughput to hundreds of GBs per second and achieve millions of IOPS. These types of performance characteristics have many companies eagerly anticipating NVMe’s widespread availability.

To date, however, few companies have experienced the full breadth of performance characteristics that NVMe offers. This stems from:

  • The lack of AFAs on the market that fully support NVMe (only about 20% do)
  • The relatively small performance improvements that NVMe offers over existing SAS-attached solid-state drives (SSDs)
  • The high level of difficulty and cost associated with deploying NVMe in existing data centers

This is set to change in the next 12-24 months as four key trends converge that will open up NVMe to a much wider audience.

  1. Large storage vendors getting ready to enter the NVMe market. AFA providers such as Tegile (Western Digital), iXsystems, Huawei, Lenovo, and others ship products that support NVMe. These vendors represent the leading edge of where NVMe innovation has occurred. However, their share of the storage market remains relatively small compared to providers such as Dell EMC, HPE, IBM, and NetApp. As these large storage providers enter the market with AFAs that support NVMe, expect market acceptance and adoption of NVMe to take off.
  2. The availability of native NVMe drivers on all major operating systems. The only two major enterprise operating systems that currently have native NVMe drivers are Linux and VMware. However, until Microsoft and, to a lesser degree, Solaris offer native NVMe drivers, many companies will have to hold off on deploying NVMe in their environments. The good news is that all these major OS providers are actively working on NVMe drivers. Further, expect the availability of these drivers to closely coincide with the availability of NVMe AFAs from the major storage providers and the release of the NVMe-oF TCP standard.
  3. NVMe-oF TCP protocol standard set to be finalized in 2018. Connecting the AFA controller to its back-end SSDs via NVMe is only one half, and the much easier part, of solving the performance problem. The much larger and more difficult problem is easily connecting hosts to AFAs over existing storage networks, as it is currently difficult to set up and scale NVMe-oF. The establishment of the NVMe-oF TCP standard will remedy this and facilitate the introduction and use of NVMe-oF between hosts and AFAs using TCP/IP over existing Ethernet storage networks.
  4. The general availability of NVMe-oF TCP offload cards. To realize the full performance benefits of NVMe-oF using TCP, companies are advised to use NVMe-oF TCP offload cards. Using standard Ethernet cards with no offload engine, companies will still see high throughput but very high CPU utilization (up to 50 percent). Using the forthcoming NVMe-oF TCP offload cards, performance increases by anywhere from 33 to 150 percent versus native TCP cards while introducing only nominal amounts of latency (single- to double-digit microseconds).

The business need for NVMe technology is real. While today’s all-flash arrays have tremendously accelerated application performance, NVMe stands poised to unleash another round of up to 10x or more performance improvements. But to do that, a mix of technologies, standards, and programming changes to existing operating systems must converge for mass adoption in enterprises to occur. This combination of events seems poised to happen in the next 12-24 months.




HPE Expands Its Big Tent for Enterprise Data Protection

When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve all their challenges. I hate to be the bearer of bad news, but that single product does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address this challenge, with HPE at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.

Having largely solved their backup problems at scale, enterprises have new freedom to analyze and address their broader enterprise data protection challenges. As they look to bring long term data retention, data archiving, and multiple types of recovery (single applications, site fail overs, disaster recoveries, and others) under one big tent for data protection, they find they often need to deploy multiple products.

This creates a situation where each product addresses specific pain points that enterprises have. However, multiple products equate to multiple management interfaces that each have their own administrative policies with minimal or no integration between them. This creates a thornier problem – enterprises are left to manage and coordinate the hand-off of the protection and recovery of data between these different individual data protection products.

A few years ago, HPE started to build a “big tent” to tackle these enterprise data protection and recovery issues. It laid the foundation with its HPE 3PAR StoreServ storage arrays, StoreOnce deduplication storage systems, and Recovery Manager Central (RMC) software to help companies coordinate and centrally manage:

  • Snapshots on 3PAR StoreServ arrays
  • Replication between 3PAR StoreServ arrays
  • The efficient movement of data between 3PAR and StoreOnce systems for backup, long term retention, and fast recoveries

This week HPE expanded its big tent of data protection to give companies more flexibility to protect and recover their data more broadly across their enterprise. It did so in the following ways:

  • HPE RMC 6.0 can directly recover data to HPE Nimble storage arrays. Recoveries from backups can be a multi-step process that may require data to pass through the backup software and the application server before it lands on the target storage array. Beginning December 2018, companies can use RMC to directly recover data to HPE Nimble storage arrays from an HPE StoreOnce system without going through the traditional recovery process, just as they can already do with HPE 3PAR StoreServ storage arrays.
  • HPE StoreOnce can directly send and retrieve deduplicated data from multiple cloud providers. Companies sometimes fail to consider that general purpose cloud service providers such as Amazon Web Services (AWS) or Microsoft Azure make no provisions to optimize data stored with them, such as deduplicating it. Using HPE StoreOnce’s new direct support for AWS, Azure, and Scality, companies can use StoreOnce to first compress and deduplicate data before they store the data in the cloud.
  • Integration between Commvault and HPE StoreOnce systems. Out of the gate, companies can use Commvault to manage StoreOnce operations such as replicating data between StoreOnce systems as well as moving data directly from StoreOnce systems to the cloud. Moreover, as the relationship between Commvault and HPE matures, companies will also be able to use HPE StoreOnce Catalyst, HPE’s client-based deduplication software agent, in conjunction with Commvault to back up data on server clients where data may not reside on HPE 3PAR or Nimble storage. Using the HPE StoreOnce Catalyst software, Commvault will deduplicate data at the source before sending it to an HPE StoreOnce system.

Source: HPE

Of the three announcements that HPE made this week, the new relationship with Commvault, which accompanies HPE’s pre-existing relationships with Micro Focus (formerly HPE Data Protector) and Veritas, best demonstrates HPE’s commitment to helping enterprises build a big tent for their data protection and recovery initiatives. Storing data on HPE 3PAR and Nimble arrays and using RMC to manage backups and recoveries on StoreOnce systems certainly accelerates and simplifies these functions when companies can do so. But by working with these other partners, HPE shows that it recognizes companies will not store all their data on its systems, and that it will accommodate those companies so they can create a single, larger data protection and recovery solution for their enterprise.




Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays

Both HPE and NetApp have multiple enterprise storage product lines. Each company also has a flagship product. For HPE it is the 3PAR StoreServ line. For NetApp it is the AFF (all flash FAS) A-Series. DCIG’s latest Pocket Analyst Report examines these flagship all-flash arrays. The report identifies many similarities between the products, including the ability to deliver low latency storage with high levels of availability, and a relatively full set of data management features.

DCIG’s Pocket Analyst Report also identifies six significant differences between the products. These differences include how each product provides deduplication and other data services, hybrid cloud integration, host-to-storage connectivity, scalability, and simplified management through predictive analytics and bundled or all-inclusive software licensing.

DCIG recently updated its research on the dynamic and growing all-flash array marketplace. In so doing, DCIG identified many similarities between the HPE 3PAR StoreServ and NetApp AFF A-Series products including:

  • Unified SAN and NAS protocol support
  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

DCIG also identified significant differences between the HPE and NetApp products including:

  • Hardware-accelerated Inline Data Services
  • Predictive analytics
  • Hybrid Cloud Support
  • Host-to-Storage Connectivity
  • Scalability
  • Licensing simplicity

Blurred image of pocket analyst report first page

DCIG’s 4-page Pocket Analyst Report on the Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays analyzes and compares the flagship all-flash arrays from HPE and NetApp. To see which product has the edge in each of the above categories and why, you can purchase the report on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available for no charge at some future time.




Seven Key Differentiators between Dell EMC VMAX and HPE 3PAR StoreServ Systems

Dell EMC and Hewlett Packard Enterprise are enterprise technology stalwarts. Thousands of enterprises rely on VMAX or 3PAR StoreServ arrays to support their most critical applications and to store their most valuable data. Although VMAX and 3PAR predated the rise of the all-flash array (AFA), both vendors have adapted and optimized these products for flash memory. They deliver all-flash array performance without sacrificing the data services enterprises rely on to integrate with a wide variety of enterprise applications and operating environments.

While VMAX and 3PAR StoreServ can both support hybrid flash plus hard disk configurations, the focus is now on all-flash. Nevertheless, the ability of these products to support multiple tiers of storage will be advantageous in a future that includes multiple classes of flash memory along with other non-volatile storage class memory technologies.

Both the VMAX and 3PAR StoreServ can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements.

DCIG updated its research on all-flash arrays during the first half of 2018. In so doing, DCIG identified many similarities between the VMAX and 3PAR StoreServ products including:

  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with multiple virtualization management consoles
  • Rich data replication and data protection offerings
  • Support for a wide variety of client operating systems
  • Unified SAN and NAS

DCIG also identified significant differences between the VMAX and 3PAR StoreServ products including:

  • Data center footprint
  • High-end history
  • Licensing simplicity
  • Mainframe connectivity
  • Performance resources
  • Predictive analytics
  • Raw and effective storage density

blurred image of the front page of the report

To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares all-flash arrays from Dell EMC and HPE. This report is currently available for purchase on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




Five Key Differentiators between the Latest NetApp AFF A-Series and Hitachi Vantara VSP F-Series Platforms

Every enterprise-class all-flash array (AFA) delivers sub-1 millisecond response times on standard 4K and 8K performance benchmarks, high levels of availability, and a relatively full set of core data management features. As such, enterprises must examine AFA products to determine the differentiators between them. When DCIG compared the newest AFAs from leading providers such as Hitachi Vantara and NetApp in its latest DCIG Pocket Analyst Report, differences between them quickly emerged.

Both Hitachi Vantara and NetApp refreshed their respective F-Series and A-Series lines of all-flash arrays (AFAs) in the first half of 2018. In so doing, many of the similarities between the products from these providers persisted in that they both continue to natively offer:

  • Unified SAN and NAS interfaces
  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

However, the latest AFA product refreshes from these two vendors also introduced some key areas where they diverge. While some of these changes reinforced the strengths of their respective product lines, other changes provided key insights into how these two vendors see the AFA market shaping up. The result is a set of differences in product functionality that will set the products apart in the years to come.


To help enterprises select the solution that best fits their needs, there are five key ways that the latest AFA products from Hitachi Vantara and NetApp differentiate themselves from one another. These five key differentiators include:

  1. Data protection and data reduction
  2. Flash performance optimization
  3. Predictive analytics
  4. Public cloud support
  5. Storage networking protocols

To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares these newest all-flash arrays from Hitachi Vantara and NetApp. This report is currently available for sale on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




NVMe Unleashing Performance and Storage System Innovation

Mainstream enterprise storage vendors are embracing NVMe. HPE, NetApp, Pure Storage, Dell EMC, Kaminario, and Tegile all offer all-NVMe arrays. According to these vendors, the products will soon support storage class memory as well. NVMe protocol access to flash memory SSDs is a big deal. Support for storage class memory may become an even bigger deal.

NVMe Flash Delivers More Performance Than SAS

NVM express logo

Using the NVMe protocol to talk to SSDs in a storage system increases the efficiency and effective performance capacity of each processor and of the overall storage system. The slimmed down NVMe protocol stack reduces processing overhead compared to legacy SCSI-based protocols. This yields lower storage latency and more IOPS per processor. This is a good thing.

NVMe also delivers more bandwidth per SSD. Most NVMe SSDs connect via four (4) PCIe channels. This yields up to 4 GB/s bandwidth, an increase of more than 50% compared to the 2.4 GB/s maximum of a dual-ported SAS SSD. Since many all-flash arrays can saturate the path to the SSDs, this NVMe advantage translates directly to an increase in overall performance.
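A quick sanity check of that arithmetic, using commonly cited per-lane payload figures (roughly 985 MB/s for a PCIe 3.0 lane and 1.2 GB/s for a 12 Gb SAS lane):

```python
# Per-SSD bandwidth: typical x4 NVMe SSD versus dual-ported 12Gb SAS SSD.
pcie3_lane_mbs = 985      # ~usable MB/s per PCIe 3.0 lane
sas12_lane_mbs = 1200     # ~usable MB/s per 12Gb SAS lane

nvme_ssd_gbs = 4 * pcie3_lane_mbs / 1000   # x4 NVMe SSD
sas_ssd_gbs = 2 * sas12_lane_mbs / 1000    # dual-ported SAS SSD

print(f"NVMe x4 SSD: ~{nvme_ssd_gbs:.1f} GB/s")
print(f"SAS SSD:     ~{sas_ssd_gbs:.1f} GB/s")
print(f"Advantage:   ~{nvme_ssd_gbs / sas_ssd_gbs - 1:.0%}")
```

The result, roughly 3.9 GB/s versus 2.4 GB/s, is the 50-plus percent per-drive advantage described above.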

The newest generation of all-flash arrays combines these NVMe benefits with a new generation of Intel processors to deliver more performance in less space. It is this combination that, for example, enables HPE to claim that its new Nimble Storage arrays offer twice the scalability of the prior generation. This is a very good thing.

The early entrants into the NVMe array marketplace charged a substantial premium for NVMe performance. As NVMe goes mainstream, the price gap between NVMe SSDs and SAS SSDs is rapidly narrowing. With many vendors now offering NVMe arrays, competition should soon eliminate the price premium. Indeed, Pure Storage claims to have done so already.

Storage Class Memory is Non-Volatile Memory

Non-volatile memory (NVM) refers to memory that retains data even when power is removed. The term applies to many technologies that have been widely used for decades, including EPROM, ROM, and NAND flash (the type of NVM commonly used in SSDs and memory cards). NVM also refers to newer or less widely used technologies, including 3D XPoint, ReRAM, MRAM, and STT-RAM.

Because NVM properly refers to such a wide range of technologies, many people use the term Storage Class Memory (SCM) to refer to emerging byte-addressable non-volatile memory technologies that may soon be used in enterprise storage systems. These SCM technologies include 3D XPoint, ReRAM, MRAM, and STT-RAM. SCM offers several advantages compared to NAND flash:

  • Much lower latency
  • Much higher write endurance
  • Byte-addressable (like DRAM memory)

Storage Class Memory Enables Storage System Innovation

Byte-addressable non-volatile memory on NVMe/PCIe opens up a wonderful set of opportunities to system architects. Initially, storage class memory will generally be used as an expanded cache or as the highest performing tier of persistent storage. Thus it will complement rather than replace NAND flash memory in most storage systems. For example, HPE has announced it will use Intel Optane (3D XPoint) as an extension of DRAM cache. Their tests of HPE 3PAR 3D Cache produced a 50% reduction in latency and an 80% increase in IOPS.

Some of the innovative uses of SCM will probably never be mainstream, but will make sense for a specific set of use cases where microseconds can mean millions of dollars. For example, E8 Storage uses 100% Intel Optane SCM in its E8-X24 centralized NVMe appliance to deliver extreme performance.

Remain Calm, Look for Short Term Wins, Anticipate Major Changes

We humans have a tendency to overestimate short-term impacts and underestimate long-term ones. In a recent blog article we asserted that NVMe is an exciting and needed breakthrough, but that differences persist between what NVMe promises for all-flash array and hyperconverged solutions and what they can deliver in 2018. Nevertheless, IT professionals should look for real application- and requirements-based opportunities for NVMe, even in the short term.

Longer term, the emergence of NVMe and storage class memory are steps on the path to a new data centric architecture. As we have previously suggested, enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture.




DCIG 2018-19 All-flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2018-19 All-flash Array Buyer’s Guide edition developed from its enterprise storage array body of research. This 64-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-two (32) enterprise class all-flash arrays that achieved rankings of Recommended or Excellent based on a comprehensive scoring of product features. These products come from seven (7) vendors: Dell EMC, Hitachi Vantara, HPE, Huawei, NetApp, Pure Storage, and Tegile.

graphical icon for the All-flash Array Buyer's Guide

DCIG’s succinct analysis provides insight into the state of the all-flash array marketplace, the benefits organizations can expect to achieve, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product.

The DCIG 2018-19 All-flash Array Buyer’s Guide helps businesses drive time and cost out of the all-flash array selection process by:

  • Describing key product considerations and important changes in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Presenting product feature data in standardized one-page data sheets that facilitate rapid feature-based comparisons

It is in this context that DCIG presents the DCIG 2018-19 All-Flash Array Buyer’s Guide. As prior DCIG Buyer’s Guides have done, it puts at the fingertips of enterprises a resource that can assist them in this important buying decision.

Access to this Buyer’s Guide edition is available through the DCIG partner site: TechTrove.




DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide developed from its enterprise storage array body of research. This 72-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-eight (38) enterprise class all-flash arrays that achieved rankings of Recommended or Excellent. These products come from nine (9) vendors including Dell EMC, Hitachi Vantara, HPE, Huawei, IBM, Kaminario, NetApp, Pure Storage and Tegile.

DCIG’s succinct analysis provides insight into the state of the enterprise all-flash storage array marketplace, the benefits organizations can expect to achieve, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product.

The DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide helps businesses drive time and cost out of the all-flash array selection process by:

  • Describing key product considerations and important changes in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Presenting product feature data in standardized one-page data sheets that facilitate rapid feature-based comparisons

It is in this context that DCIG presents the DCIG 2018-19 Enterprise General Purpose All-Flash Array Buyer’s Guide. As prior DCIG Buyer’s Guides have done, it puts at the fingertips of enterprises a resource that can assist them in this important buying decision.

Access to this Buyer’s Guide is available through the DCIG partner site: TechTrove.




Six Best Practices for Implementing All-flash Arrays

Almost any article published today related to enterprise data storage will talk about the benefits of flash memory. However, while many organizations now use flash in their enterprises, most are only beginning to deploy it at a scale where it hosts more than a handful of their applications. As organizations look to deploy flash more broadly, here are six best practices to keep in mind.

The six best practices outlined below are united by a single overarching principle: the data center is not merely a collection of components; it is an interdependent system. Therefore, the results achieved by changing any key component will be constrained by its interactions with the performance limits of other components. Optimal results come from optimizing the data center as a system.

Photograph of scaffolding on a building

Photo by Dan Gold on Unsplash

Best Practice #1: Focus on Accelerating Applications

Business applications are the reason businesses run data centers. Therefore, accelerating applications is a useful focus in evaluating data center infrastructure investments. Eliminating storage performance bottlenecks by implementing an all-flash array (AFA) may reveal bottlenecks elsewhere in the infrastructure, including in the applications themselves.

Getting the maximum performance benefit from an AFA may require more or faster connections to the data center network, changes to how the network is structured, and other network configuration details. Application servers may require new network adapters, more DRAM, adjustments to cache sizes, and other server configuration details. Applications may require configuration changes or even some level of recoding. Some AFAs include utilities that will help identify the bottlenecks wherever they occur along the data path.

Best Practice #2: Mind the Failure Domain

Consolidation can yield dramatic savings, but it is prudent to consider the failure domain, and how much of an organization’s infrastructure should depend on any one component—including an all-flash array. While all the all-flash arrays that DCIG covers in its All-flash Array Buyer’s Guides are “highly available” by design, some are better suited to deliver high availability than others. Be sure the one you select matches your requirements and your data center design.

Best Practice #3: Use Quality of Service Features and Multi-tenancy to Consolidate Confidently

Quality of Service (QoS) features enable an array to give critical business applications priority access to storage resources. Multi-tenancy allocates resources to specific business units and/or departments and limits the percentage of resources that they can consume on the all-flash array at one time. Together, these features protect the array from being monopolized by any one application or bad actor.

Best Practice #4: Pursue Automation

Automation can dramatically reduce the amount of time spent on routine storage management and enable new levels of IT agility. This is where features such as predictive analytics come into play. They help to remove the risk associated with managing all-flash arrays in complex, consolidated environments. For instance, they can proactively intervene by identifying problems before they impact production apps and take steps to resolve them.

Best Practice #5: Realign Roles and Responsibilities

Implementing an all-flash storage strategy involves more than technology. It can, and should, reshape roles and responsibilities within the central IT department and between central IT, developers, and business unit technologists. Thinking through the possible changes with the various stakeholders can reduce fear, eliminate obstacles, and uncover opportunities to create additional value for the business.

Best Practice #6: Conduct a Proof of Concept Implementation

A good proof-of-concept can validate feature claims and uncover performance-limiting bottlenecks elsewhere in the infrastructure. The key to a good proof-of-concept is an environment where you can accurately replicate and test your production workloads on the AFA.

A Systems Approach Will Yield the Best Result

Organizations that approach the AFA evaluation from a systems perspective, recognizing and honoring the fact that the data center is an interdependent system of hardware, software, and people, and that apply these six best practices during an all-flash array purchase decision are far more likely to achieve the objectives that prompted them to look at all-flash arrays in the first place.

DCIG is preparing a series of all-flash array buyer’s guides that will help organizations considering the purchase of an all-flash array. DCIG buyer’s guides accelerate the evaluation process and facilitate better-informed decisions. Look for these buyer’s guides beginning in the second quarter of 2018. Visit the DCIG web site to discover more articles that provide actionable analysis for your data center infrastructure decisions.




Seven Significant Trends in the All-Flash Array Marketplace

Much has changed since DCIG published the DCIG 2017-18 All-Flash Array Buyer’s Guide just one year ago. The DCIG analyst team is in the final stages of preparing a fresh snapshot of the all-flash array (AFA) marketplace. As we reflected on the fresh all-flash array data and compared it to the data we collected just a year ago, we observed seven significant trends in the all-flash array marketplace that will influence buying decisions through 2019.

Trend #1: New Entrants, but Marketplace Consolidation Continues

Although new storage providers continue to enter the all-flash array marketplace (primarily focused on NVMe over Fabrics), the larger trend is continued consolidation. HPE acquired Nimble Storage. Western Digital acquired Tegile.

Every well-known provider has made at least one all-flash acquisition. Consequently, some providers are in the process of “rationalizing” their all-flash portfolios. For example, HPE has decided to position Nimble Storage AFAs as “secondary flash”. HPE also announced it will implement Nimble’s InfoSight predictive analytics platform across HPE’s entire portfolio of data center products, beginning with 3PAR StoreServ storage. Dell EMC seems to be positioning VMAX as its lead product for mission critical workloads, Unity for organizations that value simplified operations, XtremIO for VDI/test/dev, and SC for low cost capacity.

Nearly all the AFA providers also offer at least one hyperconverged infrastructure product. These hyperconverged products compete with AFAs for marketing and data center infrastructure budgets. This will create additional pressure on AFA providers and may drive further consolidation in the marketplace.

Trend #2: Flash Capacity is Increasing Dramatically

The raw capacity of the more than 100 all-flash arrays DCIG researched averaged 4.4 petabytes. This is a 5-fold increase compared to the products in the 2017-18 edition. The highest capacity product can provide 70 petabytes (PB) of all-flash capacity. This is a 7-fold increase. Thus, AFAs now offer the capacity required to be the storage resource for all active workloads in any organization.

graph of all-flash array capacity

Source: DCIG, n=102

Trend #3: Storage Density is Increasing Dramatically

The average AFA flash density of the products continues to climb. Fully half of the AFAs that DCIG researched achieve greater than 50 TB/RU. Some AFAs can provide over 200 TB/RU. The combination of all-flash performance and high storage density means that an AFA may be able to meet an organization’s performance and capacity requirements in 1/10th the space of legacy HDD storage systems and the first generation of all-flash arrays. This creates an opportunity for many organizations to realize significant data center cost reductions. Some have eliminated data centers. Others have been able to delay building new data centers.

graph of all-flash array storage density

Source: DCIG, n=102

Trend #4: Rapid Uptake in Components that Increase Performance

Increases in flash memory capacity and density are being matched with new components that increase array performance. These components include:

  • A new generation of multi-core CPUs from Intel
  • 32 Gb Fibre Channel and 25/40/100 Gb Ethernet
  • GPUs
  • ASICs to offload storage tasks
  • NVMe connectivity to SSDs

Each of these components can unlock more of the performance available from flash memory. Organizations should assess how well these components are integrated to systemically unlock the performance of flash memory and of their own applications.

chart of front end connectivity percentages

Source: DCIG, n=102

Trend #5: Unified Storage is the New Normal

The first generations of all-flash arrays were nearly all block-only SAN arrays. Tegile was perhaps the only truly unified AFA provider. Today, more than half of the all-flash arrays DCIG researched support unified storage. This support for multiple concurrent protocols creates an opportunity to consolidate and accelerate more types of workloads.

Trend #6: Most AFAs can use Public Cloud Storage as a Target

Most AFAs can now use public cloud storage as a target for cold data or for snapshots as part of a data protection mechanism. In many cases this target is actually one of the provider’s own arrays running in a cloud data center or a software-defined instance of its storage system running in one of the true public clouds.

Trend #7: Predictive Analytics Get Real

Some storage providers can document how predictive storage analytics is enabling increased availability, reliability, and application performance. The promise is huge. Progress varies. Every prospective all-flash array purchaser should incorporate predictive analytics capabilities into their evaluation of these products, particularly if the organization intends to consolidate multiple workloads onto a single all-flash array.

Conclusion: All Active Workloads Belong on All-Flash Storage

Any organization that has yet to adopt an all-flash storage infrastructure for all active workloads is operating at a competitive disadvantage. The current generation of all-flash arrays creates business value by:

  • making existing applications run faster even as data sets grow
  • accelerating application development
  • enabling IT departments to say, “Yes” to new workloads and then get those new workloads producing results in record time
  • driving down data center capital and operating costs

DCIG expects to finalize our analysis of all-flash arrays and present the resulting snapshot of this dynamic marketplace in a series of buyer’s guides during the second quarter of 2018.




Two Most Disruptive Storage Technologies at the NAB 2018 Show

The exhibit halls at the annual National Association of Broadcasters (NAB) show in Las Vegas always contain eye-popping displays highlighting recent technological advances as well as what is coming down the path in the world of media and entertainment. But behind NAB’s glitz and glamour lurks a hard, cold reality: every word recorded, every picture taken, and every scene filmed must be stored somewhere, usually multiple times, and available at a moment’s notice. It was in these halls at the NAB show that DCIG identified two start-ups with storage technologies poised to disrupt business as usual.

Storbyte. As I walked the floor at NAB, a tall, blond individual yanked me by the arm and asked if I had ever heard of Storbyte. Truthfully, the answer was no. This person turned out to be Steve Groenke, Storbyte’s CEO, and what ensued was a great series of conversations while at NAB.

Storbyte has come to market with an all-flash array. However, it took a very different approach to solving the problems of longevity, availability, and sustainable high write performance in SSDs and the storage systems built with them. What makes the product so disruptive is that it meets the demand for extreme sustained write performance by slowing down flash, and it does so at a fraction of what other all-flash arrays cost.

In today’s all-flash designs, every flash vendor is actively pursuing high performance storage. The approach they take is to maximize the bandwidth to each SSD, which means their systems must use PCIe-attached SSDs addressed via the new NVMe protocol.

Storbyte chose to tackle the problem differently. Its initial target customers had continuous, real-time capture and analysis requirements as they routinely burned through the most highly regarded enterprise class SSDs in about seven months. Two things killed NAND flash in these environments: heat and writes.

To address this problem, Storbyte reduces heat and the number of writes that each flash module experiences by incorporating sixteen mSATA SSDs into each of its Eco*Flash SSDs. Further, Storbyte slows down the CPUs in each of the mSATA modules in its system and then wide-stripes writes across all of them. According to Storbyte, this requires only about 25% of the available CPU on each mSATA module, so the modules use less power. By also managing the writes, Storbyte simultaneously extends the life of each mSATA module on its Eco*Flash drives.

The end result is a low-cost, high-performance, very dense, power-efficient all-flash array built using flash cards that rely upon “older”, “slower”, consumer-grade mSATA flash memory modules and that can drive 1.6 million IOPS in a 4U system. More notably, its systems cost about a quarter of what competitive “high performance” all-flash arrays cost, while packing more than a petabyte of raw flash memory capacity into 4U of rack space and using less power than almost any other all-flash array.

Wasabi. Greybeards in the storage world may recognize the Wasabi name as a provider of iSCSI SANs. Well, right name but different company. The new Wasabi recently came out of stealth mode as a low-cost, high-performance cloud storage provider. By low cost and high performance, we mean about 1/5 the cost of Amazon’s slowest offering (Glacier) at 6x the speed of Amazon’s highest-performing S3 offering. In other words, you can have your low-cost cloud storage and eat it too.

What makes its offering so compelling is that it offers storage capacity at $4.99/TB per month. That’s it. No additional egress charges every time you download files. No complicated monthly statements to decipher to figure out how much you are spending and where. No costly storage architects to hire to figure out how to tier data to optimize performance and costs. This translates into one fast cloud storage tier at a much lower cost than the Big 3 (Amazon AWS, Google Cloud, and Microsoft Azure).

Granted, Wasabi is a cloud storage start-up, so there is an element of buyer beware. However, it is privately owned and well-funded. It is experiencing explosive growth, with over 1,600 customers in just its first few months of operation. It anticipates raising another round of funding. It already has data centers scattered throughout the United States and around the world, with more scheduled to open.

Even so, past horror stories about cloud providers shutting their doors give every company pause about using a relatively unknown quantity to store their data. In these cases, Wasabi recommends that companies use its solution as their secondary cloud.

Its cloud offering is fully S3-compatible, and most companies want a cloud alternative anyway. In this instance, store copies of your data with both Amazon and Wasabi. Once stored, run any queries, production workloads, and the like against the Wasabi copy. The Amazon egress charges your company avoids by accessing its data on Wasabi will more than justify the risk of storing routinely accessed data there. Then, in the unlikely event Wasabi does go out of business (not that it has any plans to do so), companies still have a copy of their data with Amazon to fail back to.
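
Because Wasabi’s offering speaks the S3 API, this dual-copy approach can be scripted with standard S3 tooling. Here is a minimal sketch using Python and boto3; the bucket name, credentials, and Wasabi endpoint URL shown are assumptions for illustration and should be verified against Wasabi’s documentation for your region.

```python
# Illustrative sketch: keep the same object on both Amazon S3 and an
# S3-compatible Wasabi bucket using boto3. The bucket name, credentials, and
# Wasabi endpoint URL are assumptions for the example.
import boto3

amazon = boto3.client("s3")  # credentials resolved from the environment
wasabi = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",  # assumed Wasabi endpoint; verify for your region
    aws_access_key_id="WASABI_ACCESS_KEY",
    aws_secret_access_key="WASABI_SECRET_KEY",
)

def store_everywhere(bucket: str, key: str, path: str) -> None:
    """Write the object to both clouds so either can serve as the fallback copy."""
    for client in (amazon, wasabi):
        client.upload_file(path, bucket, key)

def read_from_wasabi(bucket: str, key: str, path: str) -> None:
    """Serve routine reads from Wasabi to avoid Amazon's egress charges."""
    wasabi.download_file(bucket, key, path)
```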

This argument seems to resonate well with prospects. While I could not substantiate these claims, Wasabi said it was seeing multi-petabyte deals come its way on the NAB show floor. By using Wasabi instead of Amazon in the use case just described, these companies can save hundreds of thousands of dollars per month just by avoiding Amazon’s egress charges while mitigating the risk associated with using a start-up cloud provider such as Wasabi.

Editor’s Note: The spelling of Storbyte was corrected on 4/24.




Predictive Analytics in Enterprise Storage: More Than Just Highfalutin Mumbo Jumbo

Enterprise storage startups are pushing the storage industry forward faster and in directions it may never have gone without them. It is because of these startups that flash memory is now the preferred place to store critical enterprise data. Startups also advanced the customer-friendly all-inclusive approach to software licensing, evergreen hardware refreshes, and pay-as-you-grow utility pricing. These startup-inspired changes delight customers, who are rewarding the startups with large follow-on purchases and Net Promoter Scores (NPS) previously unseen in this industry. Yet the greatest contribution startups may make to the enterprise storage industry is applying predictive analytics to storage.

The Benefits of Predictive Analytics for Enterprise Storage

[Image: Gilbert and Anne from Anne of Avonlea. Gilbert advises Anne to stop using “highfalutin mumbo jumbo” in her writing. (Note 1)]

The end goal of predictive analytics for the more visionary startups goes beyond eliminating downtime. Their goal is to enable data center infrastructures to autonomously optimize themselves for application availability, performance and total cost of ownership based on the customer’s priorities.

The vendors that commit to this path and execute better than their competitors are creating value for their customers. They are also enabling their own organizations to scale up revenues without scaling out staff. Vendors that succeed in applying predictive analytics to storage today also position themselves to win tomorrow in the era of software-defined data centers (SDDC) built on top of composable infrastructures.

To some people this may sound like a bunch of “highfalutin mumbo jumbo”, but vendors are making real progress in applying predictive analytics to enterprise storage and other elements of the technical infrastructure. These vendors and their customers are achieving meaningful benefits including:

  • Measurably reducing downtime
  • Avoiding preventable downtime
  • Optimizing application performance
  • Significantly reducing operational expenses
  • Improving NPS

HPE Quantifies the Benefits of InfoSight Predictive Analytics

Incumbent technology vendors are responding to this pressure from startups in a variety of ways. HPE purchased Nimble Storage, the prime mover in this space, and plans to extend the benefits of Nimble’s InfoSight predictive analytics to its other enterprise infrastructure products. HPE claims its Nimble Storage array customers are seeing the following benefits from InfoSight:

  • 99.9999% of measured availability across its installed base
  • 86% of problems are predicted and automatically resolved before customers even realize there is an issue
  • 85% less time spent managing and resolving storage-related problems
  • 79% savings in operational expense (OpEx)
  • 54% of pinpointed issues are not storage related and are identified through InfoSight cross-stack analytics
  • 42 minutes: the average level three engineer time required to resolve an issue
  • 100% of issues go directly to level three support engineers, no time wasted working through level one and level two engineers

The Current State of Affairs in Predictive Analytics

HPE is certainly not alone on this journey. In fact, vendors are claiming some use of predictive analytics for more than half of the all-flash arrays DCIG researched.

[Chart omitted: predictive analytics adoption across the all-flash arrays DCIG researched. Source: DCIG; N = 103]

Telemetry Data is the Foundation for Predictive Analytics

Storage array vendors use telemetry data collected from the installed product base in a variety of ways. Most vendors evaluate fault data and advise customers how to resolve problems, or they remotely log in and resolve problems for their customers.

Many all-flash arrays transmit not just fault data, but extensive additional telemetry data about workloads back to the vendors. This data includes IOPS, bandwidth, and latency associated with workloads, front end ports, storage pools and more. Some vendors apply predictive analytics and machine learning algorithms to data collected across the entire installed base to identify potential problems and optimization opportunities for each array in the installed base.
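
To make the idea concrete, here is a rough Python sketch of one way installed-base telemetry could feed such analytics: flag an array whose latency deviates sharply from its peers running similar workloads. The telemetry fields, sample values, and three-sigma threshold are invented for the example; no vendor’s actual model is implied.

```python
# A rough sketch of installed-base analytics: flag an array whose latency for a
# workload class sits far outside what its peers across the fleet report.
# Fields, values, and the three-sigma threshold are illustrative assumptions.
import statistics

telemetry = [
    {"array": "A1", "workload": "oltp", "latency_ms": 0.42},
    {"array": "A2", "workload": "oltp", "latency_ms": 0.39},
    {"array": "A3", "workload": "oltp", "latency_ms": 0.45},
    {"array": "A4", "workload": "oltp", "latency_ms": 0.41},
    {"array": "A5", "workload": "oltp", "latency_ms": 1.90},  # the array to investigate
]

def deviates_from_peers(samples, array_id, sigmas=3.0):
    """True if this array's readings sit well outside what its peers report."""
    peers = [s["latency_ms"] for s in samples if s["array"] != array_id]
    mine = [s["latency_ms"] for s in samples if s["array"] == array_id]
    mean, stdev = statistics.mean(peers), statistics.pstdev(peers)
    return any(stdev and abs(v - mean) > sigmas * stdev for v in mine)

for array_id in sorted({s["array"] for s in telemetry}):
    if deviates_from_peers(telemetry, array_id):
        print(f"{array_id}: latency deviates sharply from the installed base")
```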

Predictive Analytics Features that Matter

Proactive interventions identify something that is going to create a problem and then notify clients about the issue. Interventions may consist of providing guidance in how to avoid the problem or implementing the solution for the client. A wide range of interventions are possible including identifying the date when an array will reach full capacity or identifying a network configuration that could create a loop condition.
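
As a concrete illustration, the capacity-full-date intervention mentioned above can be approximated by fitting a straight line to recent used-capacity samples and projecting forward. The capacity figure and growth rate in this Python sketch are invented for illustration; it is not any vendor’s actual forecasting model.

```python
# Sketch of one proactive intervention: project the date an array reaches full
# capacity by fitting a line to recent used-capacity samples (invented figures).
from datetime import date, timedelta

capacity_tb = 500.0
samples = [(day, 300.0 + 1.8 * day) for day in range(30)]  # (day index, TB used)

def projected_full_date(samples, capacity_tb, today=None):
    today = today or date.today()
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    slope = sum((d - mean_x) * (u - mean_y) for d, u in samples) / sum(
        (d - mean_x) ** 2 for d, _ in samples
    )  # growth in TB per day
    if slope <= 0:
        return None  # capacity is flat or shrinking; nothing to warn about
    latest_day, latest_used = max(samples)
    return today + timedelta(days=(capacity_tb - latest_used) / slope)

print("Projected full-capacity date:", projected_full_date(samples, capacity_tb))
```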

Recommending configuration changes enhances application performance at a site by comparing the performance of the same application at similar sites, discovering optimal configurations, and recommending configuration changes at each site.

Tailored configuration changes prevent outages or application performance issues based on the vendor seeing and fixing problems caused by misconfigurations. The vendor deploys the fix to other sites that run the same applications, eliminating potential problems. The vendor goes beyond recommending changes by packaging the changes into an installation script that the customer can run, or by implementing the recommended changes on the customer’s behalf.

Tailored software upgrades eliminate outages based on the vendor seeing and fixing incompatibilities they discover between a software update and specific data center environments. These vendors use analytics to identify similar sites and avoid making the software update available to those other sites until they have resolved the incompatibilities. Consequently, site administrators are only presented with software updates that are believed to be safe for their environment.
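
One simple way to picture this gating logic is to compare each site’s configuration against known incompatibility signatures before offering the update. The version number, signature format, and site attributes in the sketch below are illustrative assumptions, not a description of any vendor’s update service.

```python
# Sketch of the upgrade-gating idea above: offer a software update only to
# sites whose configuration does not match a known incompatibility signature.
# The version number, signatures, and site attributes are illustrative.

known_incompatibilities = {
    "5.2.1": [{"hypervisor": "ESXi 6.0", "multipathing": "legacy"}],
}

sites = [
    {"name": "site-a", "hypervisor": "ESXi 6.7", "multipathing": "native"},
    {"name": "site-b", "hypervisor": "ESXi 6.0", "multipathing": "legacy"},
]

def eligible_sites(update_version, sites):
    """Return sites that can safely be offered this update."""
    signatures = known_incompatibilities.get(update_version, [])

    def matches(site, signature):
        return all(site.get(k) == v for k, v in signature.items())

    return [s["name"] for s in sites if not any(matches(s, sig) for sig in signatures)]

print(eligible_sites("5.2.1", sites))  # ['site-a']; site-b waits until the issue is fixed
```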

Predictive Analytics is a Significant Yet Largely Untapped Opportunity

Vendors are already creating much value by applying predictive analytics to enterprise storage. Yet no vendor or product comes close to delivering all the value that is possible. A huge opportunity remains, especially considering the trends toward software-defined data centers and composable infrastructures. Reflecting for even a few minutes on the substantial benefits that predictive analytics is already delivering should prompt every prospective all-flash array purchaser to incorporate predictive analytics capabilities into their evaluation of these products and the vendors that provide them.

Note 1: Image source: https://jamesmacmillan.wordpress.com/2012/04/02/highfalutin-mumbo-jumbo/




NVMe: Setting Realistic Expectations for 2018

Non-volatile Memory Express (NVMe) has captured the fancy of the enterprise storage world. Implementing NVMe on all-flash arrays or hyper-converged infrastructure appliances carries with it the promise that companies can leverage these solutions to achieve sub-millisecond response times, drive millions of IOPS, and deliver real-time application analytics and transaction processing. But differences persist between what NVMe promises for these solutions and what they can deliver. Here is a practical look at what NVMe delivers on these solutions in 2018.

First and foremost, NVMe is an exciting and needed breakthrough for delivering on the performance characteristics of flash. Unlike the SCSI protocol it replaces, which was designed and implemented with mechanical hard disk drives (HDDs) in mind, NVMe comes to market intended for use with today’s flash-based systems. In fact, as evidence of the biggest difference between SCSI and NVMe, NVMe cannot even interface with HDDs. NVMe is intended to speak flash.

As part of speaking flash, NVMe no longer concerns itself with the limitations of mechanical HDDs. By way of example, HDDs can only handle one command at a time. Whether it is a read or a write, the entire HDD is committed to completing that one command before it can start processing the next one, and it has only one channel delivering commands to it.

The limits of flash, and by extension NVMe, are exponentially higher. NVMe can support 65,535 queues into the flash media and queue up to 64,000 commands per queue. In other words, over 4 billion commands can theoretically be issued to a single flash device at any time.
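
Using the figures quoted above, the theoretical outstanding-command count works out as follows:

```python
# Worked version of the queue math above: the theoretical number of commands
# that could be outstanding against a single NVMe device at one time.
queues = 65_535
commands_per_queue = 64_000
print(f"{queues * commands_per_queue:,}")  # 4,194,240,000 -- i.e., over 4 billion
```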

Of course, just because NVMe can support over 4 billion commands does not mean that any product or application currently comes close to doing that. Should they ever do so, and they probably will at some point, it is plausible that published IOPS numbers might reach the tens or hundreds of millions. But as of early 2018, everyone must still develop and mature their infrastructure and applications to support that type of throughput. Further, NVMe as a protocol must continue to mature its interfaces to support those kinds of workloads.

So as of early 2018, here is what enterprises can realistically expect from NVMe:

1. If you want NVMe on your all-flash array, you have a short list from which to choose. NVMe-capable all-flash arrays that have NVMe interfaces to all SSDs are primarily available from Dell EMC, Huawei, Pivot3, Pure Storage, and Tegile. The number of all-flash arrays that currently support NVMe remains in the minority, with only 18% of the 100+ all-flash arrays that DCIG evaluated supporting NVMe connectivity to all back-end SSDs.

[Chart omitted: share of evaluated all-flash arrays supporting NVMe connectivity to back-end SSDs. Source: DCIG]

The majority of AFAs currently shipping support a 3, 6, or 12 Gb SAS interface to their backend flash media for good reason: few applications can take full advantage of NVMe’s capabilities. As both applications and NVMe mature, expect the number of AFAs that support NVMe to increase.

2. Your connectivity between your server and shared storage array will likely remain the same in 2018. Enterprises using NAS protocols such as CIFS or NFS, or SAN protocols such as FC or iSCSI, should expect to keep doing so for 2018 and probably for the next few years. While new standards such as NVMe-oF are emerging and deliver millions of IOPS when implemented, as evidenced by early solutions from providers such as E8 Storage, NVMe is not yet well suited to act as a shared storage protocol between servers and all-flash arrays. For now, NVMe remains best suited for communication between storage array controllers and their back-end flash media or on servers that have internal flash drives. To use NVMe for any other use cases in enterprise environments is, at this point, premature.

3. NVMe is a better fit for hyper-converged infrastructure solutions than AFAs for now. Enterprises expecting a performance boost from NVMe will likely see it whether they deploy it in hyper-converged infrastructure or AFA solutions. However, enterprises must connect to AFAs using existing storage protocols such as those listed above. Conversely, applications running on hyper-converged infrastructure solutions that support NVMe may see better performance than those running on AFAs. With AFAs, protocol translation must still occur over the NAS or SAN storage network to reach the NVMe-enabled AFA. Hyper-converged infrastructure solutions negate the need for this additional protocol conversion.

4. NVMe will improve performance, but verify your applications are ready. Stories about the performance improvements that NVMe offers are real and validated in the real world. However, these same users also find that some of their applications using these NVMe-based all-flash arrays are not getting the full benefit they expected because, in part, the applications cannot handle the performance. Some users report discovering that their applications have wait times built into them because the applications were designed to work with slower HDDs. Until the applications themselves are updated to account for AFAs by having those preconfigured wait times removed or minimized, the applications may become the new choke point that prevents enterprises from reaping the full performance benefits that NVMe has to offer.

NVMe is almost without doubt the future for communicating with flash media. But in early 2018, enterprises need to set realistic expectations as to how much of a performance boost NVMe will provide when deployed. Sub-millisecond response times are certainly a realistic expectation, and they may be almost a necessity to justify the added expense of an NVMe array, since many SAS-based arrays can achieve this same metric. Further, once an enterprise commits to using NVMe, it also commits to using only flash media, since NVMe provides no option to interface with HDDs.




All-inclusive Licensing is All the Rage in All-flash Arrays

Early in my IT career, a friend who owns a software company told me he had been informed by a peer that he wasn’t charging enough for his software. This peer advised him to adopt a “flinch-based” approach to pricing. He said my friend should start with a base licensing cost that meets margin requirements, and then keep adding on other costs until the prospective customer flinches. My friend found that approach offensive, and so do I. I don’t know how common the “flinch-based” approach is, but as a purchaser of technology goods and services I learned to flinch early and often. I was reminded of this “flinch-based” approach when evaluating some traditional enterprise storage products. Every capability was an extra-cost “option”: each protocol, each client connection, each snapshot feature, each integration point. Happily, this a-la-carte approach to licensing is becoming a thing of the past as vendors embrace all-inclusive licensing for their all-flash array products.

The Trend Toward All-inclusive Licensing in All-Flash Arrays

In the process of updating DCIG’s research on all-flash arrays, we discovered a clear trend toward all-inclusive software feature licensing. This trend was initiated by all-flash array startups. Now even the largest traditional vendors are moving toward all-inclusive licensing. HPE made this change in 2017 for its 3PAR StoreServ products. Now Dell EMC is moving this direction with its all-flash Unity products.

Drivers of All-inclusive Licensing in All-Flash Arrays

Competition from storage startups has played an important role in moving the storage industry toward all-inclusive software feature licensing. Some startups embraced all-inclusive licensing because they knew prospective customers were frustrated by the a-la-carte approach. Others, such as Tegile, embraced all-inclusive licensing from the beginning because many of the software features were inherent to the design of their storage systems. Whatever the motivation, the availability of all-inclusive software feature licensing from these startups put pressure on other vendors to adopt the approach.

Technology advances are also driving the movement toward all-inclusive licensing. Advances in multi-core, multi-gigahertz CPUs from Intel make it practical to incorporate features such as in-line compression and in-line deduplication into storage systems. These in-line data efficiency features are a good fit with the wear and performance characteristics of NAND flash, and they help to reduce the overall cost and data center footprint of an all-flash array.
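
To make the in-line data efficiency idea concrete, here is a minimal Python sketch of content-addressed deduplication with in-line compression on fixed-size blocks. It captures only the core concept; real arrays add variable-size chunking, metadata protection, and garbage collection, and no specific vendor’s implementation is implied.

```python
# A minimal sketch of in-line data efficiency: content-addressed deduplication
# with in-line compression on fixed-size blocks. Illustrative only.
import hashlib
import zlib

BLOCK_SIZE = 4096

class DedupStore:
    def __init__(self):
        self.blocks = {}      # fingerprint -> compressed block (stored once)
        self.volume_map = []  # logical block order -> fingerprint

    def write(self, data: bytes) -> None:
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            fingerprint = hashlib.sha256(block).hexdigest()
            if fingerprint not in self.blocks:                   # only new content consumes flash
                self.blocks[fingerprint] = zlib.compress(block)  # in-line compression
            self.volume_map.append(fingerprint)

    def read(self) -> bytes:
        return b"".join(zlib.decompress(self.blocks[fp]) for fp in self.volume_map)

store = DedupStore()
store.write(b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE)  # three identical blocks plus one unique
assert store.read() == b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
print(f"logical blocks: {len(store.volume_map)}, unique blocks stored: {len(store.blocks)}")
```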

The Value of All-inclusive Licensing for All-Flash Array Adopters

All-inclusive licensing is one of the five features that contribute to delivering simplicity on all-flash arrays. Vendors that include all software features fully licensed as part of the standard array package create extra value for purchasers by reducing the number of decision points in the purchasing process and smoothing the path to full utilization of the array’s capabilities.

All-inclusive licensing enables agility. Separate license fees for software features reduced the agility of IT departments in responding to changing business requirements because the ordering and purchasing processes added weeks or even months to implementation. All-inclusive licensing eliminates this purchasing delay.

The Value of All-inclusive Licensing for All-flash Array Vendors

All-inclusive licensing translates to more sales. Each decision point during the purchase process slows down the process and creates another opportunity for a customer to say, “No.” All-inclusive licensing smooths the path to purchase. Since all-inclusive licensing also fosters full use of the product’s features and the value customers derive from the product, it should also smooth the path to follow-on sales.

Happier engineers. This benefit may be more abstract, but the best engineers want what they create to actually get used and make a difference. All-inclusive licensing makes it more likely that the features engineers create actually get used.

Bundles May Make Sense for Legacy Solutions

Based on the rationale described above, all-inclusive software feature licensing provides a superior approach to creating value in all-flash arrays. But for vendors seeking to transition from an a-la-carte model, bundles may be a more palatable approach. Bundles enable the vendor to offer some of the benefits of true all-inclusive licensing to new customers without offending existing customers. In cases where a feature depends on technology licensed from another vendor, bundling also offers a way to pass 3rd party licensing costs through to the customer.

Vendors that offer all-inclusive software feature licenses or comprehensive bundles add real value to their all-flash array products, and deserve priority consideration from organizations seeking maximum value, simplicity and agility from their all-flash array purchase.

 




The Five (and soon to be Six) Classifications of All-flash Arrays

The all-flash array market has settled down considerably in the last few years. While there are more all-flash arrays (90+ models) and vendors (20+) than ever before, the ways in which these models can be grouped and classified have also become clearer. As DCIG looks forward to releasing a series of Buyer’s Guides covering all-flash arrays in the coming months, it can break these all-flash arrays into five (and soon to be six) general classifications based upon their respective architectures and use cases.

When flash first started to find its way into storage arrays around 2010, all-flash arrays fell into two general groupings. On one side, you had existing storage arrays built to hold and manage hard disk drives (HDDs) being re-purposed and filled with solid state drives (SSDs). On the other side, you had new start-ups bringing to market all-flash arrays purpose-built to manage and optimize flash.

Unfortunately, neither one really addressed the concerns that enterprises had. Existing storage arrays addressed data management, stability, and reliability concerns, but did not really deliver on the full potential of flash’s performance characteristics. New all-flash arrays purpose-built for flash largely delivered on flash’s potential for performance, but still left question marks in enterprises’ minds about stability, reliability, and levels of support.

Those concerns on both ends of the spectrum have largely been put to rest by the current generation of all-flash arrays. While differences in data management, performance, reliability, scalability, and stability remain between these platforms, the gaps are not nearly as wide as they once were. It is as these gaps have closed that five specific all-flash array architectures have emerged that make certain models better positioned to handle certain use cases than others. These five include:

  1. Elastic all-flash arrays. This classification of all-flash arrays is best represented by the generation of all-flash arrays that were purpose-built for flash. This includes models such as those from Dell EMC XtremIO, Kaminario, Nimbus Data, and Pure Storage, though DCIG would also include models from HPE Nimble and Dell EMC Isilon in this group. The defining characteristic of this group is its ability to scale out, which underpins the “set-it-and-forget-it” nature of these all-flash array models.
  2. Enterprise all-flash arrays. These arrays are best represented by Dell EMC VMAX, HPE 3PAR, Huawei OceanStor, IBM, and NetApp AFF models. These arrays offer both scale-out and scale-up configurations and are well suited to provide the high levels of performance (1+ million IOPS), do consolidation, and handle the mixed workloads found in enterprise environments.
  3. General purpose all-flash arrays. This classification of all-flash arrays is best represented by products such as Dell EMC Unity, FUJITSU Storage ETERNUS AF Series, Hitachi Vantara VSP F series, Nexsan Unity, NEC Storage M Series and Western Digital Tegile. These dual-controller storage arrays have updated their controllers to better manage and optimize the performance of flash while bringing forward more mature data management capabilities.
  4. High performance all-flash arrays. This is an emerging class of all-flash arrays which are only starting to come to market now from vendors such as E8 Storage and, later this year, from Kaminario. This class of all-flash arrays will redefine “high performance” by offering 10+ million IOPS using NVMe-oF on their front-end interfaces to hosts. While the practicality of implementing these all-flash array solutions is limited in the near term, these arrays provide an early glimpse of what is coming in the not-too-distant future.
  5. Utility all-flash arrays. This final grouping of all-flash arrays includes products such as the HPE MSA series, the IBM FlashSystem 900, the NetApp E-Series, and SanDisk InfiniFlash. These are for organizations that intend to connect only a relatively small number of applications to the all-flash array, need high levels of performance and reliability, and not a whole lot more. Due to the reduced number of data management features on these arrays and their purpose-built nature, they often come at a very attractive price point on a raw per-GB basis when compared with the other all-flash arrays mentioned here.

This maturing of the all-flash array market, however, comes with a caveat. It appears another round of maturation will occur in the next 5-10 years, creating a sixth and perhaps final class of all-flash arrays: Composable All-flash Arrays.

This final classification may actually serve as the end game for all five of the current all-flash array classifications as software-defined storage takes hold in enterprises and the need for all-flash arrays to deliver both data management and performance decreases. While that day does not appear imminent, in light of how quickly enterprises are adopting cloud architectures and software-defined storage, the adoption and spread of composable all-flash arrays may occur more quickly than many suspect.

Editor’s note: This blog entry was updated on February 5, 2018, for grammar and technical accuracy of the AFAs mentioned.
