DCIG Will Provide Update on All-flash Array Advances at Flash Memory Summit 2019

Flash Memory Summit is the world’s largest storage industry event featuring the trends, innovations, and influencers driving the adoption of flash memory. DCIG will again present at the Summit this year. DCIG’s presentation will draw from its independent research into all-flash arrays and the Competitive Intelligence that DCIG performs on behalf of its clients.

The session will highlight recent developments in all-flash arrays and the rapidly changing competitive landscape for these products. Ken Clipperton, DCIG’s Lead Analyst for Storage, will speak on Tuesday, August 6th, from 9:45-10:50 AM. The session is called BMKT-101B-1: Annual Update on Flash Arrays.

Just as DCIG does in its reports, Mr. Clipperton will discuss both the “What” and the “So what?” of these advances in all-flash arrays. The presentation will cover the changes occurring in all-flash arrays, the value they create for organizations implementing them, and the key topic areas that DCIG focuses on in its competitive intelligence reports.

Mr. Clipperton will cover the following topics:

  • Advances in front-end connectivity to the storage network/application servers
  • Advances in back-end connectivity to storage media
  • Integration of storage-class memory
  • Integrations with other elements in the data center
  • Cloud connectivity
  • Delivery models
  • Predictive analytics
  • Proactive support
  • Licensing
  • Storage-as-a-Service (OpEx model)
  • Guarantee programs
  • Expectations about developments in the near-term future

If you will be at FMS, we hope that you will be able to attend this session and then stick around to introduce yourself and share your perspectives on where the AFA marketplace is heading.

Whether or not you are able to attend FMS or DCIG’s session at the Summit, we invite you to sign up for our newsletter. To request more information about DCIG’s Competitive Intelligence services, click on this link.

Be sure to check back on the DCIG website after the event to get our take on the Summit and the products we believe deserve “Best in Show” honors.




Four Flash Memory Trends Influencing the Development of Tomorrow’s All-flash Arrays

The annual Flash Memory Summit is where vendors reveal to the world the future of storage technology. Many companies announced innovative products and technical advances at last week’s 2017 Flash Memory Summit that give enterprises a good understanding of what to expect from today’s all-flash products as well as a glimpse into tomorrow’s. These previews of the next generation of flash products revealed four flash memory trends sure to influence the development of the next generation of all-flash arrays.

Flash Memory Trend #1: Storage class memory is real, and it is really impressive. Storage class memory (SCM) is a term applied to several different technologies that share two important characteristics. Like flash memory, storage class memory is non-volatile: it retains data after the power is shut off. Like DRAM, storage class memory offers very low latency and is byte-addressable, meaning applications can access it the same way they access DRAM. Together, these characteristics enable greater-than-10x improvements in system and application performance.
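
To make the byte-addressable point concrete: from an application’s perspective, persistent memory can be mapped into the address space and read or written with ordinary loads and stores instead of block I/O. The following is a minimal sketch that assumes a Linux host exposing persistent memory as a file on a DAX-mounted filesystem; the file path is a hypothetical placeholder, and this illustrates byte-addressability in general rather than any particular SCM product.

```python
# A minimal sketch of what "byte-addressable" means to an application,
# assuming a Linux host that exposes persistent memory as a file on a
# DAX-mounted filesystem. The path is a hypothetical placeholder and the
# file is assumed to already exist and be at least 4 KiB in size.
import mmap
import os

PMEM_FILE = "/mnt/pmem0/example.dat"  # hypothetical DAX-backed file

fd = os.open(PMEM_FILE, os.O_RDWR)
buf = mmap.mmap(fd, 4096)   # map 4 KiB of the persistent region into memory

buf[0:5] = b"hello"         # ordinary store semantics, no block I/O path
print(bytes(buf[0:5]))      # ordinary load semantics; the data is non-volatile

buf.flush()                 # make sure the stores reach the persistent media
buf.close()
os.close(fd)
```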

Two years ago, Intel and Micron rocked the conference with the announcement of 3D XPoint storage class memory. In the run-up to this year’s Flash Memory Summit, Intel announced both consumer and enterprise SSDs based on 3D XPoint technology under the Optane brand. These products are shipping now for $2.50 to $5.00 per GB. Initial capacities are reminiscent of 10K and 15K enterprise hard drives. SCM-based SSDs outperform flash memory SSDs in terms of consistent low latency and high bandwidth.

Screen shot of Everspin nvNITRO bandwidth

Other storage class memory technologies also moved out of the lab and into products. Everspin announced 1 Gb MRAM chips, quadrupling the density of last year’s 256 Mb chip. Everspin demonstrated the performance of a single ST-MRAM SSD in a standard desktop PC. The nvNITRO PCIe card achieved a sustained write bandwidth of 5.8 GB/second and nearly 1.5 Million IOPS. Everspin nvNITRO cards are available in 1 GB and 2 GB capacities today, with 16 GB PCIe cards expected by the end of the year.

CROSSBAR announced that it has licensed its ReRAM technology to multiple memory manufacturers. CROSSBAR displayed sample wafers that were produced by two different licensees. Products based on the technology are in development.

DRAM and flash memory will continue to play important roles for the foreseeable future. Nevertheless, each type of SCM enables the greater-than-10x improvements in performance that inspire new system designs. In the near term, storage class memory will be used as a cache, a write buffer, or as a small pool of high performance storage for database transaction logs. In some cases it will also be used as an expanded pool of system memory. SCM may also replace DRAM in many SSDs.

NAND Roadmap

Flash Memory Trend #2: There is still a lot of room for innovation in flash memory. Every flash memory manufacturer announced advances in flash memory technology, and manufacturers provided roadmaps showing that flash memory will be the predominant storage technology for years to come.

Samsung’s keynote presenter brandished the 32 TB 2.5” SSD it announced at the conference, doubling the 16 TB capacity Samsung announced on the same stage just one year ago. Although the presenter was rightly proud of the achievement, the audience’s response was muted. I hope our response wasn’t discouraging, but frankly, we expected Samsung to pull this off. The presenter reaffirmed our expectations by telling us that Samsung will continue this pace of advancement in NAND flash for at least the next five years.

Flash Memory Trend #3: NVMe and NVMe-oF are important steps on the path to the future. NVMe is the new standard protocol for communicating with flash memory and SCM-based storage, and it appears that every enterprise vendor is incorporating NVMe into its products. The availability of dual-ported NVMe SSDs from multiple suppliers is hastening the transition to NVMe in enterprise storage systems, as will the hot-swap capability for NVMe SSDs announced at the event.

NVMe-over-Fabrics (NVMe-oF) is the new standard for accessing storage across a network. Pure Storage recently announced the all-NVMe FlashArray//X. At FMS, AccelStor announced its second-generation all-NVMe AccelStor NeoSapphire H810 array. E8 Storage and Kaminario also announced NVMe-based arrays.

Micron discussed its Solid Scale scale-out all-flash array with us. Solid Scale is based on Micron’s new NVMe 9200 SSDs and Excelero’s NVMesh software. NVMesh creates a server SAN using the same underlying technology as NVMe-oF. In the case of Solid Scale, the servers are dedicated storage nodes.

Other vendors told us about their forthcoming NVMe and NVMe-oF arrays. In every case, these products promise substantial improvements in latency and throughput compared to existing all-flash arrays and should deliver millions of IOPS.

Gen-Z Concept Chassis

Flash Memory Trend #4: The future is data-centric, not processor-centric. Ongoing advances in flash memory and storage class memory are vitally important, yet they introduce new challenges for storage system designers and data center architects. Although NVMe over PCIe can deliver 10x improvements in some storage metrics, PCIe is already a bottleneck that limits overall system performance.

We ultimately need a new data access technology, one that will enable much higher performance. Gen-Z promises to be exactly that. Gen-Z is “an open systems interconnect that enables memory access to data and devices via direct-attached, switched, or fabric topologies. This means Gen-Z will allow any device to communicate with any other device as if it were communicating with its local memory.”

Barry McAuliffe (HPE) and Kurtis Bowman (Dell EMC)

I spent a couple of hours with the Gen-Z Consortium folks and came away impressed. The consortium is working to enable a composable infrastructure in which every type of performance resource becomes a virtualized pool that can be allocated to tasks as needed. The technology was ready to be demonstrated in an FPGA-based implementation, but a fire in the exhibit hall prevented access. Instead, we saw a conceptual representation of a Gen-Z based system.

The Gen-Z Consortium is creating an open interconnect technology on top of which participating organizations can innovate. There are already more than 40 participating organizations, including Dell EMC, HPE, Huawei, IBM, Broadcom, and Mellanox. I found it refreshing to observe staff from HPE (Barry McAuliffe, VP and Secretary of Gen-Z) and Dell EMC (Kurtis Bowman, President of Gen-Z) working together to advance this data-centric architecture.

Implications of These Flash Memory Trends for Enterprise IT

Vendors are shipping storage class memory products today, with more to come by the end of the year. Flash memory manufacturers continue to innovate, and will extend the viability of flash memory as a core data center technology for at least another five years. NVMe and NVMe-oF are real today, and are key technologies for the next generation of storage systems.

Enterprise technologists should plan 2017 through 2020 technology refreshes around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.

Beyond 2020, enterprise technologists should plan their technology refreshes around a composable, data-centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.




NAB Starts with a Showstopper: Software Defined Storage Starting to Have Some Defining Moments

I arrived in Las Vegas last night to spend three days and nights with a forecasted 90,000 other attendees at the National Association of Broadcasters (NAB) show. One of NAB’s opening events – and my first stop at the show – was the ShowStoppers event at the Wynn Hotel and Casino near the Las Vegas Convention Center. There, analysts and press got to spend a couple of uninterrupted hours talking with select providers about numerous emerging technologies, one of which was software defined storage.

Ever since the term “software defined storage” first started gaining momentum as the new buzzword in storage circles, I have to admit I have been and remain a bit jaded as to whether or not software defined storage will take off and gain adoption. I was around back in the 2001-2004 time frame when storage virtualization was going to solve all corporate storage challenges. After riding a huge wave of hype (much larger than anything I have so far seen from the software defined storage crowd), storage virtualization came crashing back down to earth – hard.

Yet there I was at the NAB ShowStoppers event last night, spending a good 45 minutes to an hour discussing this topic with Avere and Scality, both of which were exhibiting at the event. In our conversation, Avere said it has observed a definite pick-up in interest in software defined storage among large enterprises in the last year – as in, organizations are interested in actually acquiring it.

According to Rebecca Thompson, Avere’s VP of Marketing, those who display the greatest interest in software defined storage are already well down the path of adopting software defined networking. As such, these organizations are much more open to the conversation of also implementing software defined storage.

Avere tends to see more of these initiatives than others, as its solutions typically show up in large organizations as edge filers that can function as gateways to back-end public clouds. It is in public cloud storage connectivity that Avere is specifically starting to see interest in software defined storage, as companies want the flexibility to access these public storage pools using S3 APIs. While Amazon is still the big player in this space, more public cloud storage providers are starting to offer support for these APIs, though, based upon Rebecca’s comments, support for them is still limited to just a handful of providers (under five).

Organizations then use Avere’s FXT appliances to store frequently accessed data locally (on premise) on flash or disk, and then tier data to these various cloud storage providers via the S3 protocols as it ages. Since a number of tape storage providers are also releasing tape libraries that are beginning to support these S3 protocols (Oracle and Spectra Logic specifically come to mind), I asked whether Avere expects its clients to tier aging data to these libraries as well. In the near term, she said, probably not. Avere’s current and prospective clients are primarily looking to store data in active archives, and Avere did not feel tape met that criteria.
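
To illustrate the tiering pattern Rebecca described, here is a rough sketch of what moving aged files to an S3-compatible endpoint can look like. It assumes the boto3 Python library; the endpoint URL, bucket name, source directory, and age threshold are hypothetical placeholders, and the sketch is not specific to Avere’s FXT appliances or any particular cloud provider.

```python
# A minimal sketch of the tiering pattern described above: push files that
# have aged past a threshold to an S3-compatible object store. It assumes
# the boto3 library; the endpoint URL, bucket, directory, and threshold are
# hypothetical placeholders and are not specific to Avere or any provider.
import os
import time
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-cloud.com",  # any S3-compatible provider
)

BUCKET = "cold-archive"              # hypothetical bucket for aged data
AGE_THRESHOLD = 90 * 24 * 3600       # tier files not accessed in 90 days
SOURCE_DIR = "/data/projects"        # hypothetical local (on-premise) directory

for name in os.listdir(SOURCE_DIR):
    path = os.path.join(SOURCE_DIR, name)
    if os.path.isfile(path) and time.time() - os.path.getatime(path) > AGE_THRESHOLD:
        s3.upload_file(path, BUCKET, name)  # PUT the aged file via the S3 API
```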

After talking to Avere, the DCIG team wandered over to speak with Scality, which had the term “Software Defined Storage” prominently featured on its booth just around the corner from Avere’s. In talking with Scality about this topic, it confirmed what Avere shared with DCIG – that it has also seen a definite uptick in user interest in software defined storage, to the point where organizations are no longer just talking about it but are actually buying it. Also, like Avere, it said that organizations with multi-petabyte data storage requirements were the most interested in it.

This piqued my interest, as one of my concerns with software defined anything (networking or storage) is how support calls are handled – particularly in the case of storage. Scality shared that it has agreements with all of the major server providers (Dell, HP, SuperMicro, etc.) to resell its software and function as the first line of support. If anything, Scality has found in its own research that when an issue occurs, its software is the root cause only about 5 percent of the time. The rest of the time it is a hardware issue.

More notably, Scality is finding that by remaining hardware agnostic and staying focused solely on being a software provider, any underlying issues (hardware or software) are dealt with more thoroughly. Scality felt that providers who offer both the hardware and software in a single appliance tend to muddy the waters as to where a problem actually resides. They never fully troubleshoot the issue because they fail to identify its root cause, so it never fully gets fixed. Instead, they may keep replacing or upgrading hardware until the problem goes away, only to have it potentially resurface later.

Again, I am not sure about the accuracy of that viewpoint, but I can certainly see how that could occur.

Regardless, these are my first insights gained from NAB. I will try to keep posting daily about the information I glean while at the show.
