Nanoseconds, Stubborn SAS, and Other Takeaways from the Flash Memory Summit 2019

Every year at the Flash Memory Summit held in Santa Clara, CA, attendees get a firsthand look at the technologies that will impact the next generation of storage. This year many of the innovations centered on forthcoming interconnects that will better deliver on the performance that flash offers today. Here are DCIG’s main takeaways from this year’s event.

Takeaway #1 – Nanosecond Response Times Demonstrated

PCI Express (PCIe) fabrics can deliver nanosecond response times using resources (CPU, memory, storage) situated in different physical enclosures. In a meeting with DCIG, PCIe fabric provider Dolphin Interconnect Solutions demonstrated how an application could access resources (CPU, flash storage, and memory) on different devices across a PCIe fabric in nanoseconds. Separately, GigaIO announced 500-nanosecond end-to-end latency using its FabreX PCIe switches. While everyone else at the show was boasting about microsecond response times, Dolphin and GigaIO introduced nanoseconds into the conversation. Both companies ship their solutions now.

Takeaway #2 – Impact of NVMe/TCP Standard Confirmed

Ever since we heard that the industry planned to port NVMe-oF to TCP, DCIG expected the move to accelerate the overall adoption of NVMe-oF. Toshiba confirmed our suspicions. In discussing its KumoScale product with DCIG, Toshiba shared that it has seen a 10x jump in sales since the industry ratified the NVMe/TCP standard. This stems from all the reasons DCIG stated in a previous blog entry: TCP is well understood, Ethernet is widely deployed, costs are low, and organizations can use their existing network infrastructure.

Takeaway #3 – Fibre Channel Market Healthy, Driven by Enterprise All-flash Arrays

According to FCIA leaders, the Fibre Channel (FC) market is healthy. FC vendors are selling 8 million ports per year. The enterprise all-flash array market is driving FC infrastructure sales, and 32 Gb FC is shipping in volume. Indeed, DCIG’s research revealed 37 all-flash arrays that support 32 Gb FC connectivity.

Front-end connectivity is often the bottleneck in all-flash array performance, so doubling the speed of those connections can double the performance of the array. Beyond 32 Gb FC, the FCIA has already ratified the 64 Gb FC standard and is working on 128 Gb FC. Consequently, FC has a long future in enterprise data centers.

FC-NVMe brings the benefits of NVMe-oF to Fibre Channel networks. FC-NVMe reduces protocol overhead, enabling GEN 5 (16 Gb FC) infrastructure to accomplish the same amount of work while consuming about half the CPU of standard FC.

Takeaway #4 – PCIe Will Not be Denied

All resources (CPU, memory, and flash storage) can connect with one another and communicate over PCIe. Further, using PCIe eliminates the overhead that comes with introducing storage protocols (FC, InfiniBand, iSCSI, SCSI), since all these resources natively speak the PCIe protocol. With the PCIe 5.0 standard formally ratified in May 2019 and discussions about PCIe 6.0 already occurring, the future seems bright for the growing adoption of this protocol. Further, AMD and Intel have both thrown their support behind it.

Takeaway #5 – SAS Will Stubbornly Hang On

DCIG’s research finds that over 75% of AFAs support 12 Gb/s SAS today. This predominance makes the introduction of 24G SAS a logical next step for these arrays. SAS is a proven, mature, and economical interconnect, and few applications can yet drive the performance limits of 12 Gb/s SAS, much less the forthcoming 24G standard. Adding to the likelihood that 24G moves forward, the SCSI Trade Association (STA) reported that the recent 24G plugfest went well.

Editor’s Note: This blog entry was updated on August 9, 2019, to correct grammatical mistakes and add some links.



iXsystems FreeNAS Mini XL+ and Mini E Expand the Reach of Open Source Storage to Small Offices

On July 25, 2019, iXsystems® announced two new storage systems. The FreeNAS® Mini XL+ provides a new top-end model in the FreeNAS Mini product line, and the FreeNAS Mini E provides a new entry-level model. These servers are mini-sized yet provide professional-grade network-attached storage. 

FreeNAS Minis are Professional-grade and Whisper Quiet

[Image: FreeNAS Mini E and Mini XL+ storage systems. Source: iXsystems]

The FreeNAS Mini XL+ and Mini E incorporate technologies normally associated with enterprise servers, such as ECC memory, out-of-band management, and NAS-grade hard drives. Both are engineered for and powered by the widely adopted ZFS-based FreeNAS Open Source storage OS. Thus, the Mini XL+ and Mini E provide file, block, and S3 object storage to meet nearly any SOHO/SMB storage requirement.

Early in my IT career, I purchased a tower server that was marketed to small businesses as a convenient under-desk solution. The noise and heat generated by this server quickly helped me understand why so many small business servers were running in closets. The FreeNAS Mini is not this kind of server.

All FreeNAS Mini models are designed to share space with people. They are compact and “whisper quiet” for use in offices and homes. They are also power-efficient, drawing a maximum of 56 watts (Mini E) to 106 watts (Mini XL+).

Next-Generation Technology Powers Up the FreeNAS Mini XL+ and Mini E

The Mini XL+ and E bring multiple technology upgrades to the FreeNAS Mini platform. These include:

  • Intel Atom C3000 Series CPUs
  • DDR4 ECC DRAM
  • Additional 2.5” Hot-swappable Bay (Mini XL+)
  • PCI Express 3.0 (Mini XL+)
  • IPMI iKVM (HTML5-based)
  • USB 3.0
  • Standard Dual 10 Gb Ethernet Ports (Mini XL+)
  • Quad 1 Gb Ethernet Ports (Mini E)

FreeNAS Mini is a Multifunction Solution

FreeNAS Mini products are well-equipped to compete against other small form factor NAS appliances, and perhaps even against tower servers, because of their ability to run network applications directly on the storage appliance.

Indeed, the combination of more powerful hardware, application plug-ins, and the ability to run hypervisor or containerized applications directly on the storage appliance makes the FreeNAS Mini a multi-function SOHO/ROBO solution.

FreeNAS plugins are based on pre-configured FreeBSD containers called jails that are simple to install. iXsystems refers to these plugins as “Network Application Services”. The plugins are available across all TrueNAS® and FreeNAS products, including the new FreeNAS Mini E and XL+.
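For administrators who prefer scripting to the web UI, plugins can also be managed through the FreeNAS REST API. The sketch below is illustrative only: the hostname and credentials are placeholders, and the endpoint path and payload fields are assumptions that should be verified against the interactive API documentation on your own system before use.

```python
# Hypothetical sketch: installing a FreeNAS plugin via the REST API.
# The endpoint and payload fields are assumptions for illustration;
# verify them against http://<your-freenas-host>/api/docs/ first.
import requests

FREENAS = "http://freenas.local"        # placeholder hostname
AUTH = ("root", "your-password")        # placeholder credentials

resp = requests.post(
    f"{FREENAS}/api/v2.0/plugin",       # assumed plugin-install endpoint
    auth=AUTH,
    json={"plugin_name": "plex", "jail_name": "plex0"},  # assumed fields
    timeout=30,
)
resp.raise_for_status()
print("Plugin install job submitted:", resp.json())
```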

The available plugins include quality commercial and open source applications covering a range of use cases, including:

  • Backup (Asigra)
  • Collaboration (NextCloud)
  • DevOps (GitLab)
  • Entertainment (Plex)
  • Hybrid cloud media management (iconik)
  • Security (ClamAV)
  • Surveillance video (ZoneMinder)

FreeNAS Mini Addresses Many Use Cases

The FreeNAS Mini XL+ and Mini E expand the range of use cases for the FreeNAS product line.

Remote, branch or home office. The FreeNAS Mini creates value for any business that needs professional-grade storage. It will be especially appealing to organizations that need to provide reliable storage across multiple locations. The Mini’s combination of a dedicated management port, IPMI, and TrueCommand management software enables comprehensive remote monitoring and management of multiple Minis.

FreeNAS Mini support for S3 object storage includes bidirectional file sync with popular cloud storage services and private S3 storage. This enables low-latency local file access with off-site data protection for home and branch offices. 
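As a sketch of what that S3 compatibility makes possible, the snippet below writes to and reads from a Mini’s S3-compatible service using boto3. The hostname, port, bucket, and keys are hypothetical; the S3 service must first be enabled and credentialed in the FreeNAS UI.

```python
# Minimal sketch: using a FreeNAS Mini's S3-compatible object service with
# boto3. The endpoint, bucket name, and credentials below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://freenas-mini.local:9000",  # assumed local endpoint
    aws_access_key_id="minio-access-key",
    aws_secret_access_key="minio-secret-key",
)

s3.create_bucket(Bucket="office-backups")
s3.upload_file("quarterly-report.xlsx", "office-backups", "quarterly-report.xlsx")

for obj in s3.list_objects_v2(Bucket="office-backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```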

Organizations can also deploy and manage FreeNAS systems at the edge and use TrueNAS systems where enterprise-class support and HA are required. Indeed, iXsystems has many clients that deploy both TrueNAS and FreeNAS. In doing so, they gain the benefit of a single storage operating environment across all their locations, all of which can be managed centrally via TrueCommand.

Managed Service Provider. TrueCommand and IPMI also enable managed service providers (MSPs) to cost-effectively manage a whole fleet of FreeNAS or TrueNAS systems across their entire client base. TrueCommand enables role-based access controls, allowing MSPs to assign systems to teams broken down by separate clients and admins.

Bulk data transfer. FreeNAS provides robust replication options, but sometimes the fastest way to move large amounts of data is to physically ship it from site to site. Customers can use the Mini XL+ to rapidly ingest, store, and transfer over 70 TB of data.
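A little arithmetic shows why shipping a loaded Mini XL+ can beat the wire. The sketch below estimates transfer time for 70 TB over common WAN links; the 80% sustained-utilization figure is an assumption, and real-world protocol overhead varies.

```python
# Back-of-the-envelope: days needed to move 70 TB over a WAN link,
# assuming ~80% sustained link utilization (an illustrative figure).
TB = 1e12  # bytes

def transfer_days(capacity_bytes: float, link_gbps: float, efficiency: float = 0.8) -> float:
    seconds = capacity_bytes * 8 / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

for gbps in (0.1, 1.0, 10.0):
    print(f"{gbps:>4} Gbps link: {transfer_days(70 * TB, gbps):6.1f} days")
# -> roughly 81 days at 100 Mbps, 8.1 days at 1 Gbps, 0.8 days at 10 Gbps
```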

Convenient Purchase of Preconfigured or Custom Configurations

iXsystems has increased the appeal of the FreeNAS Mini by offering multiple self-service purchasing options. It offers a straightforward online ordering tool that allows the purchaser to configure and purchase any of the FreeNAS Mini products directly from iXsystems. iXsystems also makes preconfigured systems available for rapid ordering and delivery via Amazon Prime. Either method enables purchase with a minimal amount of fuss and a maximum amount of confidence. 

Thoughtfully Committed to Expanding the Reach of Open Source Storage

Individuals and businesses that purchase the new FreeNAS Mini XL+ or Mini E are doing more than simply acquiring high-quality storage systems for themselves. They are also supporting the ongoing development of Open Source projects such as FreeBSD and OpenZFS. 

iXsystems has decades of expertise in system design and development of Open Source software including FreeNAS, FreeBSD, OpenZFS, and TrueOS®. Its recent advances in GUI-based management for simplified operations are making sophisticated Open Source technology more approachable for non-technical users.

iXsystems has thoughtfully engineered the FreeNAS Mini E and XL+ for FreeNAS, the world’s most widely deployed Open Source storage software. In doing so, they have created high-quality storage systems that offer much more than just NAS storage. Quietly. Affordably. 

For a thorough hands-on technical review of the FreeNAS Mini XL+, see this article on ServetheHome.

Additional product information, including detailed specifications and documentation, is available on the iXsystems FreeNAS Mini product page.




DCIG Will Provide Update on All-flash Array Advances at Flash Memory Summit 2019

Flash Memory Summit is the world’s largest storage industry event featuring the trends, innovations, and influencers driving the adoption of flash memory. DCIG will again present at the Summit this year. DCIG’s presentation will draw from its independent research into all-flash arrays and the Competitive Intelligence that DCIG performs on behalf of its clients.

The session will highlight recent developments in all-flash arrays and the rapidly changing competitive landscape for these products. Ken Clipperton, DCIG’s Lead Analyst for Storage, will speak on Tuesday, August 6th, from 9:45-10:50 AM. The session is called BMKT-101B-1: Annual Update on Flash Arrays.

Just as DCIG does in its reports, Mr. Clipperton will discuss both the “What” and the “So what?” of these advances in all-flash arrays. The presentation will cover the changes occurring in all-flash arrays, the value they create for organizations implementing them, and the key topic areas that DCIG focuses on in its competitive intelligence reports.

Mr. Clipperton will cover the following topics:

  • Advances in front-end connectivity to the storage network/application servers
  • Advances in back-end connectivity to storage media
  • Integration of storage-class memory
  • Integrations with other elements in the data center
  • Cloud connectivity
  • Delivery models
  • Predictive analytics
  • Proactive support
  • Licensing
  • Storage-as-a-Service (OpEx model)
  • Guarantee programs
  • Expectations about developments in the near-term future

If you will be at FMS, we hope that you will be able to attend this session and then stick around to introduce yourself and share your perspectives on where the AFA marketplace is heading.

Whether or not you are able to attend FMS or DCIG’s session at the summit, we invite you to sign up for our newsletter. To request more information about DCIG’s Competitive Intelligence services, click on this link.

Be sure to check back on the DCIG website after the event to get our take on the Summit and the products we believe deserve “Best in Show” honors.




Fast Network Connectivity Key to Unlocking All-flash Array Performance

The current generation of all-flash arrays offers enough performance to saturate the network connections between the arrays and application servers in the data center. In many scenarios, the key limiter to all-flash array performance is storage network bandwidth. Therefore, all-flash array vendors have been quick to adopt the latest advances in storage network connectivity.

Fast Networks are Here, and Faster Networks are Coming

[Chart: current and future Ethernet speeds]

Ethernet is now available with connection speeds up to 400 Gb per second. Fibre Channel now reaches speeds up to 128 Gb per second. As discussed during a recent SNIA presentation, the roadmaps for both technologies forecast another 2x to 4x increase in performance.

While the fastest connections are generally used to create a storage network fabric among data center switches, many all-flash arrays support fast storage network connectivity.

All-flash Arrays Embrace Fast Network Connectivity

DCIG’s research into all-flash arrays identified thirty-seven (37) models that support 32 Gb FC, seventeen (17) that support 100 Gb Ethernet, and ten (10) that support 100 Gb InfiniBand connectivity. These include products from Dell EMC, FUJITSU Storage, Hitachi Vantara, Huawei, Kaminario, NEC Storage, NetApp, Nimbus Data, Pure Storage and Storbyte.

[Chart: summary of AFA connectivity support. Source: DCIG]

Other Drivers of Fast Network Connectivity

Although all-flash storage is a key driver behind fast network connectivity, there are also several other significant drivers. Each of these has implications for the optimal balance between compute, storage, network bandwidth, and the cost of creating and managing the infrastructure.

These other drivers of fast networking include:

  • Faster servers that offer more capacity and performance density per rack unit
  • Increasing volumes of data require increasing bandwidth
  • Increasing east-west traffic between servers in the data center due to scale-out infrastructure and distributed cloud-native applications
  • The growth of GPU-enabled AI and data mining
  • Larger data centers, especially cloud and co-location facilities that may house tens of thousands of servers
  • Fatter pipes yield more efficient fabrics with fewer switches and cables

Predominant All-Flash Array Connectivity Use Cases

How an all-flash array connects to the network is frequently based on the type of organization deploying the array. While there are certainly exceptions to the rule, the predominant connection methods and use cases can be summarized as follows:

  • Ethernet = Cloud and Service Provider data centers
  • Fibre Channel = Enterprise data centers
  • InfiniBand = HPC environments

Recent advances in network connectivity, and the adoption of these advances by all-flash array providers, create new opportunities to increase the amount of work an all-flash array can accomplish. Therefore, organizations intending to acquire all-flash storage should consider each product’s embrace of fast network connectivity as an important part of the evaluation process.




TrueNAS Plugins Converge Services for Simple Hybrid Cloud Enablement

iXsystems is taking simplified service delivery to a new level by enabling a curated set of third-party services to run directly on its TrueNAS arrays. TrueNAS already provides multi-protocol unified storage, including file, block, and S3-compatible object storage. Now preconfigured plugins converge additional services onto TrueNAS for simple hybrid cloud enablement.

TrueNAS Technology Provides a Robust Foundation for Hybrid Cloud Functionality

iXsystems is known for enterprise-class storage software and rock-solid storage hardware. This foundation lets iXsystems customers run select third-party applications as plugins directly on the storage arrays, whether TrueNAS, FreeNAS Mini, or FreeNAS Certified. Several of these plugins dramatically simplify the deployment of hybrid public and private clouds.

How it Works

iXsystems works with select technology partners to preconfigure their solutions to run on TrueNAS using FreeBSD jails, iocage plugins, and bhyve virtual machines. By collaborating with these technology partners, iXsystems enables rapid IT service delivery and drives down the total cost of technology infrastructure. The flexibility to extend TrueNAS functionality via these plugins transforms the appliances into complete solutions that streamline common workflows.

Benefits of Curated Third-party Service Plugins

There are many advantages to this pre-integrated plugin approach:

  • Plugins are preconfigured for optimal operation on TrueNAS
  • Services can be added any time through the web interface
  • Adding a service is as simple as downloading the plugin, turning it on, and entering the associated login credentials
  • Plugins reduce network latency by moving processing onto the storage array
  • Third party applications can be run in a virtual machine without purchasing separate server hardware

Hybrid Cloud Data Protection

The integrated Asigra Cloud Backup software protects cloud, physical, and virtual environments. It is an enterprise-class backup solution that uniquely helps prevent malware from compromising backups. Asigra embeds cybersecurity software in its Cloud Backup software. It goes the extra mile to protect backup repositories, ensuring businesses can recover from malware attacks in their production environments.

Asigra is also one of the few enterprise backup solutions that offers agentless backup support across all types of environments: cloud, physical, and virtual. This flexibility makes adopting and deploying Asigra Cloud Backup easy, with zero disruption to clients and servers. The integration of Asigra with TrueNAS earned Storage Magazine’s Backup Product of the Year award for 2018.

Hybrid Cloud Media Management

TrueNAS arrays from iXsystems are heavily used in the media and entertainment industry, including by several major film and television studios. iXsystems storage accelerates workflows with any-device file sharing, multi-tier caching technology, and the latest interconnect technologies on the market. iXsystems recently announced a partnership with Cantemo to integrate its iconik software.

iconik is a hybrid cloud-based video and content management hub. Its main purpose is managing processes such as ingestion, annotation, cataloging, collaboration, storage, retrieval, and distribution of digital assets. The product’s main strength is its support for managing metadata and transcoding audio, video, and image files, but it can store essentially any file format. Users can choose to keep large original files on-premises yet still view and access the entire library in the cloud, using proxy versions where required.

The Cantemo solutions are used to manage media across the entire asset lifecycle, from ingest to archive. iconik is used across a variety of industries including Fortune 500 IT companies, advertising agencies, broadcasters, houses of worship, and media production houses. Cantemo’s clients include BBC Worldwide, Nike, Madison Square Garden, The Daily Telegraph, The Guardian and many other leading media companies.

Enabling iconik on TrueNAS streamlines multimedia workflows and increases productivity for iXsystems customers who choose to enable the Cantemo service.

Cloud Sync

Both Asigra and Cantemo include hybrid cloud data management capabilities within their feature sets. iXsystems also supports file synchronization with many business-oriented and personal public cloud storage services. These enable staff to be productive anywhere, whether working with files locally or in the cloud.

Supported public cloud providers include Amazon Cloud Drive, Amazon S3, Backblaze B2, Box, Dropbox, Google Cloud Storage, Google Drive, Hubic, Mega, Microsoft Azure Blob Storage, Microsoft OneDrive, pCloud and Yandex. The Cloud Sync tool also supports file sync via SFTP and WebDAV.

More Technology Partnerships Planned

According to iXsystems, it will extend TrueNAS pre-integration to more technology partners where such partnerships provide win-win benefits for all involved. This intelligent strategy allows iXsystems to focus on enhancing core TrueNAS storage services, and it enables TrueNAS customers to quickly and confidently implement best-of-breed applications directly on their TrueNAS arrays.

All TrueNAS Owners Benefit

TrueNAS plugins provide a simple and flexible way for all iXsystems customers to add sophisticated hybrid-cloud media management and data protection services to their IT environments. Existing TrueNAS customers can gain the benefits of this plugin capability by updating to the most recent version of the TrueNAS software.




Caching vs Tiering with Storage Class Memory and NVMe – A Tale of Two Systems

Dell EMC announced that it will soon add Optane-based storage to its PowerMAX arrays, and that PowerMAX will use Optane as a storage tier, not “just” cache. This statement implies that using Optane as a storage tier is superior to using it as a cache. But is it?

PowerMAX will use Storage Class Memory as Tier in All-NVMe System

Some people criticized Dell EMC for taking an all-NVMe approach and thereby eliminating hybrid (flash memory plus HDD) configurations. Yet the all-NVMe decision gave the engineers an opportunity to architect PowerMAX around the inherent parallelism of NVMe. Dell EMC’s design imperative for the PowerMAX is performance over efficiency. And it does perform:

  • 290 microsecond latency
  • 150 GB per second of throughput
  • 10 million IOPS

These results were achieved with standard flash memory NVMe SSDs. The numbers will get even better when Dell EMC adds Optane-based storage class memory (SCM) as a tier. Once SCM has been added to the array, Dell EMC’s fully automated storage tiering (FAST) technology will monitor array activity and automatically move the most active data to the SCM tier and less active data to the flash memory SSDs.
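To make the tiering concept concrete, here is a minimal sketch of schedule-driven, heat-based tiering. It is a generic illustration, not Dell EMC’s actual FAST algorithm; the capacity figure and the extent granularity are arbitrary assumptions.

```python
# Generic sketch of scheduled storage tiering: rank extents by recent access
# counts, keep the hottest on the small SCM tier, demote the rest to flash.
from collections import Counter

SCM_CAPACITY_EXTENTS = 100          # assumed size of the fast tier
access_counts: Counter = Counter()  # extent_id -> accesses this interval
placement: dict[int, str] = {}      # extent_id -> "scm" or "flash"

def record_io(extent_id: int) -> None:
    access_counts[extent_id] += 1

def rebalance() -> None:
    """Runs on a schedule; data moves only between scans."""
    hottest = {eid for eid, _ in access_counts.most_common(SCM_CAPACITY_EXTENTS)}
    for eid in set(placement) | set(access_counts):
        placement[eid] = "scm" if eid in hottest else "flash"
    access_counts.clear()           # start a fresh measurement interval
```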

The intelligence of the tiering algorithms will be key to delivering great results in production environments. Indeed, Dell EMC states that, “Built-in machine learning is the only cost-effective way to leverage SCM”.

HPE “Memory-Driven Flash” uses Storage Class Memory as Cache

HPE is one of many vendors taking the caching path to integrating SCM into their products. It recently began shipping Optane-based read caching via 750 GB NVMe SCM Module add-in cards. In testing, HPE 3PAR 20850 arrays equipped with this “HPE Memory-Driven Flash” delivered:

  • Sub-200 microseconds of latency for most IO
  • Nearly 100% of IO in under 300 microseconds
  • 75 GB per second of throughput
  • 4 million IOPS

These results were achieved with standard 12 Gb SAS SSDs providing the bulk of the storage capacity. HPE Memory-Driven Flash is currently shipping for HPE 3PAR Storage, with availability on HPE Nimble Storage expected later in 2019.

An advantage of the caching approach is that even a relatively small amount of SCM can enable a storage system to deliver SCM performance by dynamically caching hot data, even when most of the data resides on much slower and less expensive media. As with tiering, the intelligence of the algorithms is key to delivering great results in production environments.
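The contrast with the tiering sketch above is that a cache reacts on every access rather than on a schedule. Below is a minimal LRU read cache in the spirit of an SCM read cache; the capacity and the 4 KiB block size are arbitrary assumptions, and the backend read is a stand-in.

```python
# Minimal LRU read cache: a miss promotes the block immediately, so hot data
# is accelerated on the very next access instead of after the next scan.
from collections import OrderedDict

def read_from_backend(block_id: int) -> bytes:
    return b"\x00" * 4096                       # stand-in for a slow SSD/HDD read

class ReadCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks: OrderedDict[int, bytes] = OrderedDict()

    def read(self, block_id: int) -> bytes:
        if block_id in self.blocks:             # hit: served at SCM speed
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = read_from_backend(block_id)      # miss: slow path
        self.blocks[block_id] = data            # promote immediately
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict the coldest block
        return data
```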

The performance HPE is achieving with SCM is good news for other arrays based on caching-oriented storage operating systems. In particular, ZFS-based products, such as those offered by Tegile, iXsystems, and OpenDrives, should see substantial performance gains when they switch to using SCM for the L2ARC read cache.

What is Best – Tier or Cache?

I favor the caching approach. Caching is more dynamic than tiering, responding to workloads immediately rather than waiting for a tiering algorithm to move active data to the fastest tier on some scheduled basis. A tiering-based system may completely miss out on the opportunity to accelerate some workloads. I also favor caching because I believe it will bring the benefits of SCM within reach of more organizations.

Whether using SCM as a capacity tier or as a cache, the intelligence of the algorithms that automate the placement of data is critical. Many storage vendors talk about using artificial intelligence and machine learning (AI/ML) in their storage systems. SCM provides a new, large, persistent, low-latency class of storage for AI/ML to work with in order to deliver more performance in less space and at a lower cost per unit of performance.

The right way to integrate NVMe and SCM into enterprise storage is simply to do so, whether as a tier, as a cache, or as both, and then use automated, intelligent algorithms to make the most of the storage class memory that is available.

Prospective enterprise storage array purchasers should take a close look at how the systems use (or plan to use) storage class memory and how they use AI/ML to inform caching and/or storage tiering decisions to deliver cost-effective performance.


This is the first in a series of articles about Persistent Memory and its use in enterprise storage. The second article in the series is NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise.

Revised on 4/5/2019 to add the link to the next article in the series.




Number of Appliances Dedicated to Deduplicating Backup Data Shrinks even as Data Universe Expands

One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.

Data Universe Expands

In November 2018, IDC released a report estimating that the amount of data created, captured, and replicated will increase more than five-fold, from the current 33 zettabytes (ZBs) to about 175 ZB in 2025. Whether or not one agrees with that estimate, there is little doubt that there are more ways than ever in which data gets created. These include:

  • Endpoint devices such as PCs, tablets, and smart phones
  • Edge devices such as sensors that collect data
  • Video and audio recording devices
  • Traditional data centers
  • The creation of data through the backup, replication and copying of this created data
  • The creation of metadata that describes, categorizes, and analyzes this data

All these sources and means of creating data mean there is more data than ever under management. But as this occurs, the number of products originally developed to control this data growth – hardware appliances that specialize in deduplicating backup data after it is backed up, such as those from Dell EMC, ExaGrid, and HPE – has shrunk in recent years.

Here are the top five reasons for this trend.

1. Deduplication has Moved onto Storage Arrays.

Many storage arrays, both primary and secondary, give companies the option to deduplicate data. While these arrays may not achieve the same deduplication ratios as appliances purpose-built for deduplicating backup data, their combination of lower cost and high storage capacity offsets the inability of their deduplication software to fully optimize backup data.

2. Backup software offers deduplication capabilities.

Rather than waiting to deduplicate backup data on a hardware appliance, almost all enterprise backup software products can deduplicate data on either the client or the backup server before storing it. This eliminates the need for a storage device dedicated to deduplicating data.
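At its core, this kind of source-side deduplication is content hashing. The sketch below uses fixed-size chunks for simplicity; commercial products typically use variable, content-defined chunking, so treat the chunk size and structure as assumptions.

```python
# Minimal block-level deduplication: split a stream into chunks, store each
# unique chunk once, and record duplicates as references by hash.
import hashlib

CHUNK_SIZE = 64 * 1024            # fixed-size chunking for simplicity
chunk_store: dict[str, bytes] = {}

def deduplicate(stream: bytes) -> list[str]:
    """Returns the ordered chunk hashes needed to reassemble the stream."""
    recipe = []
    for offset in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # store new chunks only
        recipe.append(digest)
    return recipe

data = b"A" * CHUNK_SIZE * 3 + b"B" * CHUNK_SIZE   # three identical chunks
recipe = deduplicate(data)
print(f"{len(recipe)} chunks referenced, {len(chunk_store)} chunks stored")  # 4, 2
```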

3. Virtual appliances that perform deduplication are on the rise.

Some providers, such as Quest Software, have exited the physical deduplication backup target appliance market and re-emerged with virtual appliances that deduplicate data. These give companies new flexibility to use hardware from any provider they want and implement their software-defined data center strategy more aggressively.

4. Newly created data may not deduplicate well or at all.

A lot of the new data that companies create may not deduplicate well or at all. Audio and video files may not change and will only deduplicate if full backups are done – which may be rare. Encrypted data will not deduplicate at all. In these circumstances, deduplication appliances are rarely if ever needed.

5. Multiple backup copies of the data may not be needed.

Much of the data collected from edge and endpoint devices may only need a couple of copies, if that. Audio and video files may also fall into this category of not needing more than a couple of retained copies. To get the full benefit of a target-based deduplication appliance, one needs to back up the same data multiple times – usually at least six times, if not more. This reduced need to back up and retain multiple copies of data diminishes the need for these appliances.

Remaining Deduplication Appliances More Finely Tuned for Enterprise Requirements

The reduction in the number of vendors shipping physical target-based deduplication backup appliances seems almost counter-intuitive in light of the ongoing explosion in data growth that we are witnessing. But when one considers the mix of data being created and its corresponding data protection and retention requirements, the decrease in the number of target-based deduplication appliances available is understandable.

The upside is that the vendors who do remain and the physical target-based deduplication appliances that they ship are more finely tuned for the needs of today’s enterprises. They are larger, better suited for recovery, have more cloud capabilities, and account for some of these other broader trends mentioned above. These factors and others will be covered in the forthcoming DCIG Buyer’s Guide on Enterprise Deduplication Backup Appliances.




The Early Implications of NVMe/TCP on Ethernet Network Designs

The ratification in November 2018 of the NVMe/TCP standard officially opened the doors for NVMe/TCP to begin to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a webinar, hosted by SNIA, that provided an update on NVMe/TCP’s latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.

First, NVMe/TCP will accelerate the deployment of NVMe in enterprises.

NVMe is already available in networked storage environments using competing protocols such as RDMA, which ships as RoCE (RDMA over Converged Ethernet). The challenge is that no one (well, very few anyway) uses RDMA in any meaningful way in their environment, so running NVMe over RoCE never gained, and will likely never gain, much momentum.

The availability of NVMe over TCP changes that. Companies already understand TCP, deploy it everywhere, and know how to scale and run it over their existing Ethernet networks. NVMe/TCP will build on this legacy infrastructure and knowledge.

Second, any latency that NVMe/TCP introduces still pales in comparison to existing storage networking protocols.

Running NVMe over TCP does introduce latency versus using RoCE. However, the latency that TCP introduces is nominal and will likely be measured in microseconds in most circumstances. Most applications will not even detect this level of latency, thanks to the substantial jump in performance that running NVMe natively over TCP will provide versus existing storage protocols such as iSCSI and FC.
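For a feel of the magnitude involved, the sketch below measures TCP round-trip time with a loopback echo server. Loopback numbers are far better than any real fabric, and this measures raw TCP only, not NVMe/TCP, but it illustrates that per-round-trip TCP overhead sits in the microsecond range, not milliseconds.

```python
# Rough TCP round-trip microbenchmark over loopback with Nagle disabled.
# Illustrative only: raw TCP, not NVMe/TCP, and loopback, not a real fabric.
import socket, threading, time

def echo_server(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        while data := conn.recv(65536):
            conn.sendall(data)

listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

payload = b"x" * 4096                       # one 4 KiB "I/O"
ROUNDS = 1000
start = time.perf_counter()
for _ in range(ROUNDS):
    client.sendall(payload)
    remaining = len(payload)                # read back the full echo
    while remaining:
        remaining -= len(client.recv(remaining))
elapsed = time.perf_counter() - start
print(f"mean TCP round trip: {elapsed / ROUNDS * 1e6:.1f} microseconds")
```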

Third, the introduction of NVMe/TCP will require that companies implement Ethernet network designs that minimize latency.

Ethernet networks may implement buffering in Ethernet switches to handle periods of peak workloads. Companies will need to modify that network design technique when deploying NVMe/TCP as buffering introduces latency into the network and NVMe is highly latency sensitive. Companies will need to more carefully balance how much buffering they introduce on Ethernet switches.

Fourth, get familiar with the term “incast collapse” on Ethernet networks and how to mitigate it.

NVMe can support up to 64,000 queues. Every queue that NVMe opens initiates a TCP session. Here is where challenges may eventually surface. Simultaneously opening multiple queues results in multiple TCP sessions initiating at the same time. This could, in turn, cause all these sessions to arrive at a common congestion point in the Ethernet network at the same time. The network remedies this by having all TCP sessions back off at the same time – an incast collapse – which creates latency in the network.

[Diagram illustrating TCP incast collapse. Source: University of California-Berkeley]
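A quick calculation shows how easily synchronized sessions can overrun a switch port. All figures below (host count, burst size, buffer size) are illustrative assumptions, not measurements.

```python
# Why synchronized NVMe/TCP sessions can overrun a switch buffer (incast):
# compare the combined burst from N simultaneous senders with the shared
# packet buffer available at the congestion point.
def incast_overflow_mb(senders: int, burst_kb: int, buffer_mb: float) -> float:
    """MB of data arriving with nowhere to queue (dropped or paused)."""
    arriving_mb = senders * burst_kb / 1024
    return max(0.0, arriving_mb - buffer_mb)

# 64 hosts each bursting a 256 KB response into one uplink behind a 12 MB buffer:
print(incast_overflow_mb(senders=64, burst_kb=256, buffer_mb=12.0), "MB overflow")
# -> 4.0 MB overflow; the resulting drops push many TCP sessions into
#    backing off simultaneously, the incast collapse described above.
```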

Historically, incast collapse has been a specialized and rare occurrence in networking due to the low probability that such an event would take place. But introducing NVMe/TCP into the network makes such an event much more likely to occur, especially as more companies deploy NVMe/TCP in their environments.

The Ratification of the NVMe/TCP Standard

Ratification of the NVMe/TCP standard potentially makes every enterprise data center a candidate for storage systems that can deliver dramatically better performance to their workloads. Until the performance demands of every workload in a data center are met instantaneously, some workload requests will queue up behind a bottleneck in the data center infrastructure.

Just as introducing flash memory into enterprise storage systems revealed bottlenecks in storage operating system software and storage protocols, NVMe/TCP-based storage systems will reveal bottlenecks in data center networks. Enterprises seeking to accelerate their applications by implementing NVMe/TCP-based storage systems may discover bottlenecks in their networks that need to be addressed in order to see the full benefits of NVMe/TCP-based storage.

To view this presentation in its entirety, follow this link.




All-inclusive Software Licensing: Best Feature Ever … with Caveats

On the surface, all-inclusive software licensing sounds great. You get all the software features that the product offers at no additional charge. You can use them – or not use them – at your discretion. It simplifies product purchases and ongoing licensing.

But what if you opt not to use all the product’s features or only need a small subset of them? In those circumstances, you need to take a hard look at any product that offers all-inclusive software licensing to determine if it will deliver the value that you expect.

Why We Like All-Inclusive Software Licensing

All-inclusive software licensing has taken off in recent years with more enterprise data storage and data protection products than ever delivering their software licensing in this manner. Further, this trend shows no signs of abating for the following reasons:

  • It makes life easier for procurement teams, since they do not have to manage and negotiate software licensing separately.
  • It makes life easier for IT staff, who otherwise might want to use a feature only to find out they cannot because they do not have a license for it.
  • It helps the vendors because their customers use their features. The more customers use and like the features, the more apt they are to keep using the product long term.
  • It provides insurance for the companies involved: if they unexpectedly need a feature, they do not have to go back to the proverbial well and ask for more money to license it.
  • It helps IT be more responsive to changes in business requirements. Business needs can change unexpectedly. It happens: IT is assured that a certain feature will never be of interest to the end users and then, suddenly, this “never gonna need it” becomes a “gotta have it” requirement.

All-inclusive software licensing solves these dilemmas and others.

The Best Feature Ever … Has Some Caveats

The reasons as to why companies may consider all-inclusive software licensing the best feature ever are largely self-evident. But there are some caveats as to why companies should minimally examine all-inclusive software licensing before they select any product that supports it.

  1. Verify you will use the features offered by the platform. It is great that a storage platform offers deduplication, compression, thin provisioning, snapshots, replication, metro clusters, etc., etc. at no extra charge. But if you do not use these features now and have no plans to use them, guess what? You are still going to indirectly pay for them if you buy the product.
  2. Verify the provider measures and knows which of its features are used. When you buy all-inclusive software licensing, you generally expect the vendor to support it and continue to develop it. But how does the vendor know which of its features are being used, when they are being used, and for what purposes? It makes no sense for the provider to staff its support lines with replication experts or continue developing its replication features if no one uses them. Be sure you select a product that regularly monitors and reports back to the provider which of its features are used and how they are used, and a provider that actively supports and develops those features.
  3. Match your requirements to the features available on the product. It still pays to do your homework. Know your requirements and then evaluate products with all-inclusive software licensing based upon them.
  4. Verify the software works well in your environment. I have run across a few providers who led the way in providing all-inclusive software licensing. Yet customers who selected those products based on this offering sometimes found the features were not as robust as they anticipated, or were so difficult to use that they had to abandon them. In short, having a license to use software that does not work in your environment does not help anyone.
  5. Try to quantify if other companies use the specific software features. Ideally, you want to know that others like you use the feature in production. This can help you avoid becoming an unsuspecting beta-tester for that feature.

Be Grateful but Wary

I, for one, am grateful that more providers have come around to making all-inclusive software licensing available for their products. But the software features that vendors include with their all-inclusive software licensing vary from product to product. They also differ in their maturity, robustness, and fullness of support.

It behooves everyone to hop on the all-inclusive software licensing bandwagon. But as you do, verify to which train you hitched your wagon and that it will take you to where you want to go.




TrueNAS M-Series Turns Tech Buzz into Music

NVMe and other advances in non-volatile memory technology are generating a lot of buzz in the enterprise technology industry, and rightly so. As providers integrate these technologies into storage systems, they are closing the gap between the dramatic advances in processing power and the performance of the storage systems that support them. The TrueNAS M-Series from iXsystems provides an excellent example of what can be achieved when these technologies are thoughtfully integrated into a storage system.

DCIG Quick Look

In the process of refreshing its research on enterprise midrange arrays, DCIG discovered that the iXsystems TrueNAS M-Series all-flash and hybrid storage arrays leverage many of the latest technologies, including:

  • Intel® Xeon® Scalable Family Processors
  • Large DRAM caches
  • NVDIMMs
  • NVMe SSDs
  • Flash memory
  • High-capacity hard disk drives

The TrueNAS M-Series lineup comprises two models: the M40 and the M50. The M40 has a lower entry cost, scales to 2 PB, and includes 40 GbE connectivity with SAS SSD caching. The M50 scales to 10 PB and adds 100 GbE connectivity with NVMe-based caching.

Both models come standard with redundant storage controllers for high availability and 24×7 service, though single-controller configurations are available for less critical applications.

Advanced Technologies in Perfect Harmony

DCIG analysts are impressed with the way iXsystems engineers have orchestrated the latest technologies in the M50 storage array, achieving maximum end-to-end cost-efficient performance.

The M50 marries 40 Intel® Xeon® Scalable Family Processor cores with up to 3 TB of DRAM, a 32 GB NVDIMM write cache and 15.2 TB of NVMe SSD read-cache in front of up to 10 PB of hard disk storage. (The M-Series can also be configured as an all-flash array.) Moreover, iXsystems attaches each storage expansion shelf directly to each controller via 12 Gb SAS ports. This approach adds back end throughput to the storage system as each shelf is added.

[Image: iXsystems TrueNAS M50 array, rear view]

This well-balanced approach carries through to front-end connectivity. The M50 supports the latest advances in high-speed networking, including up to 4 ports of 40/100 Gb Ethernet and 16/32 Gb Fibre Channel connectivity per controller.

TrueNAS is Enterprise Open Source

TrueNAS is built on BSD and ZFS Open Source technology. iXsystems is uniquely positioned to support the full Open Source stack behind TrueNAS. It has developers and expertise in the operating system, file systems and NAS software.

iXsystems also stewards the popular (>10 million downloads) FreeNAS software-defined storage platform. Among other things, FreeNAS functions as the experimental feature and QA testbed for TrueNAS. TrueNAS can even replicate data to and from FreeNAS. Thus, TrueNAS owners benefit from the huge ZFS and FreeNAS Open Source ecosystems.

NVM Advances are in Tune with the TrueNAS Architecture

The recent advances in non-volatile memory are a perfect fit with the TrueNAS architecture.

Geeking out just a bit…

[Diagram: TrueNAS M50 cache architecture]

ZFS uses DRAM as a read cache to accelerate read operations. This primary read cache is called the ARC. ZFS also supports a secondary read cache called the L2ARC. The M50 can use much of the 1.5 TB of DRAM in each storage controller for the ARC, and combine it with up to 15.2 TB of NVMe-based L2ARC to provide a huge low-latency read cache that offers up to 8 GB/s of throughput.

The ZFS Intent Log (ZIL) is where all data to be written is initially stored. These writes are later flushed to disk. The M50 uses NVDIMMs for the ZIL write cache. The NVDIMMs safely provide near-DRAM-speed write caching. This enables the array to quickly acknowledge writes on the front end while efficiently coalescing many random writes into sequential disk operations on the back end.
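A conceptual sketch of that write path follows. It models the idea only, not ZFS itself: writes are acknowledged once they land in a fast persistent log (the NVDIMM in the M50’s case), and a later flush turns many scattered writes into sequential disk operations. The threshold and block layout are arbitrary assumptions.

```python
# Conceptual model of a ZIL-style write path: fast persistent log for acks,
# periodic flush that sorts random writes into sequential disk I/O.
def write_to_disk(block_id: int, data: bytes) -> None:
    pass                                        # stand-in for the slow path

class WriteLog:
    def __init__(self, flush_threshold: int = 64):
        self.log: list[tuple[int, bytes]] = []  # stands in for the NVDIMM ZIL
        self.flush_threshold = flush_threshold

    def write(self, block_id: int, data: bytes) -> str:
        self.log.append((block_id, data))       # persisted at near-DRAM speed
        if len(self.log) >= self.flush_threshold:
            self.flush()
        return "ack"                            # caller sees a fast acknowledgment

    def flush(self) -> None:
        for block_id, data in sorted(self.log): # coalesce into sequential order
            write_to_disk(block_id, data)
        self.log.clear()
```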

Broad Protocol Support Enables Many Uses

TrueNAS supports AFP, SMB, NFS, iSCSI and FC protocols plus S3-compliant object storage. It also offers Asigra backup as an integrated service that runs natively on the array. This broad protocol support enables the M50 to cost-effectively provide high performance storage for:

  • File sharing
  • Virtual machine storage
  • Cloud-native apps
  • Backup target


All-inclusive Licensing Adds Value

TrueNAS software licensing is all-inclusive, with unlimited snapshots, clones, and replication. Thus, there are no add-on license fees to negotiate and no additional POs to wait for. This reduces costs, promotes full utilization of the extensive capabilities of the TrueNAS M-Series, and increases business agility.

TrueNAS M50 Turns Tech Buzz into Music

The TrueNAS M50 integrates multiple buzz-worthy technologies to deliver large amounts of low-latency storage. The M50 accelerates a broad range of workloads–safely and economically. Speaking of economics, according to the iXsystems web site, TrueNAS storage can be expanded for less than $100/TB. That should be music to the ears of business people everywhere.




NVMe: Four Key Trends Set to Drive Its Adoption in 2019 and Beyond

Storage vendors hype NVMe for good reason. It enables all-flash arrays (AFAs) to fully deliver on flash’s performance characteristics. Already NVMe serves as an interconnect between AFA controllers and their back end solid state drives (SSDs) to help these AFAs unlock more of the performance that flash offers. However, the real performance benefits that NVMe can deliver will be unlocked as a result of four key trends set to converge in the 2019/2020 time period. Combined, these will open the doors for many more companies to experience the full breadth of performance benefits that NVMe provides for a much wider swath of applications running in their environment.

Many individuals have heard about the performance benefits of NVMe. Using it, companies can reduce latency, with response times measured in a few hundred microseconds or less. Further, applications can leverage the many more channels that NVMe offers to drive throughput to hundreds of GBs per second and achieve millions of IOPS. These types of performance characteristics have many companies eagerly anticipating NVMe’s widespread availability.

To date, however, few companies have experienced the full breadth of performance characteristics that NVMe offers. This stems from:

  • The scarcity of AFAs on the market that fully support NVMe (only about 20% do);
  • The relatively small performance improvements that NVMe offers over existing SAS-attached solid-state drives (SSDs); and,
  • The high level of difficulty and cost associated with deploying NVMe in existing data centers.

This is poised to change in the next 12-24 months with four key trends poised to converge that will open up NVMe to a much wider audience.

  1. Large storage vendors getting ready to enter the NVMe market. AFA providers such as Tegile (Western Digital), iXsystems, Huawei, Lenovo, and others ship products that support NVMe. These vendors represent the leading edge of where NVMe innovation has occurred. However, their share of the storage market remains relatively small compared to providers such as Dell EMC, HPE, IBM, and NetApp. As these large storage providers enter the market with AFAs that support NVMe, expect market acceptance and adoption of NVMe to take off.
  2. The availability of native NVMe drivers on all major operating systems. The only two major enterprise operating systems that currently have native NVMe drivers are Linux and VMware. However, until Microsoft and, to a lesser degree, Solaris offer native NVMe drivers, many companies will have to hold off on deploying NVMe in their environments. The good news is that all these major OS providers are actively working on NVMe drivers. Further, expect the availability of these drivers to closely coincide with the availability of NVMe AFAs from the major storage providers and the release of the NVMe-oF TCP standard.
  3. NVMe-oF TCP protocol standard set to be finalized later in 2018. Connecting the AFA controller to its back-end SSDs via NVMe is only one half – and the much easier part – of solving the performance problem. The much larger and more difficult problem is easily connecting hosts to AFAs over existing storage networks, as it is currently difficult to set up and scale NVMe-oF. The establishment of the NVMe-oF TCP standard will remedy this and facilitate the introduction and use of NVMe-oF between hosts and AFAs using TCP/IP over existing Ethernet storage networks.
  4. The general availability of NVMe-oF TCP offload cards. To realize the full performance benefits of NVMe-oF using TCP, companies are advised to use NVMe-oF TCP offload cards. Using standard Ethernet cards with no offload engine, companies will still see high throughput but very high CPU utilization (up to 50 percent). Using the forthcoming NVMe-oF TCP offload cards, performance increases by anywhere from 33 to 150 percent versus native TCP cards, while introducing only nominal amounts of latency (single- to double-digit microseconds).

The business need for NVMe technology is real. While today’s all-flash arrays have tremendously accelerated application performance, NVMe stands poised to unleash another round of up to 10x or more performance improvements. But to do that, a mix of technologies, standards, and programming changes to existing operating systems must converge for mass adoption in enterprises to occur. This combination of events seems poised to happen in the next 12-24 months.




HPE Expands Its Big Tent for Enterprise Data Protection

When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve all their challenges. I hate to be the bearer of bad news, but that single product solution does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address this challenge with HPE being at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.

Having largely solved their backup problems at scale, enterprises have new freedom to analyze and address their broader enterprise data protection challenges. As they look to bring long term data retention, data archiving, and multiple types of recovery (single applications, site fail overs, disaster recoveries, and others) under one big tent for data protection, they find they often need to deploy multiple products.

This creates a situation where each product addresses specific pain points that enterprises have. However, multiple products equate to multiple management interfaces that each have their own administrative policies with minimal or no integration between them. This creates a thornier problem – enterprises are left to manage and coordinate the hand-off of the protection and recovery of data between these different individual data protection products.

A few years ago HPE started to build a “big tent” to tackle these enterprise data protection and recovery issues. It laid the foundation with its HPE 3PAR StoreServ storage arrays, StoreOnce deduplication storage systems, and Recovery Manager Central (RMC) software to help companies coordinate and centrally manage:

  • Snapshots on 3PAR StoreServ arrays
  • Replication between 3PAR StoreServ arrays
  • The efficient movement of data between 3PAR and StoreOnce systems for backup, long term retention, and fast recoveries

This week HPE expanded its big tent of data protection to give companies more flexibility to protect and recover their data more broadly across their enterprise. It did so in the following ways:

  • HPE RMC 6.0 can directly recover data to HPE Nimble storage arrays. Recoveries from backups can be a multi-step process that may require data to pass through the backup software and the application server before it lands on the target storage array. Beginning December 2018, companies can use RMC to directly recover data to HPE Nimble storage arrays from an HPE StoreOnce system without going through the traditional recovery process, just as they can already do to HPE 3PAR StoreServ storage arrays.
  • HPE StoreOnce can directly send and retrieve deduplicated data from multiple cloud providers. Companies sometimes fail to consider that general purpose cloud service providers such as Amazon Web Services (AWS) or Microsoft Azure make no provisions to optimize data stored with them, such as deduplicating it. Using HPE StoreOnce’s new direct support for AWS, Azure, and Scality, companies can use StoreOnce to first compress and deduplicate data before they store the data in the cloud.
  • Integration between Commvault and HPE StoreOnce systems. Out of the gate, companies can use Commvault to manage StoreOnce operations such as replicating data between StoreOnce systems as well as moving data directly from StoreOnce systems to the cloud. Moreover, as the relationship between Commvault and HPE matures, companies will also be able to use HPE StoreOnce Catalyst, HPE’s client-based deduplication software agent, in conjunction with Commvault to back up data on server clients whose data may not reside on HPE 3PAR or Nimble storage. Using the HPE StoreOnce Catalyst software, Commvault will deduplicate data at the source before sending it to an HPE StoreOnce system.

[Image source: HPE]

Of the three announcements that HPE made this week, the new relationship with Commvault, which accompanies HPE’s pre-existing relationships with Micro Focus (formerly HPE Data Protector) and Veritas, best demonstrates HPE’s commitment to helping enterprises build a big tent for their data protection and recovery initiatives. Storing data on HPE 3PAR and Nimble arrays and using RMC to manage backups and recoveries on StoreOnce systems certainly accelerates and simplifies these functions when companies can do so. But by working with these other partners, HPE acknowledges that companies will not store all their data on its systems, and shows that it will accommodate those companies so they can create a single, larger data protection and recovery solution for their enterprise.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe, but the first day of autumn is just two days away, and with fall weather always come cooler temperatures (which I happen to enjoy!). This means people are staying inside a little more and doing those fun, end-of-year activities that everyone enjoys – such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies for companies to consider making room for in the New Year.

BackupAssist 365.

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s world obsessed with artificial intelligence, blockchain, and digital transformation, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as Dropbox, Office 365, Google Drive, OneDrive, Gmail, Outlook, and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when these backups occur, so backups can run at any time.
  • Second, it does cloud-to-local backup. In other words, rather than running up the additional data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage at your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for nominal amounts of money. Since BackupAssist 365 only does an initial full backup and then differential backups thereafter, the ongoing data transfer costs are nominal and the amount of storage capacity needed onsite is equally small (a rough sizing sketch follows below).
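To put rough numbers on that full-plus-differentials pattern, the sketch below estimates monthly transfer volumes. The data set size and the working set that changes from day to day are illustrative assumptions only.

```python
# Rough sizing of an "initial full, then differentials" backup pattern.
# Assumes a 500 GB cloud data set with a ~5 GB working set whose changes
# largely overlap day to day, so each differential stays near that size.
DATASET_GB = 500
WORKING_SET_GB = 5
DAYS = 30

first_month = DATASET_GB + WORKING_SET_GB * DAYS   # one full + 30 differentials
steady_month = WORKING_SET_GB * DAYS               # differentials only
print(f"first month: ~{first_month} GB, later months: ~{steady_month} GB")
# -> first month: ~650 GB, later months: ~150 GB of transfer
```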

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, this one email account covers the backup of that user’s data in any cloud service the user uses. Further, the cost is only $1/month per user, with decreasing costs for greater numbers of users. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it and expense their credit cards to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on another two hot technology trends that I purposefully (or not so purposefully) left out above: hyperconverged infrastructure or HCI and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix and others chase the large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many of these size offices are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, it plays much bigger than it may first appear. Through partnerships with large providers such as Cisco and Lenovo, among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depot stores in the United States or Mexico and ask to look at the computer system that hosts the store’s compute and storage. If the store lets you, and you can find it, you will find a StorMagic system running somewhere in the building.

The other big challenge that each StorMagic system addresses is security. Because its systems can be deployed almost anywhere, in any environment, they are susceptible to theft. In fact, in talking with one of its representatives, he shared a story in which someone drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered. To counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own software, which is FIPS 140-2 compliant.
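StorMagic’s encryption is its own FIPS 140-2 compliant software, so the following is purely a conceptual sketch, not its implementation. It shows the kind of authenticated data-at-rest encryption (AES-256-GCM via Python’s cryptography package) that renders stolen hardware unreadable; the key handling and sample data here are hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical: in practice the key lives in a key server or HSM,
# never on the same system as the data it protects.
key = AESGCM.generate_key(bit_length=256)

def encrypt_block(key, plaintext):
    """Encrypt one block; prepend the random 96-bit nonce for decryption."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_block(key, blob):
    """Split off the nonce and decrypt; raises if the data was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

blob = encrypt_block(key, b"point-of-sale data at the edge")
assert decrypt_block(key, blob) == b"point-of-sale data at the edge"
```

With the key stored off-site, a thief who hauls away the hardware – forklift or otherwise – gets nothing but ciphertext.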

Best of all, companies do not have to break the bank to acquire one of these systems. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements, but they may also want to dip their toes into the latest and greatest technologies. These two solutions give companies the opportunity to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges, while StorMagic gives them a way to affordably and safely explore the HCI and edge computing waters.




Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays

Both HPE and NetApp have multiple enterprise storage product lines. Each company also has a flagship product. For HPE it is the 3PAR StoreServ line. For NetApp it is the AFF (all flash FAS) A-Series. DCIG’s latest Pocket Analyst Report examines these flagship all-flash arrays. The report identifies many similarities between the products, including the ability to deliver low latency storage with high levels of availability, and a relatively full set of data management features.

DCIG’s Pocket Analyst Report also identifies six significant differences between the products: how each product delivers deduplication and other data services, hybrid cloud integration, host-to-storage connectivity, scalability, simplified management through predictive analytics, and bundled or all-inclusive software licensing.

DCIG recently updated its research on the dynamic and growing all-flash array marketplace. In so doing, DCIG identified many similarities between the HPE 3PAR StoreServ and NetApp AFF A-Series products including:

  • Unified SAN and NAS protocol support
  • Extensive support for VMware API’s including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

DCIG also identified significant differences between the HPE and NetApp products including:

  • Hardware-accelerated inline data services
  • Predictive analytics
  • Hybrid cloud support
  • Host-to-storage connectivity
  • Scalability
  • Licensing simplicity


DCIG’s 4-page Pocket Analyst Report on the Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays analyzes and compares the flagship all-flash arrays from HPE and NetApp. To see which product has the edge in each of the above categories and why, you can purchase the report on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available for no charge at some future time.




Data Center Challenges and Technology Advances Revealed at Flash Memory Summit 2018

If you want to get waist-deep in the technologies that will impact the data centers of tomorrow, the Flash Memory Summit 2018 (FMS) held this week in Santa Clara is the place to do it. This is where the flash industry gets its geek on and everyone on the exhibit floor speaks bits and bytes. However, there is no better place to learn about advances in flash memory that are sure to show up in products in the very near future and drive further advances in data center infrastructure.

Key themes at the conference include:

  • Processing and storing ever growing amounts of data is becoming more and more challenging. Faster connections and higher capacity drives are coming but are not the whole answer. We need to completely rethink data center architecture to meet these challenges.
  • Artificial intelligence and machine learning are expanding beyond their traditional high-performance computing research environments and into the enterprise.
  • Processing must be moved closer to—and perhaps even into—storage.

Multiple approaches to addressing these challenges were championed at the conference that range from composable infrastructure to computational storage. Some of these solutions will complement one another. Others will compete with one another for mindshare.

NVMe and NVMe-oF Get Real

For the near term, NVMe and NVMe over Fabrics (NVMe-oF) are clear mindshare winners. NVMe is rapidly becoming the primary protocol for connecting controllers to storage devices. A clear majority of product announcements involved NVMe.

WD Brings HDDs into the NVMe World

Western Digital announced the OpenFlex™ architecture and storage products. OpenFlex speaks NVMe-oF across an Ethernet network. In concert with OpenFlex, WD announced Kingfish™, an open API for orchestrating and managing OpenFlex™ infrastructures.

Western Digital is the “anchor tenant” for OpenFlex with a total of seventeen (17) data center hardware and software vendors listed as launch partners in the press release. Notably, WD’s NVMe-oF attached 1U storage device provides up to 168TB of HDD capacity. That’s right – the OpenFlex D3000 is filled with hard disk drives.

While NVMe’s primary use case is to connect to flash memory and emerging ultra-fast memory, companies still want their HDDs. With Western Digital, organizations can have their NVMe and still get the lower-cost HDDs they want.

Gen-Z Composable Infrastructure Standard Gains Momentum

Gen-Z is an emerging memory-centric architecture designed for nanosecond latencies. Since last year’s FMS, Gen-Z has made significant progress toward this objective. Consider:

  • The consortium publicly released the Gen-Z Core Specification 1.0 on February 13, 2018. Agreement upon a set of 1.0 standards is a critical milestone in the adoption of any new standard, and the fact that the consortium’s 54 members agreed to it suggests broad industry support.
  • Intel’s adoption of the SFF-TA-1002 “Gen-Z” universal connector for its “Ruler” SSDs reflects increased adoption of the Gen-Z standards. What makes this announcement notable is that Intel is NOT currently a member of the Gen-Z consortium, which indicates that Gen-Z standards are gaining momentum even outside the consortium.
  • The Gen-Z booth included a working Gen-Z connection between a server and a removable DRAM module in another enclosure. This is the first example of a processor being able to use DRAM that is not local to the processor but is instead coming out of a composable pool. This is a concept similar to how companies access shared storage in today’s NAS and SAN environments.

Other Notable FMS Announcements

Many other innovative solutions to data center challenges were announced at FMS 2018, including:

  • Solarflare NVMe over TCP enables rapid low-latency data movement over standard Ethernet networks.
  • ScaleFlux computational storage avoids the need to move the data by integrating FPGA-based computation into storage devices.
  • Intel announced its 660p Series of SSDs, which employ quad-level cell (QLC) technology. QLC stores more data in less space and at a lower cost.

Recommendations

Based on the impressive progress we observed at Flash Memory Summit 2018, we can reaffirm the recommendations we made coming out of last year’s summit:

  • Enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.
  • Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.



Seven Key Differentiators between Dell EMC VMAX and HPE 3PAR StoreServ Systems

Dell EMC and Hewlett Packard Enterprise are enterprise technology stalwarts. Thousands of enterprises rely on VMAX or 3PAR StoreServ arrays to support their most critical applications and to store their most valuable data. Although VMAX and 3PAR predated the rise of the all-flash array (AFA), both vendors have adapted and optimized these products for flash memory. They deliver all-flash array performance without sacrificing the data services enterprises rely on to integrate with a wide variety of enterprise applications and operating environments.

While VMAX and 3PAR StoreServ can both support hybrid flash plus hard disk configurations, the focus is now on all-flash. Nevertheless, the ability of these products to support multiple tiers of storage will be advantageous in a future that includes multiple classes of flash memory along with other non-volatile storage class memory technologies.

Both the VMAX and 3PAR StoreServ can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements.

DCIG updated its research on all-flash arrays during the first half of 2018. In so doing, DCIG identified many similarities between the VMAX and 3PAR StoreServ products including:

  • Extensive support for VMware API’s including VMware Virtual Volumes (VVols)
  • Integration with multiple virtualization management consoles
  • Rich data replication and data protection offerings
  • Support for a wide variety of client operating systems
  • Unified SAN and NAS

DCIG also identified significant differences between the VMAX and 3PAR StoreServ products including:

  • Data center footprint
  • High-end history
  • Licensing simplicity
  • Mainframe connectivity
  • Performance resources
  • Predictive analytics
  • Raw and effective storage density


To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares all-flash arrays from Dell EMC and HPE. This report is currently available for purchase on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




Five Key Differentiators between the Latest NetApp AFF A-Series and Hitachi Vantara VSP F-Series Platforms

Every enterprise-class all-flash array (AFA) delivers sub-1 millisecond response times on standard 4K and 8K performance benchmarks, high levels of availability, and a relatively full set of core data management features. As such, enterprises must examine AFA products closely to determine what actually differentiates them. When DCIG compared the newest AFAs from leading providers such as Hitachi Vantara and NetApp in its latest Pocket Analyst Report, differences between them quickly emerged.

Both Hitachi Vantara and NetApp refreshed their respective F-Series and A-Series lines of all-flash arrays (AFAs) in the first half of 2018. In so doing, many of the similarities between their products persisted; both continue to natively offer:

  • Unified SAN and NAS interfaces
  • Extensive support for VMware API’s including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

However, the latest AFA product refreshes from these two vendors also introduced some key areas where they diverge. While some of these changes reinforced the strengths of their respective product lines, other changes provided key insights into how the two vendors see the AFA market shaping up. The result is some key differences in product functionality that will impact these products in the years to come.


To help enterprises select the solution that best fits their needs, there are five key ways that the latest AFA products from Hitachi Vantara and NetApp differentiate themselves from one another. These five key differentiators include:

  1. Data protection and data reduction
  2. Flash performance optimization
  3. Predictive analytics
  4. Public cloud support
  5. Storage networking protocols

To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares these newest all-flash arrays from Hitachi Vantara and NetApp. This report is currently available for sale on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




NVMe Unleashing Performance and Storage System Innovation

Mainstream enterprise storage vendors are embracing NVMe. HPE, NetApp, Pure Storage, Dell EMC, Kaminario and Tegile all offer all-NVMe arrays. According to these vendors, the products will soon support storage class memory as well. NVMe protocol access to flash memory SSDs is a big deal. Support for storage class memory may become an even bigger deal.

NVMe Flash Delivers More Performance Than SAS


Using the NVMe protocol to talk to SSDs in a storage system increases the efficiency and effective performance capacity of each processor and of the overall storage system. The slimmed down NVMe protocol stack reduces processing overhead compared to legacy SCSI-based protocols. This yields lower storage latency and more IOPS per processor. This is a good thing.
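A back-of-the-envelope model shows why the slimmer stack matters. The per-I/O CPU costs below are hypothetical round numbers chosen only for illustration; the takeaway is that halving protocol overhead roughly doubles the IOPS a single core can drive.

```python
# Hypothetical per-I/O CPU costs in seconds; illustrative only.
stacks = {"legacy SCSI stack": 20e-6, "NVMe stack": 10e-6}

for name, cpu_seconds_per_io in stacks.items():
    iops_per_core = 1 / cpu_seconds_per_io
    print(f"{name}: one saturated core = {iops_per_core:,.0f} IOPS")
# legacy SCSI stack: one saturated core = 50,000 IOPS
# NVMe stack: one saturated core = 100,000 IOPS
```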

NVMe also delivers more bandwidth per SSD. Most NVMe SSDs connect via four (4) PCIe lanes. This yields up to 4 GB/s of bandwidth, an increase of more than 50% over the 2.4 GB/s maximum of a dual-ported SAS SSD. Since many all-flash arrays can saturate the path to the SSDs, this NVMe advantage translates directly into an increase in overall performance.
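The bandwidth math behind that comparison works out as follows, using the standard encoding overheads for PCIe 3.0 (128b/130b) and 12Gb/s SAS (8b/10b).

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding; NVMe SSDs typically use x4.
pcie_lane_gbps = 8 * (128 / 130)        # ~7.88 Gb/s usable per lane
nvme_gb_per_s = 4 * pcie_lane_gbps / 8  # ~3.94 GB/s across four lanes

# SAS-3: 12 Gb/s per port with 8b/10b encoding; dual-ported SSDs use two ports.
sas_gb_per_s = 2 * 12 * (8 / 10) / 8    # = 2.4 GB/s across both ports

print(f"NVMe x4: {nvme_gb_per_s:.2f} GB/s  SAS dual-port: {sas_gb_per_s:.1f} GB/s")
print(f"NVMe advantage: {100 * (nvme_gb_per_s / sas_gb_per_s - 1):.0f}%")  # ~64%
```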

The newest generation of all-flash arrays combines these NVMe benefits with a new generation of Intel processors to deliver more performance in less space. It is this combination that, for example, enables HPE to claim that its new Nimble Storage arrays offer twice the scalability of the prior generation. This is a very good thing.

The early entrants into the NVMe array marketplace charged a substantial premium for NVMe performance. As NVMe goes mainstream, the price gap between NVMe SSDs and SAS SSDs is rapidly narrowing. With many vendors now offering NVMe arrays, competition should soon eliminate the price premium. Indeed, Pure Storage claims to have done so already.

Storage Class Memory is Non-Volatile Memory

Non-volatile memory (NVM) refers to memory that retains data even when power is removed. The term applies to many technologies that have been widely used for decades, including EPROM, ROM, and NAND flash (the type of NVM commonly used in SSDs and memory cards). NVM also refers to newer or less widely used technologies, including 3D XPoint, ReRAM, MRAM and STT-RAM.

Because NVM properly refers to such a wide range of technologies, many people use the term Storage Class Memory (SCM) to refer to emerging byte-addressable non-volatile memory technologies that may soon be used in enterprise storage systems. These SCM technologies include 3D XPoint, ReRAM, MRAM and STT-RAM. SCM offers several advantages compared to NAND flash (a sketch of what byte-addressability means in practice follows the list):

  • Much lower latency
  • Much higher write endurance
  • Byte-addressable (like DRAM memory)
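Byte-addressability is the property that most distinguishes SCM from NAND flash, which is read and written in pages. The sketch below illustrates it conceptually: on a system that exposes persistent memory as a DAX-style memory-mapped file (the path here is hypothetical), software can update a single byte in place instead of rewriting an entire block.

```python
import mmap
import os

PATH = "/mnt/pmem/state.bin"  # hypothetical file on a DAX-mounted pmem device

fd = os.open(PATH, os.O_RDWR)
mm = mmap.mmap(fd, 0)          # map the whole file into the address space

mm[42] = 0xFF  # update a single byte in place; no block read-modify-write
mm.flush()     # push the store out toward persistent media

mm.close()
os.close(fd)
```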

Storage Class Memory Enables Storage System Innovation

Byte-addressable non-volatile memory on NVMe/PCIe opens up a wonderful set of opportunities to system architects. Initially, storage class memory will generally be used as an expanded cache or as the highest performing tier of persistent storage. Thus it will complement rather than replace NAND flash memory in most storage systems. For example, HPE has announced it will use Intel Optane (3D XPoint) as an extension of DRAM cache. Their tests of HPE 3PAR 3D Cache produced a 50% reduction in latency and an 80% increase in IOPS.
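The cache-extension use is easy to picture as a two-tier LRU: reads check a small DRAM tier first, then a larger SCM tier, and DRAM evictions are demoted to SCM rather than discarded. The minimal model below (tier sizes and names are arbitrary) only captures that structure; real implementations such as HPE 3PAR 3D Cache are far more sophisticated.

```python
from collections import OrderedDict

class TieredReadCache:
    """Two-tier LRU read cache: a small DRAM tier backed by a larger SCM tier."""

    def __init__(self, dram_slots, scm_slots):
        self.dram = OrderedDict()   # fastest, smallest tier
        self.scm = OrderedDict()    # slower than DRAM, far faster than flash
        self.dram_slots = dram_slots
        self.scm_slots = scm_slots

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)      # refresh LRU position
            return self.dram[key]
        if key in self.scm:
            value = self.scm.pop(key)
            self.put(key, value)            # promote hot data back to DRAM
            return value
        return None                         # miss: caller reads from flash

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        if len(self.dram) > self.dram_slots:
            old_key, old_value = self.dram.popitem(last=False)
            self.scm[old_key] = old_value   # demote instead of discarding
            if len(self.scm) > self.scm_slots:
                self.scm.popitem(last=False)  # finally evict the coldest data
```

Because demoted data remains in SCM, a re-read that would otherwise travel all the way to flash is served at near-memory latency, which is broadly the effect behind the latency and IOPS gains HPE cites.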

Some of the innovative uses of SCM will probably never be mainstream, but will make sense for a specific set of use cases where microseconds can mean millions of dollars. For example, E8 Storage uses 100% Intel Optane SCM in its E8-X24 centralized NVMe appliance to deliver extreme performance.

Remain Calm, Look for Short Term Wins, Anticipate Major Changes

We humans have a tendency to overestimate short term and underestimate long term impacts. In a recent blog article we asserted that NVMe is an exciting and needed breakthrough, but that differences persist between what NVMe promises for all-flash array and hyperconverged solutions and what they can deliver in 2018. Nevertheless, IT professionals should look for real application and requirements-based opportunities for NVMe, even in the short term.

Longer term, the emergence of NVMe and storage class memory are steps on the path to a new data centric architecture. As we have previously suggested, enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture.




DCIG 2018-19 All-flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2018-19 All-flash Array Buyer’s Guide edition developed from its enterprise storage array body of research. This 64-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-two (32) enterprise class all-flash arrays that achieved rankings of Recommended or Excellent based on a comprehensive scoring of product features. These products come from seven (7) vendors including Dell EMC, Hitachi Vantara, HPE, Huawei, NetApp, Pure Storage and Tegile.


DCIG’s succinct analysis provides insight into the state of the all-flash array marketplace, the benefits organizations can expect to achieve, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product.

The DCIG 2018-19 All-flash Array Buyer’s Guide helps businesses drive time and cost out of the all-flash array selection process by:

  • Describing key product considerations and important changes in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Presenting product feature data in standardized one-page data sheets that facilitate rapid feature-based comparisons

It is in this context that DCIG presents the DCIG 2018-19 All-Flash Array Buyer’s Guide. As prior DCIG Buyer’s Guides have done, it puts at the fingertips of enterprises a resource that can assist them in this important buying decision.

Access to this Buyer’s Guide edition is available through the DCIG partner site TechTrove.




DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide developed from its enterprise storage array body of research. This 72-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-eight (38) enterprise class all-flash arrays that achieved rankings of Recommended or Excellent. These products come from nine (9) vendors including Dell EMC, Hitachi Vantara, HPE, Huawei, IBM, Kaminario, NetApp, Pure Storage and Tegile.

DCIG’s succinct analysis provides insight into the state of the enterprise all-flash storage array marketplace, the benefits organizations can expect to achieve, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product.

The DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide helps businesses drive time and cost out of the all-flash array selection process by:

  • Describing key product considerations and important changes in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Presenting product feature data in standardized one-page data sheets that facilitate rapid feature-based comparisons

It is in this context that DCIG presents the DCIG 2018-19 Enterprise General Purpose All-Flash Array Buyer’s Guide. As prior DCIG Buyer’s Guides have done, it puts at the fingertips of enterprises a resource that can assist them in this important buying decision.

Access to this Buyer’s Guide is available through the DCIG partner site TechTrove.
