If your mental image of a high-end storage array is rack after rack full of blinking lights, you would be right only about half the time. Today, nearly half of all high-end storage array models support configurations that fit into less than 8RU—that is just fourteen inches of rack space. How can this be?
DCIG recently updated its research into enterprise storage and has begun working on reports covering the high-end storage array segment of the marketplace. As we looked at the results and spoke with vendors, it became clear that several recent advances in storage technology are redefining the high-end. This article identifies four of those technologies and the impact they are having.
High Availability is the Primary Criterion for High-End Storage
DCIG defines high-end storage as the storage enterprises choose to run their mission-critical workloads. These storage systems must provide the capacity and performance necessary to meet day-to-day business requirements. However, the primary criterion for high-end storage is high availability. The two high availability capabilities that are essential at the high-end are:
- Replication supporting at least three (3) data centers. High-end storage must provide synchronous replication between at least two (2) arrays to enable zero-RPO failover. The array’s replication technology must also allow for at least one geographically remote system, generally through asynchronous replication.
- Resilient architecture. A high-end array must be able to survive multiple failures and provide for non-disruptive upgrades. Most often, this resilience is delivered through advanced clustering technology, as exemplified by the HPE Primera and by Huawei’s SmartMatrix architecture. Some vendors enable non-disruptive operations through a stateless controller architecture. This is the approach taken by Pure Storage in its FlashArray//X series.
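The replication requirement above can be made concrete with a small sketch. The following Python model is illustrative only (not any vendor's replication engine): a write is acknowledged only after both the primary and its synchronous partner persist it, which is what makes zero-RPO failover possible, while the geographically remote third site receives writes asynchronously and may lag.

```python
# Illustrative model of three-data-center replication:
# site A (primary) and site B (synchronous partner) are always identical
# at acknowledgment time; site C (remote) catches up asynchronously.

class Array:
    def __init__(self, name):
        self.name = name
        self.log = []          # persisted writes, in order

    def persist(self, record):
        self.log.append(record)

def write(primary, sync_partner, async_queue, record):
    """Acknowledge only after both synchronous copies are persisted."""
    primary.persist(record)
    sync_partner.persist(record)   # blocks the ack -> zero RPO on failover
    async_queue.append(record)     # shipped to the remote site later
    return "ack"

def drain(async_queue, remote):
    """Background replication to the geographically remote array."""
    while async_queue:
        remote.persist(async_queue.pop(0))

site_a, site_b, site_c = Array("A"), Array("B"), Array("C")
pending = []
for i in range(3):
    write(site_a, site_b, pending, f"block-{i}")

# Before the async drain runs, A and B match exactly; C lags behind.
assert site_a.log == site_b.log
drain(pending, site_c)
assert site_c.log == site_a.log
```

The design point the sketch captures is the trade-off: synchronous replication bounds data loss at zero but adds round-trip latency to every write, so the third, distant site is kept current asynchronously instead.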
The high availability requirements that most enterprises have for their mission-critical workloads no longer require rack-scale storage. Four advances in storage technology that are redefining the high-end and shrinking the footprint of enterprise storage are NVMe, NVMe/FC, storage-class memory, and predictive analytics with proactive support.
Storage Technology #1: NVMe
Designed for SSDs. The NVMe specification was designed from the ground up for non-volatile memories. The slimmed-down NVMe protocol stack reduces processing overhead compared to legacy SCSI-based protocols such as SAS. This yields lower storage latency and more IOPS per processor.
Highly parallel. The NVMe architecture brings a new high-performance queuing mechanism that replaces SATA’s single queue per SSD (limited to 32 outstanding commands) with up to 65,535 I/O queues, each supporting up to 65,535 outstanding commands. Many vendors take advantage of this capability by mapping dedicated I/O queues to each CPU core.
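The per-core queue mapping described above can be sketched in a few lines. This is a simplified illustration (not driver code): each CPU core gets its own submission queue, so cores never contend on a shared lock when issuing I/O.

```python
# Illustrative sketch: assign one dedicated NVMe I/O queue per CPU core,
# capped by the device's queue limit. With a dedicated queue per core,
# I/O submission needs no cross-core locking.

def map_queues(num_cores, max_queues=65535):
    """Return a core -> queue mapping; queues are shared only if cores
    outnumber the device's available queues."""
    num_queues = min(num_cores, max_queues)
    return {core: core % num_queues for core in range(num_cores)}

# On a 32-core host, every core gets its own queue.
mapping = map_queues(32)
assert len(set(mapping.values())) == 32
```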
More bandwidth per SSD. NVMe also delivers more bandwidth per SSD. Most NVMe SSDs connect via four (4) PCIe channels. This yields close to 4 GB/s of bandwidth per SSD, roughly two-thirds more than the 2.4 GB/s maximum of a dual-ported 12 Gb SAS SSD. This increased bandwidth matches well with increasing SSD capacities of 7.68, 15.36 and even 30.72 TB per SSD.
More than 70% of the high-end storage arrays DCIG researched now support NVMe connectivity to storage media. The rest connect via 12 Gb SAS. Many of the products support both technologies.
Storage Technology #2: NVMe/FC
NVMe/FC extends the benefits of NVMe storage across a Fibre Channel network fabric connecting application hosts to networked storage. Of the 37 high-end storage products DCIG researched, 35% now support NVMe/FC.
Storage Technology #3: Storage-class Memory (SCM)
Enterprise storage vendors are incorporating storage-class memory into their designs to reduce latency further and enhance throughput. HPE uses 750 GB NVMe Optane SCM add-in cards as a cache in its systems. Dell EMC uses dual-ported NVMe Optane SSDs as a storage layer in its PowerMax. More than 25% of the high-end storage arrays also incorporate SCM on the memory bus via NVDIMMs.
Storage Technology #4: Predictive Analytics with Proactive Support
Predictive analytics with proactive support enhances availability by identifying and resolving problems before they cause degraded performance or downtime. Some vendors are even incorporating AI processors into their high-end storage arrays. Beyond eliminating downtime, predictive analytics enable data center infrastructures to autonomously optimize themselves for application availability, performance and total cost of ownership based on the customer’s priorities.
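To make the idea behind predictive analytics concrete, here is a deliberately simple sketch, not any vendor's implementation: flag a drive whose latency drifts well above its recent baseline, so it can be serviced before it degrades the array. The sample data and the three-sigma threshold are illustrative assumptions.

```python
# Hedged sketch of predictive failure detection: flag samples that exceed
# a rolling baseline (mean + N standard deviations) computed over the
# preceding window of latency measurements.

from statistics import mean, stdev

def flag_anomalies(latencies_ms, window=5, sigmas=3.0):
    """Return indices whose latency exceeds baseline mean + sigmas * stdev."""
    flagged = []
    for i in range(window, len(latencies_ms)):
        baseline = latencies_ms[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        # Floor the stdev so a perfectly flat baseline still has a threshold.
        if latencies_ms[i] > mu + sigmas * max(sd, 0.01):
            flagged.append(i)
    return flagged

# A drive whose read latency spikes from ~0.2 ms to ~1 ms gets flagged.
samples = [0.21, 0.22, 0.20, 0.23, 0.21, 0.22, 0.95, 0.22]
assert flag_anomalies(samples) == [6]
```

Production systems apply far richer models across fleet-wide telemetry, but the principle is the same: detect the drift before it becomes downtime.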
Storage vendors and their customers are achieving meaningful benefits including:
- Measurably reducing downtime
- Avoiding preventable downtime
- Optimizing application performance
- Significantly reducing operational expenses
A Powerful Combined Impact
Storage vendors are bringing all of these technologies together in their high-end storage arrays. The results include extreme levels of availability and performance in a very small footprint. In doing so, they are redefining high-end storage and placing these high-end capabilities within reach of a larger number of businesses. Enterprises considering a storage refresh, or seeking to optimize data center performance, should expand their search for solutions to include high-end storage. Those that do will gain a new picture of high-end storage.
DCIG will continue to cover developments in enterprise storage. If you haven’t already done so, please sign up for the weekly DCIG Newsletter so that we can keep you informed of these ongoing developments.