The DRAM era of memory – volatile, scarce, and expensive – is drawing to a close. The era of big memory – large, persistent, virtualized, and composable – is about to replace it.
Big Memory Needs Software-defined Memory
Multiple technology advances are enabling the transition to the big memory era. These include new CPUs, interconnects, fabrics, and, most importantly, storage class persistent memories such as Optane and MRAM.
Big Memory needs software-defined memory to bring the benefits of advances in persistent memory to enterprise applications. Software-defined memory makes these advanced technologies available to applications in standard ways.
It makes sense for this capability to be a specialized virtualization layer that makes pools of memory available to operating systems, hypervisors, and applications. This way, software applications can experience the benefits of big memory without developers having to write multi-tiered memory management into their applications.
The Big Memory Revolution Is Upon Us
We are in the early phases of a persistent-memory-enabled revolution in performance, cost, and capacity. Multiple vendors are now shipping storage class memory in their enterprise servers and storage systems. Now that storage class memories are available in production volumes, the ecosystem is coming together. The revolution has begun.
Progress is Faster Than Many People Realize
The first enterprise products using Intel Optane SSDs and memory debuted in 2019. Well-publicized delays in the Intel CPUs supporting these new standards limited the availability of products integrating persistent memory. But a new generation of servers is now becoming available, supporting memory capacities of up to 6TB per CPU, and these same servers can support much more memory via data fabrics.
Advances in Infrastructure Standards and Products Creating the Next-generation Data Center
Making big memory broadly available to a wide range of standard workloads requires an ecosystem beyond the storage class memory itself. Standards such as DDR5, PCIe 5.0, CXL, Gen-Z, and NVMe are essential components in—and enablers of—the next-generation data center.
DDR5 is the next generation of DRAM. It advances the state of the art in multiple ways, including 2x the bandwidth and 4x the capacity of DDR4 memory.
PCIe 5.0 takes memory expansion beyond the DDR bus, quadrupling the performance of the widely deployed PCIe 3.0. The PCIe 5.0 specification was approved in 2019. Products that incorporate PCIe 5.0 are available now, with many more products expected throughout 2022.
Compute Express Link (CXL) extends memory beyond the DDR bus, creating a data fabric. CXL is a high-performance, low-latency fabric based on the PCIe 5.0 physical interface. It is significant because it lets servers create even larger pools of DRAM and persistent memory than DDR5 alone supports.
Gen-Z* extends the data fabric from a rack to a row, or even multiple rows, in the data center. This creates an opportunity for dramatically more powerful data center architectures that will provide CPUs, GPGPUs, DPUs, and other specialized accelerators with direct access to these pools of memory.
*On November 10, 2021, the Gen-Z Consortium announced it had signed a Letter of Intent that would transfer the Gen-Z Specifications and all Gen-Z assets to the CXL Consortium. This should further accelerate the adoption of a memory coherent interface with CXL as the sole industry standard moving forward. DCIG views this development as a good thing.
The Software-defined Memory Landscape
Persistent memories such as Intel Optane support multiple access modes, including some that are not persistent.
- Most enterprise products that integrate Optane treat it as persistent block storage
- Caching-oriented storage systems use Optane DIMMs or SSDs as a persistent caching layer
- Tiering-oriented storage systems use Optane SSDs as the new fastest tier of storage
Just as RAMdisks are faster than hard disk drives, Optane SSDs are faster than NAND flash SSDs. Nevertheless, the full benefits of Optane can only be unlocked when addressing it as memory rather than as a fast disk.
Some database vendors have updated their applications to use Optane DIMMs as byte-addressable memory. By treating Optane DIMMs as a large memory tier, they achieve orders-of-magnitude improvements in performance when the entire working set fits into this new, large-capacity memory space, because doing so eliminates disk I/O from data processing. The sketch below illustrates what byte-addressable access looks like in practice.
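As a minimal sketch of the difference, the following C program maps a file on a DAX-mounted filesystem (backed by persistent memory DIMMs) directly into its address space and updates it with ordinary CPU stores. The path /mnt/pmem/data and the region size are illustrative assumptions, not details of any particular vendor's product.

```c
/* Minimal sketch of byte-addressable persistent memory access.
 * Assumes a file on a DAX-mounted filesystem (e.g. /mnt/pmem/data)
 * backed by persistent memory DIMMs; path and size are illustrative. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define POOL_SIZE (1ULL << 30)   /* 1 GiB region, illustrative */

int main(void)
{
    int fd = open("/mnt/pmem/data", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, POOL_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Map the persistent region directly into the address space.
     * Loads and stores now bypass the block layer entirely. */
    uint8_t *pmem = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ordinary CPU stores; no read()/write() system calls, no disk I/O. */
    strcpy((char *)pmem, "record updated in place, byte-addressably");

    /* Make the update durable by flushing it to the persistent media.
     * (Real deployments typically use MAP_SYNC or PMDK's pmem_persist.) */
    if (msync(pmem, 4096, MS_SYNC) != 0) { perror("msync"); return 1; }

    munmap(pmem, POOL_SIZE);
    close(fd);
    return 0;
}
```

The point is that durability comes from flushing CPU caches rather than from issuing block I/O, which is why keeping the working set in persistent memory removes disk reads and writes from the data path.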
The Need for a Software-defined Memory Layer
Enterprise data centers are becoming software-defined and composable. While some companies tout their software-defined data center (SDDC) solutions, one of the most vital performance resources, memory, has up to this point been excluded.
We need software-defined memory to create a sizeable fabric-attached pool of memory that can be allocated and orchestrated, much like containers are today.
Software-defined Memory Options
Several enterprise technology providers have announced software-defined memory solutions. These include:
- VMware with its Project Capitola
- Samsung’s Open Source Scalable Memory Development Kit (SMDK) for CXL fabrics
- MemVerge Memory Machine
Both Project Capitola and SMDK were announced as technical previews in October 2021. They demonstrate the need for software-defined memory and will eventually be competitors in this space.
MemVerge is the software-defined memory pioneer. Its Memory Machine has been available since 2020. The Memory Machine merges multiple types of memory into a coherent pool and works in current data center and cloud infrastructures.
MemVerge Does for Memory What ESXi Did for CPU
Before VMware, many servers routinely utilized only about 20% of available CPU cycles. ESXi enabled server consolidation by allowing those CPU cycles to be fully utilized.
MemVerge does the same thing for memory resources, including large-capacity persistent memory stores. The Memory Machine abstracts all available memory, including dynamic and persistent memory, and pools that memory into units that can be allocated to virtual servers.
This makes large heterogeneous memories available to existing software applications without having to rewrite them. Thus, MemVerge enables big memory to fit into current enterprise data centers.
In many data centers, memory is now the primary cost and performance bottleneck. MemVerge eliminates this bottleneck, unleashing a new wave of cost-saving consolidation opportunities. It also opens up a whole new range of performance options for demanding data-intensive workloads.
MemVerge Does for Memory What Snapshots Did for Storage
MemVerge delivers both big memory virtualization and enterprise data services for memory. This unlocks vast new opportunities to accelerate day-to-day application performance. It also enables quick recovery from application crashes.
Experienced storage administrators quickly grasp the opportunities these data services create for their organizations. For example, they can use snapshots to:
- roll back to a point in time
- move an application to another server
- rapidly clone an application, including its entire state (a conceptual sketch of these operations follows)
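To make those operations concrete, here is a conceptual sketch of a point-in-time snapshot and roll-back built only from standard POSIX calls against DAX-backed files. It illustrates the idea behind memory data services; it is not MemVerge's API, and the file paths, sizes, and full-copy approach are simplifying assumptions (a production service would use copy-on-write).

```c
/* Conceptual sketch of an in-memory snapshot and roll-back using plain
 * POSIX calls against two DAX-backed files. Illustrative only; not
 * MemVerge's interface, and the paths and sizes are hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (64UL * 1024 * 1024)   /* 64 MiB working region */

static void *map_file(const char *path)
{
    int fd = open(path, O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, REGION_SIZE) != 0) return MAP_FAILED;
    void *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);                      /* the mapping stays valid */
    return p;
}

int main(void)
{
    char *live = map_file("/mnt/pmem/app-live");   /* application state */
    char *snap = map_file("/mnt/pmem/app-snap");   /* point-in-time copy */
    if (live == MAP_FAILED || snap == MAP_FAILED) { perror("map"); return 1; }

    strcpy(live, "state before the risky operation");

    /* "Snapshot": capture the live region at a point in time. A production
     * memory service would use copy-on-write instead of a full copy. */
    memcpy(snap, live, REGION_SIZE);
    msync(snap, REGION_SIZE, MS_SYNC);

    strcpy(live, "state corrupted by a crash or bad update");

    /* "Roll back": restore the live region from the snapshot. The same
     * snapshot file could be mapped on another server to move or clone
     * the application's state. */
    memcpy(live, snap, REGION_SIZE);
    printf("%s\n", live);           /* prints the pre-operation state */

    munmap(live, REGION_SIZE);
    munmap(snap, REGION_SIZE);
    return 0;
}
```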
Conclusion – The Software-defined Memory Era Begins Now
The era of large, software-defined memory begins now.
The need is obvious. The enabling technologies are real.
Organizations have realized many benefits from virtualizing compute, storage, and the network. Those benefits include dramatic reductions in data center CAPEX and OPEX, along with enhanced resiliency.
Memory is the last performance resource to be virtualized. That is happening now, realized in the MemVerge Memory Machine solution. Early adopters are achieving success across multiple extremely demanding use cases.
Right now is the time to reimagine what is possible. Begin planning your next infrastructure around software-defined big memory.
Keep Up to Date with DCIG
To be notified of new DCIG articles, reports, and webinars, sign up for DCIG’s free weekly Newsletter.
Technology providers interested in licensing DCIG TOP 5 reports or having DCIG produce custom reports should contact DCIG for more information.
Editor's Note: MemVerge is a DCIG client, and DCIG is developing reports for them. However, MemVerge did not have any editorial input into this article.