Compute Express Link (CXL) is the new locus of data center innovation. It will soon prove to be a key enabler of improvements in data center performance and efficiency for hyperscalers and enterprises alike. David Hall, VP of NVIDIA Solutions at Lambda, recently said that the differentiation in data centers will be in the interconnect. CXL is that interconnect.
Significant advances in the Compute Express Link™ CPU-to-device interconnect were on display at SC23, and more are being revealed at the Consumer Electronics Show (CES) this week. The CXL ecosystem includes IT infrastructure giants as well as many startups. At SC23, the CXL Consortium booth included demos from IntelliProp, MemVerge, Microchip, Micron, Samsung, XConn Technologies, and many more. Though the products on display were, for the most part, engineering samples, these companies will begin to deliver the benefits of this rising interconnect standard to the marketplace in 2024.
CXL Transforms PCIe to DCIe (Data Center Interconnect)
PCIe stands for “Peripheral Component Interconnect Express.” PCIe is the ubiquitous standard for connecting high-speed components, such as graphics cards and network interface cards, within a computer. CXL uses the same hardware interface as PCIe but transforms it from a peripheral interconnect within a single computer into a Data Center Interconnect linking performance resources.
Beginning with CXL 3.0, CXL-enabled devices plugged into a PCIe interface are no longer mere peripherals. Instead, they become nodes on a high-performance, low-latency data fabric. CXL was designed with shared memory in mind, so its protocols carry far less overhead than, for example, Ethernet.
CXL Enables Data Center Performance
Prior to the emergence of NVMe, storage latencies were discussed in terms of milliseconds. As providers implemented end-to-end NVMe, milliseconds became microseconds. Multiple vendors now claim to deliver NVMe-based storage performance of less than 100 microseconds.
CXL increases memory capacity. With CXL, the conversation shifts from microseconds to nanoseconds, the same latency category we use for DRAM. Early CXL memory products are demonstrating sub-200-nanosecond latencies. While that is nearly double the latency of a CPU’s directly attached DRAM, it is a dramatic improvement over any alternative approach to making large pools of memory available to applications.
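To put those figures in perspective, the short calculation below compares the order-of-magnitude latencies cited above. The specific values are illustrative assumptions drawn from the round numbers in this article; real latencies vary by platform and workload.

```python
# Order-of-magnitude latency comparison (illustrative figures, in nanoseconds)
NVME_NS = 100_000  # ~100 microseconds: fast end-to-end NVMe storage
CXL_NS = 200       # sub-200 ns: early CXL-attached memory products
DRAM_NS = 100      # ~half the CXL figure: a CPU's directly attached DRAM

# CXL memory is roughly 2x slower than local DRAM...
print(f"CXL vs DRAM:  {CXL_NS / DRAM_NS:.0f}x the latency")

# ...but roughly 500x faster than even very fast NVMe storage.
print(f"NVMe vs CXL:  {NVME_NS / CXL_NS:.0f}x the latency")
```

In other words, CXL memory sits in the same latency class as DRAM, not storage, which is why it changes the conversation about pooled memory.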
CXL increases memory bandwidth. My initial thoughts about the benefits of CXL memory were about how it could increase performance by providing more memory capacity to memory-intensive applications. However, one of the CXL demonstrations at SC23 showed CXL-attached memory delivering substantial performance improvements to an HPC application compared to using 100% local DRAM, even though the application was using the same amount of total memory. The application had been memory bandwidth-bound, not memory capacity-bound. Thus, another way CXL enables data center performance is by making more memory bandwidth available to applications.
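A minimal sketch of what “memory bandwidth-bound” means: a STREAM-style triad kernel moves many bytes per arithmetic operation, so its runtime is governed by how fast memory can feed the CPU rather than by compute speed. Adding CXL-attached memory channels gives such a workload more aggregate bandwidth to draw on. The array size and NumPy implementation below are illustrative assumptions, not the SC23 demo itself.

```python
import time

import numpy as np

n = 10_000_000  # ~80 MB per array: far larger than CPU caches
a = np.zeros(n)
b = np.ones(n)
c = np.full(n, 2.0)

t0 = time.perf_counter()
a[:] = b + 3.0 * c  # STREAM "triad": one multiply-add per element
dt = time.perf_counter() - t0

# Each element reads b and c and writes a: ~3 arrays x 8 bytes moved
gb_moved = 3 * n * 8 / 1e9
print(f"Effective triad bandwidth: {gb_moved / dt:.1f} GB/s")
```

Because the kernel does only one multiply-add per 24 bytes moved, a faster CPU barely helps; more memory bandwidth does, which matches the bandwidth-bound behavior observed in the HPC demonstration.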
CXL Enables Data Center Efficiency
Anyone paying attention to data center infrastructure is aware of the proliferation of processors in the data center, sometimes referred to as the “xPU.” In addition to CPUs, we now have GPUs, NPUs, FPGAs, SmartNICs, and other accelerators. Up to now, these processors have been tied to a CPU in a specific appliance. With CXL, these devices become nodes on a fabric. Thus, CXL will encourage further xPU proliferation by facilitating the integration of new workload-specific accelerators into the data center.
For example, Panmnesia recently announced a CXL-based AI accelerator that it claims will speed up AI searches by more than 100x. The achievement earned the company an Innovation Award at the Consumer Electronics Show (CES) 2024. Another CXL innovator demonstrated a solution delivering 2x the performance while using half the power of traditional infrastructure. These types of gains in performance per rack unit and per watt will deliver valuable improvements in data center efficiency.
CXL Implications for IT Infrastructure Planning
While CXL is mostly at the engineering-sample stage of delivery, multiple vendors tell me that they will move to production in 2024. They are seeing strong interest from hyperscalers due to the performance and efficiency that CXL can deliver. I believe that CXL-enabled innovations will be available to enterprise technology architects within the current technology refresh cycles of many organizations. Thus, enterprise IT leaders would do well to learn about CXL and track the progress of the technology in 2024 and beyond.
KEEP UP TO DATE WITH DCIG
To be notified of new DCIG articles, reports, and webinars, sign up for DCIG’s free weekly Newsletter.
To learn about DCIG’s future research and publications, see the DCIG Research & Publication Calendar.
Technology providers interested in licensing DCIG TOP 5 reports or having DCIG produce custom reports, please contact DCIG for more information.