Research Goal
The growing demand for compute performance, together with advances in silicon technology, has driven the integration of multiple heterogeneous accelerators into single Systems-on-Chip (SoCs). This integration aims to deliver higher performance and energy efficiency for compute-intensive workloads. While data access between memory subsystems and accelerators has been extensively optimized, data exchange between accelerators remains largely overlooked, limiting the overall performance of heterogeneous SoCs.
Data copying across heterogeneous accelerators raises three interrelated challenges:
- Memory-boundedness of modern workloads.
Modern workloads, such as large language models (LLMs), are increasingly memory-bound due to limited data reuse. Simply scaling compute resources is insufficient if the underlying data movement cannot keep up.
- Support for complex in-memory data layouts.
In-memory data layouts must align with the diverse access patterns of different accelerators. Suboptimal layouts can increase inference latency by up to two orders of magnitude, because explicit data layout transformations are both energy- and latency-intensive. Although Direct Memory Access (DMA) engines offer high bandwidth utilization, they are typically efficient only for contiguous memory accesses. Supporting complex, accelerator-specific layouts often requires additional software loops for address generation, causing excessive control overhead and underutilization of on-chip interconnect bandwidth (see the sketch after this list).
- Efficient point-to-multipoint (P2MP) data movement.
When the same data must be copied to multiple destinations (e.g., broadcasting model parameters or shared activations), traditional DMA engines perform repeated read-write operations for each target. This leads to redundant traffic and poor energy efficiency. Addressing this P2MP requirement calls for multicast-like capabilities. However, standard interconnect protocols lack native multicast support, and existing P2MP solutions, such as multicast-capable Networks-on-Chip (NoCs), introduce significant hardware overhead and require protocol modifications, undermining scalability and compatibility with existing SoC fabrics.
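To make the address-generation overhead concrete, the following minimal C sketch copies a 2D tile whose rows are not contiguous in memory. The dma_1d_copy() primitive is a hypothetical stand-in for a conventional DMA engine that only handles contiguous ranges, with memcpy modeling the actual data movement; none of these names come from XDMA itself.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for a conventional DMA primitive that is only
 * efficient for a single contiguous range; memcpy models the transfer. */
static void dma_1d_copy(uint8_t *dst, const uint8_t *src, size_t bytes)
{
    memcpy(dst, src, bytes);
}

/* Extracting a rows x cols tile from a row-major matrix: the tile's rows
 * are strided, so software must issue one short transfer per row and
 * recompute source/destination addresses in between. The per-transfer
 * control overhead (address generation, descriptor setup, completion
 * handling) grows with the number of rows and leaves the interconnect
 * underutilized when each row is short. */
static void copy_tile_sw(uint8_t *dst, const uint8_t *src,
                         size_t rows, size_t cols,
                         size_t src_stride, size_t dst_stride)
{
    for (size_t r = 0; r < rows; r++)
        dma_1d_copy(dst + r * dst_stride,  /* software address generation */
                    src + r * src_stride,
                    cols);                 /* each transfer is short      */
}

int main(void)
{
    uint8_t matrix[16 * 16], tile[4 * 4];
    for (int i = 0; i < 16 * 16; i++)
        matrix[i] = (uint8_t)i;

    /* 4x4 tile starting at (row 2, col 3): four separate DMA transfers. */
    copy_tile_sw(tile, &matrix[2 * 16 + 3], 4, 4, 16, 4);
    printf("tile[0][0] = %d\n", tile[0]);  /* prints 35 */
    return 0;
}
```

A hardware address generator can instead walk such a strided pattern autonomously from a single descriptor, which is the approach XDMA takes below.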
Recent Results for XDMA
To address these three challenges, this project proposes XDMA, a distributed DMA architecture that enables flexible and efficient data movement within heterogeneous multi-accelerator SoCs. To tackle the in-memory layout problem, a data streaming engine integrates hardware-based address generators that replace software address generation, reducing control overhead while sustaining high interconnect utilization. To support efficient SoC-level broadcasting, an application-layer broadcasting mechanism named Chainwrite relocates the multicast operation from the network routers to the DMA endpoints. Chainwrite preserves the peer-to-peer nature of data transfers while providing scalable, energy-efficient delivery of identical data to an arbitrary number of destinations.
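The following C sketch illustrates how these two mechanisms could be exposed to software. All names here (xdma_pattern, xdma_desc, XDMA_DIMS, XDMA_MAX_DST, xdma_submit) are hypothetical illustrations of the concepts, not the actual XDMA programming interface.

```c
#include <stddef.h>
#include <stdint.h>

#define XDMA_DIMS    3  /* nesting depth of the hardware address generator */
#define XDMA_MAX_DST 8  /* destinations reachable by one Chainwrite chain  */

/* Affine access pattern executed by a hardware address generator in the
 * streaming engine: for each dimension d, count[d] iterations advance the
 * address by stride[d] bytes. A single descriptor thus replaces the
 * per-row software loop of the earlier sketch. */
struct xdma_pattern {
    uint64_t base;               /* start address            */
    uint32_t count[XDMA_DIMS];   /* iterations per dimension */
    int64_t  stride[XDMA_DIMS];  /* byte step per dimension  */
};

/* One transfer descriptor. dst[0..n_dst-1] forms an ordered chain: the
 * source streams the data once to dst[0], and each DMA endpoint commits
 * its local copy while forwarding the same stream to the next endpoint.
 * Every hop is an ordinary peer-to-peer write, so no router or
 * interconnect-protocol changes are needed. */
struct xdma_desc {
    struct xdma_pattern src;
    struct xdma_pattern dst[XDMA_MAX_DST];
    uint32_t            n_dst;
};

/* Stub: a real driver would program the engine's configuration registers. */
static void xdma_submit(const struct xdma_desc *d)
{
    (void)d;
}

/* Example: broadcast one 4x4 tile of a row-major 16x16 byte matrix to
 * three accelerators' local memories with a single descriptor. */
void broadcast_tile(uint64_t src_base, const uint64_t dst_base[3])
{
    struct xdma_desc d = {
        .src = { .base   = src_base,
                 .count  = { 4, 4, 1 },    /* 4 bytes per row, 4 rows */
                 .stride = { 1, 16, 0 } }, /* innermost dimension first */
        .n_dst = 3,
    };
    for (uint32_t i = 0; i < 3; i++)
        d.dst[i] = (struct xdma_pattern){
            .base   = dst_base[i],
            .count  = { 4, 4, 1 },
            .stride = { 1, 4, 0 } };       /* tile stored densely */
    xdma_submit(&d);
}
```

Under such a chaining scheme, the source data would be read once per chain rather than once per destination, in contrast to the repeated read-write operations of a traditional DMA engine.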
Both the XDMA Frontend and the XDMA Backend are open-sourced.