Event - 23 February 2024

BitWave: Exploiting Column-Based Bit-Level Sparsity for Deep Learning Acceleration

Presented by Man (Amanda) Shi

What

Bit-serial computation enables bit-wise sequential data processing, offering numerous benefits such as a reduced area footprint and dynamically adaptive computational precision. It has emerged as a prominent approach, particularly for leveraging bit-level sparsity in Deep Neural Networks (DNNs). Existing bit-serial accelerators exploit bit-level sparsity to reduce computation by skipping zero bits, but they suffer from inefficient memory accesses due to the irregular indices of the non-zero bits.
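
To make the zero-bit-skipping idea concrete, here is a minimal Python sketch of a bit-serial multiply-accumulate; the function name and the unsigned 8-bit weights are illustrative assumptions, not BitWave's actual design:

def bit_serial_mac(weights, activations, n_bits=8):
    # Illustrative sketch only: computes sum(w * a) by stepping through
    # the bits of each (assumed unsigned) weight, one bit per cycle.
    acc = 0
    for w, a in zip(weights, activations):
        for b in range(n_bits):
            if (w >> b) & 1 == 0:
                continue  # zero-bit skipping: this cycle does no work
            acc += a << b  # shift-and-add replaces the multiply
    return acc

# Sanity check: 3*2 + 5*4 = 26
assert bit_serial_mac([3, 5], [2, 4]) == 3 * 2 + 5 * 4

The saved cycles correspond exactly to the zero bits, but the positions of the surviving non-zero bits differ per weight, which is what makes the resulting memory accesses irregular.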

As memory accesses are typically the dominant contributor to DNN accelerator performance, this paper introduces a novel computing approach called "bit-column-serial" and a compatible architecture design named "BitWave." BitWave harnesses the advantages of the "bit-column-serial" approach, leveraging structured bit-level sparsity in combination with dynamic dataflow techniques. This reduces both computation and memory footprint through redundant-computation skipping and weight compression. Using a post-training optimization involving selected weight bit-flips, BitWave mitigates the accuracy drop or the need for retraining typically associated with sparsity-enhancing techniques. Empirical studies on four deep-learning benchmarks demonstrate BitWave's achievements: (1) up to 13.25× higher speedup and 7.71× greater efficiency compared to state-of-the-art sparsity-aware accelerators; (2) an area of 1.138 mm² and a power consumption of 17.56 mW in a 16 nm FinFET process node.
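
A hedged sketch of the two key ideas may help. The first function illustrates the bit-column-serial notion: weights are processed in groups, and an entire bit column (one significance position across the group) is skipped only when it is zero for every weight, so the surviving columns keep a regular, index-free layout. The second function is a grossly simplified stand-in for a post-training bit-flip pass that zeroes out nearly empty columns. Group size, loop order, and the flipping heuristic are assumptions for illustration, not the paper's actual algorithm:

def bit_column_serial_mac(weights, activations, n_bits=8):
    # Illustrative sketch: skip whole bit columns rather than single bits.
    acc = 0
    for b in range(n_bits):
        column = [(w >> b) & 1 for w in weights]
        if not any(column):
            # Structured skip: the column is zero for the whole group,
            # so no per-bit index ever needs to be stored.
            continue
        for bit, a in zip(column, activations):
            if bit:
                acc += a << b
    return acc

def flip_straggler_bits(weights, n_bits=8, max_ones=1):
    # Simplified stand-in for the post-training bit-flip optimization:
    # clear columns holding at most `max_ones` set bits, trading a small
    # weight perturbation for a fully skippable column.
    out = list(weights)
    for b in range(n_bits):
        ones = [i for i, w in enumerate(out) if (w >> b) & 1]
        if 0 < len(ones) <= max_ones:
            for i in ones:
                out[i] &= ~(1 << b)  # flip the lone straggler bit to 0
    return out

# Sanity check: 3*2 + 5*4 = 26
assert bit_column_serial_mac([3, 5], [2, 4]) == 3 * 2 + 5 * 4

Because skipping happens per column rather than per bit, compressed weights stay densely packed and memory accesses remain predictable, which is the property the abstract contrasts with conventional zero-bit skipping.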

When

23/02/2024, 11:00–12:00

Where

ESAT Aula L