Research Goal: Computing at the (extreme) edge requires hardware that is both highly energy-efficient and flexible enough to map diverse ML and DL workloads, enabling various applications on a single platform. Moreover, it needs algorithms and models specifically designed for resource-constrained devices, requiring careful co-optimization of hardware and software. This research focuses on the first of these challenges, i.e., the design of energy-efficient and flexible hardware architectures, together with hardware-software co-optimization strategies that enable early design space exploration of such architectures.
Gap in SotA: Most state-of-the-art ML processors are specialized for accelerating a single application/workload, leaving little to no room for future-proofing hardware platforms against evolving ML workloads. As applications and algorithms change rapidly, the need for high-performance, flexible systems-on-chip is becoming prominent.
Results: The research looks into possible design solutions for building single-core flexible hardware accelerators for DL and, through multiple test chips and architectural optimizations, motivates the need for homogeneous and heterogeneous multi-core systems to achieve both flexibility and energy efficiency.
TinyVers: a versatile all-digital heterogeneous multi-core system-on-chip with a highly flexible ML accelerator, a RISC-V core, non-volatile memory, and a power management unit.
DIANA: a heterogeneous multi-core system-on-chip in which a digital accelerator core and an analog in-memory computing core are controlled by a single RISC-V core, enabling highly energy-efficient neural network inference.
PATRONoC: a specialized high-performance network-on-chip optimized for multi-core DNN computing platforms.
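
As a rough illustration of the early design space exploration mentioned above, the sketch below implements a first-order, roofline-style cost model in Python that estimates how a single DNN layer might map onto two hypothetical cores (a digital core and an analog in-memory computing core). All core parameters, layer sizes, and names are illustrative assumptions for the sketch, not measurements from TinyVers, DIANA, or PATRONoC.

```python
from dataclasses import dataclass

@dataclass
class Core:
    """Hypothetical accelerator core; all numbers are illustrative."""
    name: str
    peak_ops_per_cycle: float           # MAC operations per clock cycle
    sram_bandwidth_b_per_cycle: float   # bytes per cycle from local SRAM
    energy_per_op_pj: float             # picojoules per MAC

@dataclass
class Layer:
    """Single DNN layer described by its compute and data footprint."""
    name: str
    macs: float          # total multiply-accumulate operations
    bytes_moved: float   # weight + activation traffic in bytes

def roofline_cycles(layer: Layer, core: Core) -> float:
    """Latency estimate: the layer is either compute- or bandwidth-bound."""
    compute_cycles = layer.macs / core.peak_ops_per_cycle
    memory_cycles = layer.bytes_moved / core.sram_bandwidth_b_per_cycle
    return max(compute_cycles, memory_cycles)

def energy_uj(layer: Layer, core: Core) -> float:
    """First-order energy estimate from the MAC count only."""
    return layer.macs * core.energy_per_op_pj / 1e6

if __name__ == "__main__":
    # Two illustrative cores: a digital SIMD core and an analog IMC core.
    cores = [
        Core("digital", peak_ops_per_cycle=256,
             sram_bandwidth_b_per_cycle=64, energy_per_op_pj=1.0),
        Core("analog-imc", peak_ops_per_cycle=2048,
             sram_bandwidth_b_per_cycle=32, energy_per_op_pj=0.1),
    ]
    # A small convolution layer, sized arbitrarily for the example.
    layer = Layer("conv3x3", macs=4.6e6, bytes_moved=2.0e5)

    for core in cores:
        cycles = roofline_cycles(layer, core)
        print(f"{core.name:>10}: {cycles:9.0f} cycles, "
              f"{energy_uj(layer, core):.2f} uJ")
```

Such a model is only a starting point for exploration; it deliberately ignores mapping, quantization, and interconnect effects, which is where the detailed architectural work on the chips above comes in.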
> V. Jain et al., "TinyVers: A Tiny Versatile System-on-Chip with State-Retentive eMRAM for ML Inference at the Extreme Edge," arXiv preprint arXiv:2301.03537, 2023. Accepted in IEEE Journal of Solid-State Circuits (JSSC), 2023.
> K. Ueyoshi et al., "DIANA: An End-to-End Energy-Efficient Digital and ANAlog Hybrid Neural Network SoC," 2022 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 2022.
> V. Jain et al., "PATRONoC: Parallel AXI Transport Reducing Overhead for Networks-on-Chip Targeting Multi-Accelerator DNN Platforms at the Edge," submitted to DAC 2023.