Heterogeneous Multi-core System-on-Chips for Ultra-Low-Power Machine Learning Applications at the Edge

Pouya Houshmand, Giuseppe Sarda, and Ryan Antonio | Marian Verhelst
Hardware-efficient AI and ML

Research Goal: Computing at the (extreme) edge requires highly energy-efficient yet flexible hardware that can map diverse ML and DL workloads, enabling various applications on a single platform. It also needs algorithms and models designed for resource-constrained devices, which calls for careful co-optimization of hardware and software. This research focuses on the first of these challenges: the design of energy-efficient, flexible hardware architectures, together with hardware-software co-optimization strategies that enable early design space exploration of such architectures.
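To make the last point concrete, the Python sketch below shows what a first-order cost estimate in such an early design space exploration could look like: it sweeps the number of parallel MAC units of a hypothetical accelerator and reports latency and energy for one convolutional layer. The AcceleratorConfig and ConvLayer classes, the energy-per-operation numbers, and the one-access-per-operand memory model are illustrative assumptions, not the group's actual tooling or measured silicon data.

# Minimal sketch of an early design-space-exploration cost model (illustrative only).
from dataclasses import dataclass

@dataclass
class AcceleratorConfig:
    n_macs: int                 # number of parallel MAC units
    clock_hz: float             # clock frequency [Hz]
    e_mac_pj: float             # assumed energy per MAC operation [pJ]
    e_sram_pj_per_byte: float   # assumed energy per on-chip SRAM byte accessed [pJ]

@dataclass
class ConvLayer:
    out_ch: int
    in_ch: int
    k: int                      # kernel size (k x k)
    out_h: int
    out_w: int

def estimate_cost(cfg: AcceleratorConfig, layer: ConvLayer) -> tuple[float, float]:
    """Return (latency_s, energy_j) assuming ideal utilization and one SRAM access per operand."""
    macs = layer.out_ch * layer.in_ch * layer.k * layer.k * layer.out_h * layer.out_w
    # Bytes moved: weights + input activations + output activations (8-bit operands assumed).
    bytes_moved = (layer.out_ch * layer.in_ch * layer.k * layer.k
                   + layer.in_ch * layer.out_h * layer.out_w
                   + layer.out_ch * layer.out_h * layer.out_w)
    latency_s = macs / (cfg.n_macs * cfg.clock_hz)
    energy_j = macs * cfg.e_mac_pj * 1e-12 + bytes_moved * cfg.e_sram_pj_per_byte * 1e-12
    return latency_s, energy_j

if __name__ == "__main__":
    layer = ConvLayer(out_ch=64, in_ch=64, k=3, out_h=56, out_w=56)
    for n_macs in (256, 1024, 4096):
        cfg = AcceleratorConfig(n_macs=n_macs, clock_hz=200e6,
                                e_mac_pj=0.5, e_sram_pj_per_byte=2.0)
        lat, en = estimate_cost(cfg, layer)
        print(f"{n_macs:5d} MACs: {lat*1e3:.2f} ms, {en*1e3:.3f} mJ")

Sweeping such a model over many candidate configurations and layer shapes is what allows architectural trade-offs to be assessed before committing to a detailed hardware design.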

Gap in SotA: Most state-of-the-art ML processors are specialized to accelerate a single application or workload, leaving little to no room for future-proofing hardware platforms against evolving ML workloads. As applications and algorithms change rapidly, the need for high-performance, flexible system-on-chips is becoming prominent.

Results: The research looks into possible design solutions for building single-core flexible hardware accelerators for DL and, through multiple test chips and architectural optimizations, motivates the need for homogeneous and heterogeneous multi-core systems that combine flexibility and energy efficiency.

TinyVers: a versatile all-digital heterogeneous multi-core system-on-chip with a highly flexible ML accelerator, a RISC-V core, non-volatile memory, and a power management unit.

DIANA: a heterogeneous multi-core system-on-chip in which a digital DNN core and an analog in-memory computing core are controlled by a single RISC-V host, yielding high energy efficiency.

PATRONoC: a specialized high-performance network-on-chip optimized for multi-core DNN computing platforms.
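As a hedged illustration of how work might be split across such a heterogeneous platform, the sketch below assigns each DNN layer to either an analog in-memory-computing core or a digital core with a simple heuristic: large, low-precision layers go to the analog core, while small or precision-sensitive layers stay digital. The Layer fields, thresholds, and the assign_core heuristic are invented for illustration and do not describe the schedulers actually used in TinyVers or DIANA.

# Illustrative layer-to-core dispatch heuristic for a heterogeneous SoC (not a real mapper).
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    macs: int           # multiply-accumulate count
    weight_bits: int    # required weight precision

def assign_core(layer: Layer, aimc_min_macs: int = 10_000_000, aimc_max_bits: int = 8) -> str:
    """Return 'aimc' or 'digital': large, low-precision layers amortize the analog array well."""
    if layer.macs >= aimc_min_macs and layer.weight_bits <= aimc_max_bits:
        return "aimc"
    return "digital"

if __name__ == "__main__":
    network = [
        Layer("conv1", macs=118_013_952, weight_bits=8),
        Layer("conv2_small", macs=1_605_632, weight_bits=8),
        Layer("fc_final", macs=512_000, weight_bits=16),
    ]
    for layer in network:
        print(f"{layer.name:12s} -> {assign_core(layer)}")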

Get in touch
Pouya Houshmand
PhD student
Giuseppe Sarda
PhD student
Ryan Antonio
PhD student
Marian Verhelst
Academic staff

Other research topics in Hardware-efficient AI and ML

Automated Causal CNN Scheduling Optimizer for Real-Time Edge Accelerators
Hardware-efficient AI and ML
Jun Yin | Marian Verhelst
A Scalable Heterogenous Multi-accelerator Platform for AI and ML
Hardware-efficient AI and ML
Ryan Antonio | Marian Verhelst
Uncertainty-Aware Design Space Exploration for AI Accelerators
Hardware-efficient AI and ML
Jiacong Sun | Georges Gielen and Marian Verhelst
Integer GEMM Accelerator for SNAX
Hardware-efficient AI and ML
Xiaoling Yi | Marian Verhelst
Improving GPGPU micro architecture for future AI workloads
Hardware-efficient AI and ML
Giuseppe Sarda | Marian Verhelst
SRAM based digital in memory compute macro in 16nm
Hardware-efficient AI and ML
Weijie Jiang | Wim Dehaene
Scalable large array nanopore readouts for proteomics and next-generation sequencing
Analog and power management circuits, Hardware-efficient AI and ML, Biomedical circuits and sensor interfaces
Sander Crols | Filip Tavernier and Marian Verhelst
Design space exploration of in-memory computing DNN accelerators
Hardware-efficient AI and ML
Pouya Houshmand and Jiacong Sun | Marian Verhelst
Multi-core architecture exploration for layer-fused deep learning acceleration
Hardware-efficient AI and ML
Pouya Houshmand and Arne Symons | Marian Verhelst
HW-algorithm co-design for Bayesian inference of probabilistic machine learning
Ultra-low power digital SoCs and memories, Hardware-efficient AI and ML
Shirui Zhao | Marian Verhelst
Design space exploration for machine learning acceleration
Hardware-efficient AI and ML
Arne Symons | Marian Verhelst
Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators
Hardware-efficient AI and ML
Arne Symons | Marian Verhelst
Optimized deployment of AI algorithms on rapidly-changing heterogeneous multi-core compute platforms
Ultra-low power digital SoCs and memories, Hardware-efficient AI and ML
Josse Van Delm | Marian Verhelst
High-throughput high-efficiency SRAM for neural networks
Ultra-low power digital SoCs and memories, Hardware-efficient AI and ML
Wim Dehaene and Marian Verhelst
