Automated Causal CNN Scheduling Optimizer for Real-Time Edge Accelerators

Jun Yin, Marian Verhelst | Hardware-efficient AI and ML

Research Goal: Spatio-Temporal Convolutional Neural Networks (ST-CNNs) extend CNN capabilities from image processing to the recognition of consecutive temporal patterns. Generally, state-of-the-art (SotA) ST-CNNs inflate the feature maps and weights of well-known CNN backbones to represent the additional time dimension. However, edge computing applications suffer tremendously from such large computation and memory overhead. Fortunately, the overlapping nature of ST-CNNs enables various optimizations, such as the dilated causal convolution structure and Depth-First (DF) layer fusion, which reuse computation between time steps and between CNN sliding windows, respectively.
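As a hedged illustration of the reuse that the dilated causal convolution structure enables (this is a toy sketch with invented names and kernel values, not the ACCO implementation): when inputs arrive as a stream, each new time step only needs to compute the single newest output sample, reusing the cached input history instead of re-evaluating the whole window.

```python
# Toy sketch: one dilated causal 1-D convolution evaluated in streaming
# fashion. Each call computes only the newest output sample; earlier
# samples are reused from the cached history.

def causal_dilated_step(history, new_sample, weights, dilation):
    """Append new_sample to history and return (history, newest output).

    weights : kernel taps w[0..K-1]; w[-1] is applied to the newest sample.
    dilation: spacing (in time steps) between consecutive taps.
    A real implementation would keep only the last (K-1)*dilation samples.
    """
    history = history + [new_sample]
    K = len(weights)
    out = 0.0
    for k in range(K):
        # tap k looks back (K-1-k) * dilation steps from "now"
        idx = len(history) - 1 - (K - 1 - k) * dilation
        if idx >= 0:                      # zero-padding before the stream starts
            out += weights[k] * history[idx]
    return history, out

# Stream 6 samples through a 2-tap filter with dilation 2:
# outputs[t] = 0.5 * x[t-2] + 1.0 * x[t]
w = [0.5, 1.0]
hist, outputs = [], []
for x in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:
    hist, y = causal_dilated_step(hist, x, w, dilation=2)
    outputs.append(y)
# → outputs == [1.0, 2.0, 3.5, 5.0, 6.5, 8.0]
```

Per time step this costs K multiply-accumulates instead of recomputing the full output window, which is the kind of inter-time-step reuse the causal structure exposes.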

This research is being carried out in a joint project with Bosch within the EU Marie-Curie Project I-SPOT, within the application domain of automotive acoustic perception.

Gap in SotA: A large amount of computation can be saved through such joint workload-topology and scheduling optimization, without any network retraining. However, existing approaches lack full consideration of ST-CNN features: on the one hand, the dilated causal convolution structure is rigid when optimizing a batch of frames; on the other hand, existing Depth-First (DF) layer-fusion optimizers cannot handle inter-frame causal overlaps or explore better layer-fusion ranges.
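To make the DF layer-fusion idea concrete, here is a hedged toy sketch (illustrative only, not ACCO's scheduler): instead of materializing the full intermediate feature map between two 1-D convolution layers, each output tile is pushed depth-first through both layers back-to-back, so only a small input window is live at any time, and the result matches layer-by-layer execution exactly.

```python
# Toy sketch of depth-first (DF) layer fusion on a 2-layer 1-D conv chain.
# Layer-by-layer execution buffers the full intermediate map; the fused
# schedule only ever holds one small input window per output tile.

def conv1d_valid(x, w):
    """Plain 'valid' 1-D convolution (cross-correlation) with kernel w."""
    K = len(w)
    return [sum(w[k] * x[i + k] for k in range(K))
            for i in range(len(x) - K + 1)]

def layer_by_layer(x, w1, w2):
    # materializes the full intermediate feature map conv1d_valid(x, w1)
    return conv1d_valid(conv1d_valid(x, w1), w2)

def depth_first(x, w1, w2, tile=1):
    K1, K2 = len(w1), len(w2)
    out = []
    # final output sample j depends on inputs x[j : j + K1 + K2 - 1]
    for j in range(0, len(x) - (K1 + K2 - 2), tile):
        window = x[j : j + K1 + K2 - 1 + (tile - 1)]
        out.extend(conv1d_valid(conv1d_valid(window, w1), w2))
    return out

x = [float(i) for i in range(8)]
w1, w2 = [1.0, 1.0], [0.5, 0.5]
assert depth_first(x, w1, w2) == layer_by_layer(x, w1, w2)
```

The `tile` parameter is the knob a DF optimizer tunes: larger tiles amortize the overlapping window halo between tiles, smaller tiles shrink the live buffer, and choosing the fusion range and tile size per layer group is exactly the scheduling decision at stake.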

Recent results: This project has yielded a unified optimization framework, ACCO, which performs joint design space exploration (DSE) over workload transformation, hardware computation scheduling, and memory allocation. ACCO formally extends the DF design space towards ST-CNN workloads with an automatic layer-fusion transformation. Its performance is verified on four case studies, including ablation studies and SotA comparisons, demonstrating ACCO's strength in optimizing representative ST-CNN models for both single-frame and batched-frame operation.
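The flavor of such a DSE loop can be sketched as follows. This is a hypothetical toy (the layer sizes, the 1/8-tile buffer assumption, and the cost model are invented for illustration and are not ACCO's actual formulation): it enumerates every contiguous layer-fusion partition of a small chain and keeps the one with the smallest estimated peak on-chip buffer.

```python
# Hypothetical toy DSE over layer-fusion ranges of a 4-layer chain.
# Cost model (assumed): inside a fused group only a tile (1/8 of the
# feature map) is buffered; at every group boundary the full map is stored.

from itertools import combinations

layer_out_size = [64, 32, 16, 8]       # assumed per-layer output sizes

def partitions(n):
    """All ways to cut a chain of n layers into contiguous fused groups."""
    for cuts in range(n):
        for cut_pts in combinations(range(1, n), cuts):
            bounds = [0, *cut_pts, n]
            yield [list(range(bounds[i], bounds[i + 1]))
                   for i in range(len(bounds) - 1)]

def peak_buffer(groups):
    cost = 0
    for g in groups:
        inner = max(layer_out_size[l] // 8 for l in g)   # tile inside group
        boundary = layer_out_size[g[-1]]                 # full map at handoff
        cost = max(cost, inner, boundary)
    return cost

best = min(partitions(4), key=peak_buffer)
# under this toy cost model, fusing all four layers wins: best == [[0, 1, 2, 3]]
```

A real optimizer like ACCO additionally has to fold in the inter-frame causal overlaps, the hardware scheduling, and the memory allocation, which is what makes the joint exploration non-trivial.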

Get in touch
Jun Yin
PhD student
Marian Verhelst
Academic staff

Publications about this research topic

Yin, Jun, et al. "ACCO: Automated Causal CNN Scheduling Optimizer for Real-Time Edge Accelerators." In Proceedings of the 2023 IEEE 41st International Conference on Computer Design (ICCD). IEEE, 2023.

Other research topics in Hardware-efficient AI and ML

A Scalable Heterogeneous Multi-accelerator Platform for AI and ML
Hardware-efficient AI and ML
Ryan Antonio | Marian Verhelst
Uncertainty-Aware Design Space Exploration for AI Accelerators
Hardware-efficient AI and ML
Jiacong Sun | Georges Gielen and Marian Verhelst
Integer GEMM Accelerator for SNAX
Hardware-efficient AI and ML
Xiaoling Yi | Marian Verhelst
Improving GPGPU microarchitecture for future AI workloads
Hardware-efficient AI and ML
Giuseppe Sarda | Marian Verhelst
SRAM-based digital in-memory compute macro in 16 nm
Hardware-efficient AI and ML
Weijie Jiang | Wim Dehaene
Scalable large array nanopore readouts for proteomics and next-generation sequencing
Analog and power management circuits, Hardware-efficient AI and ML, Biomedical circuits and sensor interfaces
Sander Crols | Filip Tavernier and Marian Verhelst
Design space exploration of in-memory computing DNN accelerators
Hardware-efficient AI and ML
Pouya Houshmand and Jiacong Sun | Marian Verhelst
Multi-core architecture exploration for layer-fused deep learning acceleration
Hardware-efficient AI and ML
Pouya Houshmand and Arne Symons | Marian Verhelst
HW-algorithm co-design for Bayesian inference of probabilistic machine learning
Ultra-low power digital SoCs and memories, Hardware-efficient AI and ML
Shirui Zhao | Marian Verhelst
Design space exploration for machine learning acceleration
Hardware-efficient AI and ML
Arne Symons | Marian Verhelst
Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators
Hardware-efficient AI and ML
Arne Symons | Marian Verhelst
Optimized deployment of AI algorithms on rapidly-changing heterogeneous multi-core compute platforms
Ultra-low power digital SoCs and memories, Hardware-efficient AI and ML
Josse Van Delm | Marian Verhelst
High-throughput high-efficiency SRAM for neural networks
Ultra-low power digital SoCs and memories, Hardware-efficient AI and ML
Wim Dehaene and Marian Verhelst
Heterogeneous Multi-core System-on-Chips for Ultra Low Power Machine Learning Application at the Edge
Hardware-efficient AI and ML
Pouya Houshmand, Giuseppe Sarda, and Ryan Antonio | Marian Verhelst

Want to work with us?

Get in touch or discover the ways we can collaborate.