This research topic focuses on accelerating machine learning workloads through dedicated hardware. This involves exploring different hardware design and mapping options and evaluating them in terms of speed, accuracy, and energy consumption. The goal is to find the best trade-off between these factors, resulting in fast and efficient machine learning accelerators that can serve a variety of applications. The research relies on advanced tools and techniques, including simulation, modeling, and optimization, to guide the design process and ensure that the resulting accelerator fits the targeted use case.
To this end, we develop ZigZag, an open-source hardware-architecture and mapping design space exploration framework for deep learning accelerators. ZigZag relies on an analytical cost model that accurately characterizes the hardware performance of single-core accelerators.
On top of ZigZag, we have built multiple extensions that explore multi-core accelerators, depth-first (layer-fused) processing, novel NN architectures such as transformer networks, and more.
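The core idea behind such a framework, scoring each candidate mapping with a fast analytical cost model rather than cycle-accurate simulation, can be illustrated with a minimal sketch. The classes, cost formulas, and parameters below (loop orders, PE count, reuse factor) are hypothetical simplifications chosen for illustration only; they are not ZigZag's actual API or cost model.

```python
# Conceptual sketch of analytical-cost-model-driven mapping exploration.
# NOTE: hypothetical example, not ZigZag's real API or cost model.
from dataclasses import dataclass
from itertools import permutations

@dataclass
class Mapping:
    loop_order: tuple        # temporal loop order, e.g. ("K", "C", "OX", "OY")
    pe_array_util: float     # assumed spatial utilization of the PE array

def analytical_cost(mapping: Mapping, macs: int, bytes_per_mac: float) -> tuple:
    """Toy analytical model: latency from MAC count and utilization,
    energy from the data movement implied by the loop order."""
    latency = macs / (256 * mapping.pe_array_util)        # 256 PEs assumed
    # Illustrative only: placing "K" deeper in the loop nest increases reuse
    reuse_factor = 1 + mapping.loop_order.index("K")
    energy = macs * bytes_per_mac / reuse_factor
    return energy, latency

def explore(macs: int = 10**9, bytes_per_mac: float = 0.1):
    """Exhaustively enumerate loop orders and keep the best energy-delay product."""
    best = None
    for order in permutations(("K", "C", "OX", "OY")):
        m = Mapping(loop_order=order, pe_array_util=0.85)
        energy, latency = analytical_cost(m, macs, bytes_per_mac)
        edp = energy * latency
        if best is None or edp < best[0]:
            best = (edp, m)
    return best

if __name__ == "__main__":
    edp, mapping = explore()
    print(f"Best loop order: {mapping.loop_order}, EDP: {edp:.3e}")
```

Because the cost model is a closed-form expression rather than a simulation, thousands of such architecture and mapping candidates can be evaluated per second, which is what makes broad design space exploration tractable.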