In recent years, GPGPUs have been driving Machine Learning (ML) research, providing enough performance to scale model sizes for ever more features and intelligence. However, the architecture's flexibility and massive parallelism come with high hardware costs and limited computing efficiency. Moving forward has not been easy, as most of today's GPU research is profit-driven and closed-source, hindering shared, fast-paced development.
In this context, our research aims to advance the state of the art in open-source GPU platforms, contributing to a common, accessible platform for research across the hardware, software, and algorithm stack.
Our focus is on boosting the efficiency of the open-source Vortex GPU by extending the system microarchitecture and improving the software-to-hardware mapping.
G. M. Sarda, N. Shah, D. Bhattacharjee, P. Debacker, and M. Verhelst, "Optimising GPGPU Execution Through Runtime Micro-Architecture Parameter Analysis," 2023 IEEE International Symposium on Workload Characterization (IISWC), Ghent, Belgium, 2023.