Chao Fang specializes in efficient deep neural network (DNN) acceleration, with expertise spanning algorithm design, hardware architecture, and VLSI. His research interests lie in 1) large language model (LLM) acceleration, 2) sparse DNN optimization, 3) next-generation floating-point arithmetic for deep learning, and 4) RISC-V processor integration with domain-specific accelerators. He is proficient in Python, Verilog, SystemVerilog, C/C++, and deep learning frameworks such as PyTorch, and has extensive experience with VLSI design tools.
Chao Fang was born in Guangdong, China, in 1997. He received his B.E. degree from the School of Precision Instrument and Opto-electronics Engineering at Tianjin University, China, in 2019. He is currently pursuing his Ph.D. degree in information and communication engineering at Nanjing University, China, under the supervision of Prof. Zhongfeng Wang, while serving as a visiting Ph.D. student in the MICAS research group at KU Leuven, Belgium, under the supervision of Prof. Marian Verhelst.
His current research interests focus on algorithm-hardware co-optimization for deep neural networks (DNNs), with particular emphasis on efficient hardware architectures for large language models (LLMs), sparsity exploitation in DNNs, and the integration of DNN accelerators with RISC-V processors.