Efficient Hardware Architectures for Neuro-Symbolic AI: Exploiting Sparsity and Accelerating Coupled Workloads

Simon Haepers | Marian Verhelst
Hardware-efficient AI and ML

Research Goal: The current generation of AI, largely driven by Deep Neural Networks (DNNs), is often criticized for its lack of transparency, data inefficiency, and limited robustness. Neuro-Symbolic AI (NeSy) integrates neural perception with explicit symbolic reasoning to overcome these challenges, paving the way for systems that are inherently explainable, safe, and trustworthy. While this fusion offers significant cognitive capabilities, the underlying symbolic and probabilistic components differ vastly from the dense vector operations of DNNs. This research aims to characterize these workloads and design novel computer architectures that can efficiently execute this unique class of hybrid computation.
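To make the coupled workload concrete, below is a minimal, hypothetical sketch in the spirit of MNIST-addition-style neuro-symbolic tasks (the task, function names, and toy classifier are illustrative assumptions, not this project's actual models): a dense, GEMM-friendly neural stage produces digit probabilities, and a probabilistic-symbolic stage accumulates only over the digit pairs that satisfy a logical constraint.

```python
import numpy as np

# Hypothetical neuro-symbolic sketch; names and models are illustrative only.
# Query: P(sum = s) = sum over (d1, d2) with d1 + d2 = s of P(d1|img1) * P(d2|img2).

rng = np.random.default_rng(0)

def neural_digit_classifier(image, weights):
    """Dense, GEMM-friendly perception stage: toy linear classifier + softmax."""
    logits = weights @ image.flatten()
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def probability_of_sum(p1, p2, target_sum):
    """Probabilistic-symbolic stage: sum only over digit pairs that satisfy
    the logical constraint d1 + d2 == target_sum (sparse, data-dependent)."""
    total = 0.0
    for d1 in range(10):
        d2 = target_sum - d1
        if 0 <= d2 <= 9:  # keep only models of the constraint
            total += p1[d1] * p2[d2]
    return total

img1, img2 = rng.random((28, 28)), rng.random((28, 28))
W = rng.standard_normal((10, 28 * 28)) * 0.01  # toy weights

p1 = neural_digit_classifier(img1, W)
p2 = neural_digit_classifier(img2, W)
print("P(sum = 7) =", probability_of_sum(p1, p2, target_sum=7))
```

Even in this toy form, the two stages have very different profiles: the perception stage is a single dense matrix-vector product, while the symbolic stage is a short, branchy, data-dependent accumulation, hinting at why one dense-linear-algebra accelerator serves them unevenly.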

Gap in the SotA: Existing hardware, optimized for dense linear algebra, is severely inefficient for symbolic and probabilistic workloads. These reasoning components are typically memory-bound and characterized by complex, irregular memory access patterns, often resulting in extremely low hardware utilization and high data movement overheads. Crucially, current architectures fail to exploit the inherent, unstructured sparsity prevalent in symbolic and probabilistic reasoning, which is essential for reducing the energy and computational cost associated with these logic-driven tasks. Additionally, efficiently coupling neural and symbolic workloads on heterogeneous platforms remains a key challenge, as the distinct computational characteristics of these components require careful co-design to minimize interface overheads and maximize system-level performance.
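As an illustration of why such reasoning kernels stress the memory system, the hypothetical sketch below (the encoding and all names are assumptions for illustration) evaluates a tiny arithmetic/probabilistic circuit stored as a flat DAG: every node gathers its children through index arrays, so accesses are irregular and data-dependent, while the arithmetic performed per byte moved is small.

```python
import numpy as np

# Hypothetical circuit-evaluation sketch: node i reads its children via
# children[child_ptr[i]:child_ptr[i+1]] (an indirect gather) and applies a
# sum or product, giving irregular, unstructured-sparse memory accesses.

def evaluate_circuit(values, op, child_ptr, children, first_internal):
    for i in range(first_internal, len(op)):
        kids = children[child_ptr[i]:child_ptr[i + 1]]  # irregular, indirect gather
        gathered = values[kids]
        values[i] = gathered.sum() if op[i] == 0 else gathered.prod()
    return values[-1]  # the last node is the root

# Tiny circuit: leaves 0-3 hold input probabilities,
# node 4 = leaf0 * leaf1, node 5 = leaf2 * leaf3, node 6 (root) = node4 + node5.
values    = np.array([0.9, 0.5, 0.1, 0.5, 0.0, 0.0, 0.0])
op        = np.array([-1, -1, -1, -1, 1, 1, 0])   # -1 = leaf, 1 = product, 0 = sum
child_ptr = np.array([0, 0, 0, 0, 0, 2, 4, 6])
children  = np.array([0, 1, 2, 3, 4, 5])

print("root value:", evaluate_circuit(values, op, child_ptr, children, first_internal=4))
```

On hardware tuned for dense GEMM, gathers and short reductions like these leave most compute lanes idle, which is the utilization and data-movement gap this research targets.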

Get in touch
Simon Haepers
PhD student
Marian Verhelst
Academic staff

Other research topics in Hardware-efficient AI and ML

Vertically-Integrated Logic Fabrics for Future 3D Computing Platforms
Hardware-efficient AI and ML
Jannes Willemen | Marian Verhelst
Precision-Scalable Microscaling Hardware for Continual Learning at the Edge
Hardware-efficient AI and ML
Stef Cuyckens | Marian Verhelst
XDMA: A Distributed DMA for Flexible and Efficient Data Movement in Heterogeneous Multi-Accelerator SoCs
Hardware-efficient AI and ML
Yunhao Deng and Fanchen Kong | Marian Verhelst
Anda: Unlocking Efficient LLM Inference with a Variable-Length Grouped Activation Data Format
Hardware-efficient AI and ML
Man Shi, Arne Symons, Robin Geens, and Chao Fang | Marian Verhelst
Massive parallelism for combinatorial optimisation problems
Hardware-efficient AI and ML
Toon Bettens and Sofie De Weer | Wim Dehaene and Marian Verhelst
Carbon-aware Design Space Exploration for AI Accelerators
Hardware-efficient AI and ML
Jiacong Sun | Georges Gielen and Marian Verhelst
Decoupled Control Flow and Memory Orchestration in the Vortex GPGPU
Hardware-efficient AI and ML
Giuseppe Sarda | Marian Verhelst
Automated Causal CNN Scheduling Optimizer for Real-Time Edge Accelerators
Hardware-efficient AI and ML
Jun Yin | Marian Verhelst
A Scalable Heterogeneous Multi-accelerator Platform for AI and ML
Hardware-efficient AI and ML
Ryan Antonio | Marian Verhelst
Uncertainty-Aware Design Space Exploration for AI Accelerators
Hardware-efficient AI and ML
Jiacong Sun and Fanchen Kong | Georges Gielen and Marian Verhelst
Integer GEMM Accelerator for SNAX
Hardware-efficient AI and ML
Xiaoling Yi | Marian Verhelst
Improving GPGPU microarchitecture for future AI workloads
Hardware-efficient AI and ML
Giuseppe Sarda | Marian Verhelst
SRAM-based digital in-memory compute macro in 16nm
Hardware-efficient AI and ML
Weijie Jiang | Wim Dehaene
Scalable large array nanopore readouts for proteomics and next-generation sequencing
Analog and power management circuits, Hardware-efficient AI and ML, Biomedical circuits and sensor interfaces
Sander Crols | Filip Tavernier and Marian Verhelst
Design space exploration of in-memory computing DNN accelerators
Hardware-efficient AI and ML
Pouya Houshmand and Jiacong Sun | Marian Verhelst
Multi-core architecture exploration for layer-fused deep learning acceleration
Hardware-efficient AI and ML
Arne Symons | Marian Verhelst
HW-algorithm co-design for Bayesian inference of probabilistic machine learning
Ultra-low power digital SoCs and memories, Hardware-efficient AI and ML
Shirui Zhao | Marian Verhelst
Design space exploration for machine learning acceleration
Hardware-efficient AI and ML
Arne Symons | Marian Verhelst
Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators
Hardware-efficient AI and ML
Arne Symons | Marian Verhelst
Optimized deployment of AI algorithms on rapidly-changing heterogeneous multi-core compute platforms
Ultra-low power digital SoCs and memories, Hardware-efficient AI and ML
Josse Van Delm | Marian Verhelst
High-throughput high-efficiency SRAM for neural networks
Ultra-low power digital SoCs and memories, Hardware-efficient AI and ML
Wim Dehaene and Marian Verhelst
Heterogeneous Multi-core Systems-on-Chip for Ultra-Low-Power Machine Learning Applications at the Edge
Hardware-efficient AI and ML
Pouya Houshmand, Giuseppe Sarda, and Ryan Antonio | Marian Verhelst
