Research Goal: The current generation of AI, largely driven by Deep Neural Networks (DNNs), is often criticized for its lack of transparency, data inefficiency, and limited robustness. Neuro-Symbolic AI (NeSy) integrates neural perception with explicit symbolic reasoning to overcome these challenges, paving the way for systems that are inherently explainable, safe, and trustworthy. While this fusion promises significant gains in reasoning capability, the computational patterns of its symbolic and probabilistic components differ vastly from the dense tensor operations of DNNs. This research aims to characterize these workloads and to design novel computer architectures that can efficiently execute this unique class of hybrid computation.
Gap in the SotA: Existing hardware, optimized for dense linear algebra, is severely inefficient for symbolic and probabilistic workloads. These reasoning components are typically memory-bound and exhibit complex, irregular memory access patterns, often resulting in extremely low hardware utilization and high data-movement overheads. Crucially, current architectures fail to exploit the inherent, unstructured sparsity prevalent in symbolic and probabilistic reasoning, even though doing so is essential for reducing the energy and computational cost of these logic-driven tasks. Additionally, efficiently coupling neural and symbolic workloads on heterogeneous platforms remains a key challenge: the distinct computational characteristics of the two components demand careful co-design to minimize interface overheads and maximize system-level performance.
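To make this characterization concrete, the sketch below evaluates a toy probabilistic circuit (sum-product network), a representative probabilistic-reasoning kernel. It is a minimal illustration, not an artifact of this research; the flat node encoding and the example circuit are assumptions chosen for brevity. Unlike a dense matrix multiplication, each node gathers values from an arbitrary, sparse set of children, so the memory access pattern is data-dependent pointer-chasing.

```python
# Minimal sketch (illustrative only): bottom-up evaluation of a small
# probabilistic circuit in log space. Each node reads an arbitrary set of
# child values (an irregular gather), the access pattern dense-linear-algebra
# hardware handles poorly.
import math

# Hypothetical flat encoding: (type, child indices, mixture weights).
# Leaves take their log-probabilities from the input directly.
circuit = [
    ("leaf", [], None),            # 0: log P(x0)
    ("leaf", [], None),            # 1: log P(not x0)
    ("leaf", [], None),            # 2: log P(x1)
    ("prod", [0, 2], None),        # 3: x0 AND x1
    ("prod", [1, 2], None),        # 4: (not x0) AND x1
    ("sum",  [3, 4], [0.6, 0.4]),  # 5: weighted mixture (root)
]

def evaluate(circuit, leaf_logprobs):
    """Single bottom-up pass; nodes are assumed topologically ordered."""
    vals = [0.0] * len(circuit)
    for i, (ntype, children, weights) in enumerate(circuit):
        if ntype == "leaf":
            vals[i] = leaf_logprobs[i]
        elif ntype == "prod":
            # Product node: sum of child log-values (sparse, irregular gather).
            vals[i] = sum(vals[c] for c in children)
        else:
            # Sum node: numerically stable log-sum-exp over weighted children.
            terms = [math.log(w) + vals[c] for w, c in zip(weights, children)]
            m = max(terms)
            vals[i] = m + math.log(sum(math.exp(t - m) for t in terms))
    return vals[-1]  # log-probability at the root

print(evaluate(circuit, {0: math.log(0.7), 1: math.log(0.3), 2: math.log(0.9)}))
```

Each node performs only a handful of arithmetic operations on scattered operands, so execution time is dominated by data movement rather than compute, which is exactly the memory-bound, low-utilization behavior described above.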