Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning @ AAAI 2026
Overview
The success of neural networks has been witnessed in various domains, e.g., human-like question answering, playing games (Schrittwieser et al. 2020), solving IMO tasks (Trinh et al. 2024), and code generation (Zhu et al. 2025). However, these exciting successes are accompanied by LLMs’ unpredictable behaviours (Park et al. 2024), errors in simple abstract reasoning (Lampinen et al. 2024), and the irrationality of producing correct answers with incorrect explanations (Creswell, Shanahan, and Higgins 2023). Though breaking a complex reasoning task into multiple steps improves reasoning performance (Wei et al. 2022a), and code prompts may significantly improve the causal reasoning performance of LLMs (Liu et al. 2025b), it remains unclear whether they can achieve the rigour of symbolic reasoning or, more radically, whether LLMs reason at all (Mitchell 2023).
This brings unpredictable risks to our society (Bengio et al. 2024).
Following the Sphere Neural Network, the first neural network to achieve the rigour of syllogistic reasoning without training data (Dong, Jamnik, and Liò 2025), this workshop invites theorists and practitioners in NLP and KG reasoning to reconsider a variety of problems, for example, using the novel neural-symbolic collaborative distillation (NesyCD) method to learn the complex reasoning abilities of LLMs (Liao et al. 2025).
The state of the art in deep learning for NLP and KG reasoning shows that there are many open research questions to be addressed at the interface of symbolic and neural approaches, and that bridging neurons and symbols may break the glass ceiling of deep learning for NLP (Dong et al. 2024).
This workshop provides a platform for AI/ML practitioners and theorists to find new paths towards reliable and safe AI.
Topics
This workshop aims to enable an exchange of ideas to enhance the determinacy of neural networks in NLP and knowledge graph reasoning, and to promote AI’s explainability, reliability, and safety. Topics include, but are not limited to: (1) Using KGs or symbolic knowledge to improve the quality of LLMs; (2) Using symbolic knowledge and natural language to interpret the reasoning mechanisms of LLMs; (3) Distilling symbolic knowledge from LLMs; (4) Datasets and evaluation metrics supporting neurosymbolic approaches to NLP tasks; (5) New unified representations for symbolic and neural meaning representation; (6) Innovative applications of hybrid models combining symbolic and neural architectures.