Overview

Recent studies show that LLMs such as ChatGPT may pass the Turing test in human-like chatting, yet have limited capability even on simple reasoning tasks (Biever, 2023). It remains unclear whether LLMs truly reason (Mitchell, 2023). Human reasoning has been characterized as a dual-process phenomenon (Sun, 2023) or as an interplay of fast and slow thinking (Kahneman, 2011). These findings suggest two directions for exploring neural reasoning: starting from existing neural networks and enhancing their reasoning performance toward symbolic-level reasoning, or starting from symbolic reasoning and exploring novel neural implementations of it. Ideally, the two directions will meet somewhere in the middle, leading to representations that can act as a bridge for novel neural computing, which qualitatively differs from traditional neural networks, and for novel symbolic computing, which inherits the strengths of neural computing. Hence the name of our workshop, with a focus on Natural Language Processing and Knowledge Graph reasoning. The workshop promotes research in both directions, and particularly seeks novel proposals in the second one.

For paper submissions, please use the following link: Submission Link.

Invited Speakers

Pascale Fung

The Hong Kong University of Science and Technology

Human Value Representation in Large Language Models - Bridging the Neural and the Symbolic

Abstract: The widespread application of Large Language Models (LLMs) across many areas and fields has necessitated their explicit alignment to human values and preferences. LLMs have learned human values from their pre-training data, through Reinforcement Learning with Human Feedback (RLHF), and through other forms of value fine-tuning. Nevertheless, we lack a systematic way of analyzing the scope and distribution of the human values embedded in LLMs. One can prompt LLMs with surveys of value-relevant questions for analysis and comparison, but surveys are a form of sparse sampling. In this talk, I will present UniVar, a high-dimensional representation of human values trained from a value taxonomy and 8 different language models in 6 different languages, representing a sampling of the world’s cultures. We then show the UniVar representation distributions of 4 LLMs, namely ChatGPT, Llama 2, Sola, and Yi, in English, Chinese, Japanese, Indonesian, Arabic, and French, which clearly demonstrate the proximity of cultures that share similar values, such as Chinese and Japanese, or Indonesian and Arabic. This is the first time a high-dimensional neural representation has been shown to effectively generalize the survey-based symbolic representation of human values.
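
The abstract describes the overall pattern: sample value-relevant answers from several LLMs and compare them in a shared representation space. The Python sketch below illustrates only that pattern, not the actual UniVar training procedure; `query_llm`, the question list, and the model names are hypothetical placeholders, and an off-the-shelf sentence encoder stands in for the learned value representation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

VALUE_QUESTIONS = [
    "How important is it to respect your elders?",
    "Should individual freedom outweigh community harmony?",
]
MODELS = ["chatgpt", "llama-2"]

def query_llm(model_name: str, question: str) -> str:
    # Hypothetical stand-in: replace with a real API call to each model.
    return f"[{model_name}] placeholder answer to: {question}"

# Off-the-shelf sentence encoder, standing in for a learned value representation.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def value_profile(model_name: str) -> np.ndarray:
    # Mean embedding of a model's answers to the value-relevant questions.
    answers = [query_llm(model_name, q) for q in VALUE_QUESTIONS]
    return embedder.encode(answers).mean(axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

profiles = {m: value_profile(m) for m in MODELS}
# Similar value profiles would yield high cosine similarity.
print(cosine(profiles["chatgpt"], profiles["llama-2"]))
```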

Juanzi Li

Tsinghua University

Neural-symbolic Programming for Explainable Knowledge-intensive Question Answering

Abstract: Explainable knowledge-intensive QA aims to return not only an accurate answer but also an explicit reasoning process, which enhances the interpretability and reliability of QA systems. However, state-of-the-art large language models suffer from the notorious hallucination problem, and knowledge-graph-based methods, such as question semantic parsing, face generalization issues. In this talk, I will present our neural-symbolic framework for explainable knowledge-intensive QA. Specifically, I will introduce our experience with knowledge-oriented programming, automatic program induction, and probabilistic tree-of-thought reasoning, which integrate the parametric knowledge of LLMs with retrieved textual knowledge.
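
As a minimal sketch of the general neural-symbolic QA pattern, the Python snippet below executes a symbolic program over a toy knowledge graph and records every step as an explicit reasoning trace. This is not the framework presented in the talk: the program is hard-coded here, whereas in practice it would be induced by an LLM, and all entities and triples are invented for illustration.

```python
from typing import List, Set, Tuple

# Toy knowledge graph: (head, relation, tail) triples.
KG: Set[Tuple[str, str, str]] = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
}

def follow(entities: Set[str], relation: str) -> Set[str]:
    # One symbolic step: follow `relation` edges from `entities`.
    return {t for (h, r, t) in KG if h in entities and r == relation}

def execute(start: str, program: List[str]) -> Tuple[Set[str], List[str]]:
    # Run a relation-path program, keeping a human-readable trace.
    entities, trace = {start}, []
    for relation in program:
        nxt = follow(entities, relation)
        trace.append(f"{sorted(entities)} --{relation}--> {sorted(nxt)}")
        entities = nxt
    return entities, trace

# "Which country contains the city where Marie Curie was born?"
answer, trace = execute("Marie Curie", ["born_in", "capital_of"])
print(answer)            # {'Poland'}
print("\n".join(trace))  # the explicit reasoning process
```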

Alessandro Lenci

University of Pisa

The Semantic Gap in LLMs and How to Bridge It

Abstract: The unprecedented success of LLMs in carrying out linguistic interactions disguises the fact that, on closer inspection, their knowledge of meaning and their inference abilities are still quite limited and differ from human ones. They generate human-like texts but still fall short of fully understanding them. I will refer to this as the “semantic gap” of LLMs. Some claim that this gap stems from the lack of grounding in text-only LLMs. I argue instead that the problem lies in the very type of representations these models acquire: they learn highly complex association spaces that correspond only partially to truly semantic and inferential ones. This prompts the need to investigate the missing links that could bridge the gap between LLMs as sophisticated statistical engines and full-fledged semantic agents.

Volker Tresp

Ludwig-Maximilians-University Munich

The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding

Abstract: We discuss a unified computational theory of an agent’s perception and memory. In our model, both perception and memory are realized by different operational modes of the oscillating interaction between a symbolic index layer and a subsymbolic representation layer. The symbolic index layer contains indices for the concepts, predicates, and episodic instances known to the agent. The index layer labels the activation pattern in the representation layer and then feeds the embedding of that label back to the representation layer. The embedding vectors are implemented as the connection weights linking the two layers. An index is a focal point of activity and competes with other indices. Embeddings have an integrative character: the embedding vector for a concept index integrates all that is known about that concept, and the embedding vector for an episodic index represents the world state at that instance. The subsymbolic representation layer is the main communication platform; in cognitive neuroscience it would correspond to what some authors call the “mental canvas” or the “global workspace”, and it reflects the cognitive brain state. In bottom-up mode, scene inputs activate the representation layer, which then activates the index layer. In top-down mode, an index activates the representation layer, which might subsequently activate even earlier processing layers; this last process is called the embodiment of a concept.
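
A minimal Python sketch of the two-layer interaction described above, under simplifying assumptions: random embedding vectors, a plain argmax in place of the competition between indices, and a single bottom-up/top-down oscillation. Episodic indices, predicates, and earlier processing layers are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
CONCEPTS = ["dog", "cat", "car"]
# Embedding vectors = connection weights between index and representation layer.
E = rng.standard_normal((len(CONCEPTS), DIM))

def bottom_up(representation: np.ndarray) -> int:
    # Indices compete; the best-matching concept labels the activation pattern.
    scores = E @ representation
    return int(np.argmax(scores))

def top_down(index: int, representation: np.ndarray, rate: float = 0.5) -> np.ndarray:
    # Feed the winning label's embedding back into the representation layer.
    return (1 - rate) * representation + rate * E[index]

# One bottom-up / top-down oscillation on a noisy "scene input".
scene = E[0] + 0.3 * rng.standard_normal(DIM)  # input resembling "dog"
winner = bottom_up(scene)
scene = top_down(winner, scene)
print(CONCEPTS[winner])  # expected: 'dog', the index that won the competition
```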

Organizers

Tiansi Dong

Fraunhofer IAIS

Erhard Hinrichs

University of Tübingen

Zhen Han

Amazon Inc.

Kang Liu

Chinese Academy of Sciences

Yangqiu Song

The Hong Kong University of Science and Technology

Yixin Cao

Singapore Management University

Christian F. Hempelmann

Texas A&M-Commerce

Rafet Sifa

University of Bonn